Record fields: nwo | sha | path | language | identifier | parameters | return_statement | docstring_summary | function | url
nwo: kubernetes-client/python
sha: 47b9da9de2d02b2b7a34fbe05afb44afd130d73a
path: kubernetes/client/api/rbac_authorization_v1_api.py
language: python
identifier: RbacAuthorizationV1Api.list_cluster_role
parameters: (self, **kwargs)
return_statement: return self.list_cluster_role_with_http_info(**kwargs)
docstring_summary: list_cluster_role  # noqa: E501
url: https://github.com/kubernetes-client/python/blob/47b9da9de2d02b2b7a34fbe05afb44afd130d73a/kubernetes/client/api/rbac_authorization_v1_api.py#L1970-L2002

function:

```python
def list_cluster_role(self, **kwargs):  # noqa: E501
    """list_cluster_role  # noqa: E501

    list or watch objects of kind ClusterRole  # noqa: E501
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass async_req=True

    >>> thread = api.list_cluster_role(async_req=True)
    >>> result = thread.get()

    :param async_req bool: execute request asynchronously
    :param str pretty: If 'true', then the output is pretty printed.
    :param bool allow_watch_bookmarks: allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
    :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
    :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
    :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
    :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
    :param str resource_version: resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
    :param str resource_version_match: resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
    :param int timeout_seconds: Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
    :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
    :param _preload_content: if False, the urllib3.HTTPResponse object will
                             be returned without reading/decoding response
                             data. Default is True.
    :param _request_timeout: timeout setting for this request. If one
                             number provided, it will be total request
                             timeout. It can also be a pair (tuple) of
                             (connection, read) timeouts.
    :return: V1ClusterRoleList
             If the method is called asynchronously,
             returns the request thread.
    """
    kwargs['_return_http_data_only'] = True
    return self.list_cluster_role_with_http_info(**kwargs)
```
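For context, a minimal usage sketch of the method above (not part of the dataset record; it assumes the `kubernetes` package is installed and a cluster is reachable via a local kubeconfig):

```python
# Minimal sketch: list ClusterRoles synchronously and asynchronously.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.RbacAuthorizationV1Api()

# Synchronous call: returns a V1ClusterRoleList directly.
cluster_roles = api.list_cluster_role(limit=10)
for role in cluster_roles.items:
    print(role.metadata.name)

# Asynchronous call: returns a thread; .get() blocks for the same result type.
thread = api.list_cluster_role(async_req=True)
result = thread.get()
print(len(result.items))
```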
nwo: SanPen/GridCal
sha: d3f4566d2d72c11c7e910c9d162538ef0e60df31
path: src/GridCal/Gui/Main/GridCalMain.py
language: python
identifier: MainGUI.post_contingency_analysis
parameters: (self)
return_statement: (none)
docstring_summary: Action performed after the contingency analysis.
url: https://github.com/SanPen/GridCal/blob/d3f4566d2d72c11c7e910c9d162538ef0e60df31/src/GridCal/Gui/Main/GridCalMain.py#L2800-L2838

function:

```python
def post_contingency_analysis(self):
    """
    Action performed after the contingency analysis.
    Returns:
    """
    drv, results = self.session.get_driver_results(sim.SimulationTypes.ContingencyAnalysis_run)
    self.remove_simulation(sim.SimulationTypes.ContingencyAnalysis_run)

    # update the results in the circuit structures
    if not drv.__cancel__:
        if results is not None:
            self.ui.progress_label.setText('Colouring contingency analysis results in the grid...')
            QtGui.QGuiApplication.processEvents()

            if self.ui.draw_schematic_checkBox.isChecked():
                if results.S.shape[0] > 0:
                    viz.colour_the_schematic(circuit=self.circuit,
                                             Sbus=results.S[0, :],  # same injection for all the contingencies
                                             Sf=np.abs(results.Sf).max(axis=1),
                                             voltages=results.voltage.max(axis=0),
                                             loadings=np.abs(results.loading).max(axis=1),
                                             types=results.bus_types,
                                             use_flow_based_width=self.ui.branch_width_based_on_flow_checkBox.isChecked(),
                                             min_branch_width=self.ui.min_branch_size_spinBox.value(),
                                             max_branch_width=self.ui.max_branch_size_spinBox.value(),
                                             min_bus_width=self.ui.min_node_size_spinBox.value(),
                                             max_bus_width=self.ui.max_node_size_spinBox.value())
                else:
                    info_msg('Cannot colour because there are no branches :/')

            self.update_available_results()
            self.colour_now()
        else:
            error_msg('Something went wrong, there are no contingency analysis results.')

    if not self.session.is_anything_running():
        self.UNLOCK()
```
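The `max(axis=...)` reductions in the call above collapse per-contingency results to a single worst case per element before colouring. A standalone sketch of that aggregation with made-up shapes (the axis conventions, branches x contingencies for `Sf`/`loading` and contingencies x buses for `voltage`, are inferred from the call, not from GridCal's documentation):

```python
# Minimal sketch of the worst-case aggregation; shapes are assumptions.
import numpy as np

n_branch, n_bus, n_cont = 4, 3, 5
rng = np.random.default_rng(0)

Sf = rng.normal(size=(n_branch, n_cont)) + 1j * rng.normal(size=(n_branch, n_cont))
loading = rng.normal(size=(n_branch, n_cont))
voltage = rng.uniform(0.9, 1.1, size=(n_cont, n_bus))

worst_flow = np.abs(Sf).max(axis=1)          # worst |flow| per branch across contingencies
worst_loading = np.abs(loading).max(axis=1)  # worst |loading| per branch
worst_voltage = voltage.max(axis=0)          # highest voltage per bus

print(worst_flow.shape, worst_loading.shape, worst_voltage.shape)
```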
nwo: earthgecko/skyline
sha: 12754424de72593e29eb21009fb1ae3f07f3abff
path: skyline/custom_algorithms/m66.py
language: python
identifier: m66
parameters: (current_skyline_app, parent_pid, timeseries, algorithm_parameters)
return_statement: return (anomalous, anomalyScore)
docstring_summary: Time series data points are anomalous if the 6th median is 6 standard deviations (six-sigma) from the time series 6th-median standard deviation and the deviation persists for x_windows, where x_windows = int(window / 2). This algorithm finds SIGNIFICANT changepoints in a time series, similar to PELT and Bayesian Online Changepoint Detection, however it is more robust to instantaneous outliers and more conditionally selective of changepoints.

function:
```python
def m66(current_skyline_app, parent_pid, timeseries, algorithm_parameters):
    """
    Time series data points are anomalous if the 6th median is 6 standard
    deviations (six-sigma) from the time series 6th-median standard deviation
    and the deviation persists for x_windows, where x_windows = int(window / 2).
    This algorithm finds SIGNIFICANT changepoints in a time series, similar to
    PELT and Bayesian Online Changepoint Detection, however it is more robust to
    instantaneous outliers and more conditionally selective of changepoints.

    :param current_skyline_app: the Skyline app executing the algorithm. This
        will be passed to the algorithm by Skyline. This is **required** for
        error handling and logging. You do not have to worry about handling the
        argument in the scope of the custom algorithm itself, but the algorithm
        must accept it as the first argument.
    :param parent_pid: the parent pid which is executing the algorithm, this is
        **required** for error handling and logging. You do not have to worry
        about handling this argument in the scope of the algorithm, but the
        algorithm must accept it as the second argument.
    :param timeseries: the time series as a list e.g. ``[[1578916800.0, 29.0],
        [1578920400.0, 55.0], ... [1580353200.0, 55.0]]``
    :param algorithm_parameters: a dictionary of any required parameters for the
        custom_algorithm and algorithm itself, for example:
        ``algorithm_parameters={
            'nth_median': 6,
            'sigma': 6,
            'window': 5,
            'return_anomalies': True,
        }``
    :type current_skyline_app: str
    :type parent_pid: int
    :type timeseries: list
    :type algorithm_parameters: dict
    :return: True, False or None
    :rtype: boolean

    Example CUSTOM_ALGORITHMS configuration:

    'm66': {
        'namespaces': [
            'skyline.analyzer.run_time', 'skyline.analyzer.total_metrics',
            'skyline.analyzer.exceptions'
        ],
        'algorithm_source': '/opt/skyline/github/skyline/skyline/custom_algorithms/m66.py',
        'algorithm_parameters': {
            'nth_median': 6, 'sigma': 6, 'window': 5, 'resolution': 60,
            'minimum_sparsity': 0, 'determine_duration': False,
            'return_anomalies': True, 'save_plots_to': False,
            'save_plots_to_absolute_dir': False, 'filename_prefix': False
        },
        'max_execution_time': 1.0,
        'consensus': 1,
        'algorithms_allowed_in_consensus': ['m66'],
        'run_3sigma_algorithms': False,
        'run_before_3sigma': False,
        'run_only_if_consensus': False,
        'use_with': ['crucible', 'luminosity'],
        'debug_logging': False,
    },
    """
    # You MUST define the algorithm_name
    algorithm_name = 'm66'

    # Define the default state of None and None, anomalous does not default to
    # False as that is not correct, False is only correct if the algorithm
    # determines the data point is not anomalous. The same is true for the
    # anomalyScore.
    anomalous = None
    anomalyScore = None

    return_anomalies = False
    anomalies = []
    anomalies_dict = {}
    anomalies_dict['algorithm'] = algorithm_name
    realtime_analysis = False

    current_logger = None
    dev_null = None

    # If you wanted to log, you can but this should only be done during
    # testing and development
    def get_log(current_skyline_app):
        current_skyline_app_logger = current_skyline_app + 'Log'
        current_logger = logging.getLogger(current_skyline_app_logger)
        return current_logger

    start = timer()

    # Use the algorithm_parameters to determine the sample_period
    debug_logging = None
    try:
        debug_logging = algorithm_parameters['debug_logging']
    except:
        debug_logging = False
    if debug_logging:
        try:
            current_logger = get_log(current_skyline_app)
            current_logger.debug('debug :: %s :: debug_logging enabled with algorithm_parameters - %s' % (
                algorithm_name, str(algorithm_parameters)))
        except Exception as e:
            # This except pattern MUST be used in ALL custom algorithms to
            # facilitate the traceback from any errors. We want the algorithm
            # to run super fast and without spamming the log with lots of
            # errors. But we do not want the function returning and not
            # reporting anything to the log, so the pythonic except is used to
            # "sample" any algorithm errors to a tmp file and report once per
            # run rather than spewing tons of errors into the log e.g.
            # analyzer.log
            dev_null = e
            record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
            # Return None and None as the algorithm could not determine True or False
            del dev_null
            if current_skyline_app == 'webapp':
                return (anomalous, anomalyScore, anomalies, anomalies_dict)
            if return_anomalies:
                return (anomalous, anomalyScore, anomalies)
            return (anomalous, anomalyScore)

    # Allow the m66 parameters to be passed in the algorithm_parameters
    window = 6
    try:
        window = algorithm_parameters['window']
    except KeyError:
        window = 6
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    nth_median = 6
    try:
        nth_median = algorithm_parameters['nth_median']
    except KeyError:
        nth_median = 6
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    n_sigma = 6
    try:
        n_sigma = algorithm_parameters['sigma']
    except KeyError:
        n_sigma = 6
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    resolution = 0
    try:
        resolution = algorithm_parameters['resolution']
    except KeyError:
        resolution = 0
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    determine_duration = False
    try:
        determine_duration = algorithm_parameters['determine_duration']
    except KeyError:
        determine_duration = False
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    minimum_sparsity = 0
    try:
        minimum_sparsity = algorithm_parameters['minimum_sparsity']
    except KeyError:
        minimum_sparsity = 0
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    shift_to_start_of_window = True
    try:
        shift_to_start_of_window = algorithm_parameters['shift_to_start_of_window']
    except KeyError:
        shift_to_start_of_window = True
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    save_plots_to = False
    try:
        save_plots_to = algorithm_parameters['save_plots_to']
    except KeyError:
        save_plots_to = False
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    save_plots_to_absolute_dir = False
    try:
        save_plots_to_absolute_dir = algorithm_parameters['save_plots_to_absolute_dir']
    except KeyError:
        save_plots_to_absolute_dir = False
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    filename_prefix = False
    try:
        filename_prefix = algorithm_parameters['filename_prefix']
    except KeyError:
        filename_prefix = False
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    if debug_logging:
        current_logger.debug('debug :: algorithm_parameters :: %s' % (
            str(algorithm_parameters)))

    return_anomalies = False
    try:
        return_anomalies = algorithm_parameters['return_anomalies']
    except KeyError:
        return_anomalies = False
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    try:
        realtime_analysis = algorithm_parameters['realtime_analysis']
    except KeyError:
        realtime_analysis = False
    except Exception as e:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        dev_null = e

    try:
        base_name = algorithm_parameters['base_name']
    except Exception as e:
        # This except pattern MUST be used in ALL custom algorithms to
        # facilitate the traceback from any errors. We want the algorithm
        # to run super fast and without spamming the log with lots of
        # errors. But we do not want the function returning and not
        # reporting anything to the log, so the pythonic except is used to
        # "sample" any algorithm errors to a tmp file and report once per
        # run rather than spewing tons of errors into the log e.g.
        # analyzer.log
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        # Return None and None as the algorithm could not determine True or False
        dev_null = e
        del dev_null
        if current_skyline_app == 'webapp':
            return (anomalous, anomalyScore, anomalies, anomalies_dict)
        if return_anomalies:
            return (False, None, anomalies)
        return (False, None)
    if debug_logging:
        current_logger.debug('debug :: %s :: base_name - %s' % (
            algorithm_name, str(base_name)))

    anomalies_dict['metric'] = base_name
    anomalies_dict['anomalies'] = {}

    use_bottleneck = True
    if save_plots_to:
        use_bottleneck = False
    if use_bottleneck:
        import bottleneck as bn

    # ALWAYS WRAP YOUR ALGORITHM IN try and the BELOW except
    try:
        start_preprocessing = timer()

        # INFO: Sorting time series of 10079 data points took 0.002215 seconds
        timeseries = sorted(timeseries, key=lambda x: x[0])
        if debug_logging:
            current_logger.debug('debug :: %s :: time series of length - %s' % (
                algorithm_name, str(len(timeseries))))

        # Testing the data to ensure it meets minimum requirements, in the case
        # of Skyline's use of the m66 algorithm this means that:
        # - the time series must have at least 75% of its full_duration
        do_not_use_sparse_data = False
        if current_skyline_app == 'luminosity':
            do_not_use_sparse_data = True
        if minimum_sparsity == 0:
            do_not_use_sparse_data = False

        total_period = 0
        total_datapoints = 0

        calculate_variables = False
        if do_not_use_sparse_data:
            calculate_variables = True
        if determine_duration:
            calculate_variables = True

        if calculate_variables:
            try:
                start_timestamp = int(timeseries[0][0])
                end_timestamp = int(timeseries[-1][0])
                total_period = end_timestamp - start_timestamp
                total_datapoints = len(timeseries)
            except SystemExit as e:
                if debug_logging:
                    current_logger.debug('debug_logging :: %s :: SystemExit called, exiting - %s' % (
                        algorithm_name, e))
                if current_skyline_app == 'webapp':
                    return (anomalous, anomalyScore, anomalies, anomalies_dict)
                if return_anomalies:
                    return (anomalous, anomalyScore, anomalies)
                return (anomalous, anomalyScore)
            except:
                traceback_msg = traceback.format_exc()
                record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback_msg)
                if debug_logging:
                    current_logger.error(traceback_msg)
                    current_logger.error('error :: debug_logging :: %s :: failed to determine total_period and total_datapoints' % (
                        algorithm_name))
                timeseries = []
            if not timeseries:
                if current_skyline_app == 'webapp':
                    return (anomalous, anomalyScore, anomalies, anomalies_dict)
                if return_anomalies:
                    return (anomalous, anomalyScore, anomalies)
                return (anomalous, anomalyScore)

            if current_skyline_app == 'analyzer':
                # Default the required period for analyzer to 18 hours
                period_required = int(FULL_DURATION * 0.75)
            else:
                # Determine from timeseries
                if total_period < FULL_DURATION:
                    period_required = int(FULL_DURATION * 0.75)
                else:
                    period_required = int(total_period * 0.75)
            if determine_duration:
                period_required = int(total_period * 0.75)

        if do_not_use_sparse_data:
            # If the time series does not have 75% of its full_duration it does
            # not have sufficient data to sample
            try:
                if total_period < period_required:
                    if debug_logging:
                        current_logger.debug('debug :: %s :: time series does not have sufficient data' % (
                            algorithm_name))
                    if current_skyline_app == 'webapp':
                        return (anomalous, anomalyScore, anomalies, anomalies_dict)
                    if return_anomalies:
                        return (anomalous, anomalyScore, anomalies)
                    return (anomalous, anomalyScore)
            except SystemExit as e:
                if debug_logging:
                    current_logger.debug('debug_logging :: %s :: SystemExit called, exiting - %s' % (
                        algorithm_name, e))
                if current_skyline_app == 'webapp':
                    return (anomalous, anomalyScore, anomalies, anomalies_dict)
                if return_anomalies:
                    return (anomalous, anomalyScore, anomalies)
                return (anomalous, anomalyScore)
            except:
                traceback_msg = traceback.format_exc()
                record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback_msg)
                if debug_logging:
                    current_logger.error(traceback_msg)
                    current_logger.error('error :: debug_logging :: %s :: failed to determine if time series has sufficient data' % (
                        algorithm_name))
                if current_skyline_app == 'webapp':
                    return (anomalous, anomalyScore, anomalies, anomalies_dict)
                if return_anomalies:
                    return (anomalous, anomalyScore, anomalies)
                return (anomalous, anomalyScore)

            # If the time series does not have 75% of its full_duration
            # datapoints it does not have sufficient data to sample

            # Determine resolution from the last 30 data points
            # INFO took 0.002060 seconds
            if not resolution:
                resolution_timestamps = []
                metric_resolution = False
                for metric_datapoint in timeseries[-30:]:
                    timestamp = int(metric_datapoint[0])
                    resolution_timestamps.append(timestamp)
                timestamp_resolutions = []
                if resolution_timestamps:
                    last_timestamp = None
                    for timestamp in resolution_timestamps:
                        if last_timestamp:
                            resolution = timestamp - last_timestamp
                            timestamp_resolutions.append(resolution)
                            last_timestamp = timestamp
                        else:
                            last_timestamp = timestamp
                    try:
                        del resolution_timestamps
                    except:
                        pass
                if timestamp_resolutions:
                    try:
                        timestamp_resolutions_count = Counter(timestamp_resolutions)
                        ordered_timestamp_resolutions_count = timestamp_resolutions_count.most_common()
                        metric_resolution = int(ordered_timestamp_resolutions_count[0][0])
                    except SystemExit as e:
                        if debug_logging:
                            current_logger.debug('debug_logging :: %s :: SystemExit called, exiting - %s' % (
                                algorithm_name, e))
                        if current_skyline_app == 'webapp':
                            return (anomalous, anomalyScore, anomalies, anomalies_dict)
                        if return_anomalies:
                            return (anomalous, anomalyScore, anomalies)
                        return (anomalous, anomalyScore)
                    except:
                        traceback_msg = traceback.format_exc()
                        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback_msg)
                        if debug_logging:
                            current_logger.error(traceback_msg)
                            current_logger.error('error :: debug_logging :: %s :: failed to determine if time series has sufficient data' % (
                                algorithm_name))
                    try:
                        del timestamp_resolutions
                    except:
                        pass
            else:
                metric_resolution = resolution

            minimum_datapoints = None
            if metric_resolution:
                minimum_datapoints = int(period_required / metric_resolution)
            if minimum_datapoints:
                if total_datapoints < minimum_datapoints:
                    if debug_logging:
                        current_logger.debug('debug :: %s :: time series does not have sufficient data, minimum_datapoints required is %s and time series has %s' % (
                            algorithm_name, str(minimum_datapoints),
                            str(total_datapoints)))
                    if current_skyline_app == 'webapp':
                        return (anomalous, anomalyScore, anomalies, anomalies_dict)
                    if return_anomalies:
                        return (anomalous, anomalyScore, anomalies)
                    return (anomalous, anomalyScore)

            # Is the time series fully populated?
            # full_duration_datapoints = int(full_duration / metric_resolution)
            total_period_datapoints = int(total_period / metric_resolution)
            # minimum_percentage_sparsity = 95
            minimum_percentage_sparsity = 90
            sparsity = int(total_datapoints / (total_period_datapoints / 100))
            if sparsity < minimum_percentage_sparsity:
                if debug_logging:
                    current_logger.debug('debug :: %s :: time series does not have sufficient data, minimum_percentage_sparsity required is %s and time series has %s' % (
                        algorithm_name, str(minimum_percentage_sparsity),
                        str(sparsity)))
                if current_skyline_app == 'webapp':
                    return (anomalous, anomalyScore, anomalies, anomalies_dict)
                if return_anomalies:
                    return (anomalous, anomalyScore, anomalies)
                return (anomalous, anomalyScore)
            if len(set(item[1] for item in timeseries)) == 1:
                if debug_logging:
                    current_logger.debug('debug :: %s :: time series does not have sufficient variability, all the values are the same' % algorithm_name)
                anomalous = False
                anomalyScore = 0.0
                if current_skyline_app == 'webapp':
                    return (anomalous, anomalyScore, anomalies, anomalies_dict)
                if return_anomalies:
                    return (anomalous, anomalyScore, anomalies)
                return (anomalous, anomalyScore)

        end_preprocessing = timer()
        preprocessing_runtime = end_preprocessing - start_preprocessing
        if debug_logging:
            current_logger.debug('debug :: %s :: preprocessing took %.6f seconds' % (
                algorithm_name, preprocessing_runtime))

        if not timeseries:
            if debug_logging:
                current_logger.debug('debug :: %s :: m66 not run as no data' % (
                    algorithm_name))
            anomalies = []
            if current_skyline_app == 'webapp':
                return (anomalous, anomalyScore, anomalies, anomalies_dict)
            if return_anomalies:
                return (anomalous, anomalyScore, anomalies)
            return (anomalous, anomalyScore)
        if debug_logging:
            current_logger.debug('debug :: %s :: timeseries length: %s' % (
                algorithm_name, str(len(timeseries))))

        anomalies_dict['timestamp'] = int(timeseries[-1][0])
        anomalies_dict['from_timestamp'] = int(timeseries[0][0])

        start_analysis = timer()
        try:
            # bottleneck is used because it is much faster
            # pd dataframe method (1445 data points - 24hrs): took 0.077915 seconds
            # bottleneck method (1445 data points - 24hrs): took 0.005692 seconds
            # numpy and pandas rolling
            # 2021-07-30 12:37:31 :: 2827897 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 136.93 seconds
            # 2021-07-30 12:44:53 :: 2855884 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 148.82 seconds
            # 2021-07-30 12:48:41 :: 2870822 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 145.62 seconds
            # 2021-07-30 12:55:00 :: 2893634 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 139.00 seconds
            # 2021-07-30 12:59:31 :: 2910443 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 144.80 seconds
            # 2021-07-30 13:02:31 :: 2922928 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 143.35 seconds
            # 2021-07-30 14:12:56 :: 3132457 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 129.25 seconds
            # 2021-07-30 14:22:35 :: 3164370 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 125.72 seconds
            # 2021-07-30 14:28:24 :: 3179687 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 222.43 seconds
            # 2021-07-30 14:33:45 :: 3179687 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 244.00 seconds
            # 2021-07-30 14:36:27 :: 3214047 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 141.10 seconds
            # numpy and bottleneck
            # 2021-07-30 16:41:52 :: 3585162 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 73.92 seconds
            # 2021-07-30 16:46:46 :: 3585162 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 68.84 seconds
            # 2021-07-30 16:51:48 :: 3585162 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 70.55 seconds
            # numpy and bottleneck (passing resolution and not calculating in m66)
            # 2021-07-30 16:57:46 :: 3643253 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 65.59 seconds
            if use_bottleneck:
                if len(timeseries) < 10:
                    if current_skyline_app == 'webapp':
                        return (anomalous, anomalyScore, anomalies, anomalies_dict)
                    if return_anomalies:
                        return (anomalous, anomalyScore, anomalies)
                    return (anomalous, anomalyScore)

                x_np = np.asarray([x[1] for x in timeseries])
                # Fast Min-Max scaling
                data = (x_np - x_np.min()) / (x_np.max() - x_np.min())

                # m66 - calculate to nth_median
                median_count = 0
                while median_count < nth_median:
                    median_count += 1
                    rolling_median_s = bn.move_median(data, window=window)
                    median = rolling_median_s.tolist()
                    data = median
                    if median_count == nth_median:
                        break

                # m66 - calculate the moving standard deviation for the
                # nth_median array
                rolling_std_s = bn.move_std(data, window=window)
                std_nth_median_array = np.nan_to_num(rolling_std_s, copy=False, nan=0.0, posinf=None, neginf=None)
                std_nth_median = std_nth_median_array.tolist()
                if debug_logging:
                    current_logger.debug('debug :: %s :: std_nth_median calculated with bn' % (
                        algorithm_name))
            else:
                df = pd.DataFrame(timeseries, columns=['date', 'value'])
                df['date'] = pd.to_datetime(df['date'], unit='s')
                datetime_index = pd.DatetimeIndex(df['date'].values)
                df = df.set_index(datetime_index)
                df.drop('date', axis=1, inplace=True)
                original_df = df.copy()
                # MinMax scale
                df = (df - df.min()) / (df.max() - df.min())
                # window = 6
                data = df['value'].tolist()

                if len(data) < 10:
                    if current_skyline_app == 'webapp':
                        return (anomalous, anomalyScore, anomalies, anomalies_dict)
                    if return_anomalies:
                        return (anomalous, anomalyScore, anomalies)
                    return (anomalous, anomalyScore)

                # m66 - calculate to nth_median
                median_count = 0
                while median_count < nth_median:
                    median_count += 1
                    s = pd.Series(data)
                    rolling_median_s = s.rolling(window).median()
                    median = rolling_median_s.tolist()
                    data = median
                    if median_count == nth_median:
                        break

                # m66 - calculate the moving standard deviation for the
                # nth_median array
                s = pd.Series(data)
                rolling_std_s = s.rolling(window).std()
                nth_median_column = 'std_nth_median_%s' % str(nth_median)
                df[nth_median_column] = rolling_std_s.tolist()
                std_nth_median = df[nth_median_column].fillna(0).tolist()

            # m66 - calculate the standard deviation for the entire nth_median
            # array
            metric_stddev = np.std(std_nth_median)
            std_nth_median_n_sigma = []
            anomalies_found = False
            for value in std_nth_median:
                # m66 - if the value in the 6th median array is > six-sigma of
                # the metric_stddev the datapoint is anomalous
                if value > (metric_stddev * n_sigma):
                    std_nth_median_n_sigma.append(1)
                    anomalies_found = True
                else:
                    std_nth_median_n_sigma.append(0)
            std_nth_median_n_sigma_column = 'std_median_%s_%s_sigma' % (str(nth_median), str(n_sigma))
            if not use_bottleneck:
                df[std_nth_median_n_sigma_column] = std_nth_median_n_sigma

            anomalies = []
            # m66 - only label anomalous if the n_sigma triggers are persisted
            # for (window / 2)
            if anomalies_found:
                current_triggers = []
                for index, item in enumerate(timeseries):
                    if std_nth_median_n_sigma[index] == 1:
                        current_triggers.append(index)
                    else:
                        if len(current_triggers) > int(window / 2):
                            for trigger_index in current_triggers:
                                # Shift the anomaly back to the beginning of the
                                # window
                                if shift_to_start_of_window:
                                    anomalies.append(timeseries[(trigger_index - (window * int((nth_median / 2))))])
                                else:
                                    anomalies.append(timeseries[trigger_index])
                        current_triggers = []
                # Process any remaining current_triggers
                if len(current_triggers) > int(window / 2):
                    for trigger_index in current_triggers:
                        # Shift the anomaly back to the beginning of the
                        # window
                        if shift_to_start_of_window:
                            anomalies.append(timeseries[(trigger_index - (window * int((nth_median / 2))))])
                        else:
                            anomalies.append(timeseries[trigger_index])
            if not anomalies:
                anomalous = False

            if anomalies:
                anomalous = True
                anomalies_data = []
                anomaly_timestamps = [int(item[0]) for item in anomalies]
                for item in timeseries:
                    if int(item[0]) in anomaly_timestamps:
                        anomalies_data.append(1)
                    else:
                        anomalies_data.append(0)
                if not use_bottleneck:
                    df['anomalies'] = anomalies_data
                anomalies_list = []
                for ts, value in timeseries:
                    if int(ts) in anomaly_timestamps:
                        anomalies_list.append([int(ts), value])
                        anomalies_dict['anomalies'][int(ts)] = value

            if anomalies and save_plots_to:
                try:
                    from adtk.visualization import plot
                    metric_dir = base_name.replace('.', '/')
                    timestamp_dir = str(int(timeseries[-1][0]))
                    save_path = '%s/%s/%s/%s' % (
                        save_plots_to, algorithm_name, metric_dir,
                        timestamp_dir)
                    if save_plots_to_absolute_dir:
                        save_path = '%s' % save_plots_to
                    anomalies_dict['file_path'] = save_path
                    save_to_file = '%s/%s.%s.png' % (
                        save_path, algorithm_name, base_name)
                    if filename_prefix:
                        save_to_file = '%s/%s.%s.%s.png' % (
                            save_path, filename_prefix, algorithm_name,
                            base_name)
                    save_to_path = os_path_dirname(save_to_file)
                    title = '%s\n%s - median %s %s-sigma persisted (window=%s)' % (
                        base_name, algorithm_name, str(nth_median), str(n_sigma), str(window))

                    if not os_path_exists(save_to_path):
                        try:
                            mkdir_p(save_to_path)
                        except Exception as e:
                            current_logger.error('error :: %s :: failed to create dir - %s - %s' % (
                                algorithm_name, save_to_path, e))
                    if os_path_exists(save_to_path):
                        try:
                            plot(original_df['value'], anomaly=df['anomalies'], anomaly_color='red', title=title, save_to_file=save_to_file)
                            if debug_logging:
                                current_logger.debug('debug :: %s :: plot saved to - %s' % (
                                    algorithm_name, save_to_file))
                            anomalies_dict['image'] = save_to_file
                        except Exception as e:
                            current_logger.error('error :: %s :: failed to plot - %s - %s' % (
                                algorithm_name, base_name, e))
                    anomalies_file = '%s/%s.%s.anomalies_list.txt' % (
                        save_path, algorithm_name, base_name)
                    with open(anomalies_file, 'w') as fh:
                        fh.write(str(anomalies_list))
                    # os.chmod(anomalies_file, mode=0o644)
                    data_file = '%s/data.txt' % (save_path)
                    with open(data_file, 'w') as fh:
                        fh.write(str(anomalies_dict))
                except SystemExit as e:
                    if debug_logging:
                        current_logger.debug('debug_logging :: %s :: SystemExit called during save plot, exiting - %s' % (
                            algorithm_name, e))
                    if current_skyline_app == 'webapp':
                        return (anomalous, anomalyScore, anomalies, anomalies_dict)
                    if return_anomalies:
                        return (anomalous, anomalyScore, anomalies)
                    return (anomalous, anomalyScore)
                except Exception as e:
                    traceback_msg = traceback.format_exc()
                    record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback_msg)
                    if debug_logging:
                        current_logger.error(traceback_msg)
                        current_logger.error('error :: %s :: failed to plot or save anomalies file - %s - %s' % (
                            algorithm_name, base_name, e))
            try:
                del df
            except:
                pass
        except SystemExit as e:
            if debug_logging:
                current_logger.debug('debug_logging :: %s :: SystemExit called, during analysis, exiting - %s' % (
                    algorithm_name, e))
            if current_skyline_app == 'webapp':
                return (anomalous, anomalyScore, anomalies, anomalies_dict)
            if return_anomalies:
                return (anomalous, anomalyScore, anomalies)
            return (anomalous, anomalyScore)
        except:
            traceback_msg = traceback.format_exc()
            record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback_msg)
            if debug_logging:
                current_logger.error(traceback_msg)
                current_logger.error('error :: debug_logging :: %s :: failed to run on ts' % (
                    algorithm_name))
            if current_skyline_app == 'webapp':
                return (anomalous, anomalyScore, anomalies, anomalies_dict)
            if return_anomalies:
                return (anomalous, anomalyScore, anomalies)
            return (anomalous, anomalyScore)

        end_analysis = timer()
        analysis_runtime = end_analysis - start_analysis
        if debug_logging:
            current_logger.debug('debug :: analysis with %s took %.6f seconds' % (
                algorithm_name, analysis_runtime))

        if anomalous:
            anomalyScore = 1.0
        else:
            anomalyScore = 0.0

        if debug_logging:
            current_logger.info('%s :: anomalous - %s, anomalyScore - %s' % (
                algorithm_name, str(anomalous), str(anomalyScore)))

        if debug_logging:
            end = timer()
            processing_runtime = end - start
            current_logger.info('%s :: completed in %.6f seconds' % (
                algorithm_name, processing_runtime))
        try:
            del timeseries
        except:
            pass
        if current_skyline_app == 'webapp':
            return (anomalous, anomalyScore, anomalies, anomalies_dict)
        if return_anomalies:
            return (anomalous, anomalyScore, anomalies)
        return (anomalous, anomalyScore)
    except SystemExit as e:
        if debug_logging:
            current_logger.debug('debug_logging :: %s :: SystemExit called (before StopIteration), exiting - %s' % (
                algorithm_name, e))
        if current_skyline_app == 'webapp':
            return (anomalous, anomalyScore, anomalies, anomalies_dict)
        if return_anomalies:
            return (anomalous, anomalyScore, anomalies)
        return (anomalous, anomalyScore)
    except StopIteration:
        # This except pattern MUST be used in ALL custom algorithms to
        # facilitate the traceback from any errors. We want the algorithm
        # to run super fast and without spamming the log with lots of
        # errors. But we do not want the function returning and not
        # reporting anything to the log, so the pythonic except is used to
        # "sample" any algorithm errors to a tmp file and report once per
        # run rather than spewing tons of errors into the log e.g.
        # analyzer.log
        if current_skyline_app == 'webapp':
            return (anomalous, anomalyScore, anomalies, anomalies_dict)
        if return_anomalies:
            return (False, None, anomalies)
        return (False, None)
    except:
        record_algorithm_error(current_skyline_app, parent_pid, algorithm_name, traceback.format_exc())
        # Return None and None as the algorithm could not determine True or False
        if current_skyline_app == 'webapp':
            return (anomalous, anomalyScore, anomalies, anomalies_dict)
        if return_anomalies:
            return (False, None, anomalies)
        return (False, None)

    if current_skyline_app == 'webapp':
        return (anomalous, anomalyScore, anomalies, anomalies_dict)
    if return_anomalies:
        return (anomalous, anomalyScore, anomalies)
    return (anomalous, anomalyScore)
```
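Stripped of the Skyline plumbing, the detection core of m66 is short: min-max scale, apply a rolling median nth_median times, take the rolling standard deviation of that series, flag points above n_sigma times its overall standard deviation, and keep only trigger runs longer than window / 2. A minimal sketch of that core on synthetic data (pandas-based, mirroring the non-bottleneck branch above; the `m66_core` name and the synthetic series are illustrative only):

```python
# Minimal sketch of the m66 detection core; reproduces the pipeline only,
# not Skyline's parameter handling, sparsity checks or plotting.
import numpy as np
import pandas as pd

def m66_core(values, nth_median=6, n_sigma=6, window=5):
    # Min-max scale the raw values
    data = (values - values.min()) / (values.max() - values.min())
    s = pd.Series(data)
    # Apply the rolling median nth_median times
    for _ in range(nth_median):
        s = s.rolling(window).median()
    # Rolling standard deviation of the nth-median series
    std_nth_median = s.rolling(window).std().fillna(0).to_numpy()
    # Six-sigma style threshold against the overall stddev of that series
    triggers = std_nth_median > (np.std(std_nth_median) * n_sigma)
    # Keep only runs of triggers that persist for more than window / 2 points
    anomalous_idx, run = [], []
    for i, triggered in enumerate(triggers):
        if triggered:
            run.append(i)
        else:
            if len(run) > window // 2:
                anomalous_idx.extend(run)
            run = []
    if len(run) > window // 2:
        anomalous_idx.extend(run)
    return anomalous_idx

# A flat series with a level shift: the changepoint region should trigger,
# while isolated spikes would not persist long enough to be labelled.
values = np.concatenate([np.full(200, 10.0), np.full(200, 30.0)])
values += np.random.default_rng(42).normal(0, 0.1, values.size)
print(m66_core(values)[:10])
```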
"def",
"m66",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"timeseries",
",",
"algorithm_parameters",
")",
":",
"# You MUST define the algorithm_name",
"algorithm_name",
"=",
"'m66'",
"# Define the default state of None and None, anomalous does not default to",
"# False as that is not correct, False is only correct if the algorithm",
"# determines the data point is not anomalous. The same is true for the",
"# anomalyScore.",
"anomalous",
"=",
"None",
"anomalyScore",
"=",
"None",
"return_anomalies",
"=",
"False",
"anomalies",
"=",
"[",
"]",
"anomalies_dict",
"=",
"{",
"}",
"anomalies_dict",
"[",
"'algorithm'",
"]",
"=",
"algorithm_name",
"realtime_analysis",
"=",
"False",
"current_logger",
"=",
"None",
"dev_null",
"=",
"None",
"# If you wanted to log, you can but this should only be done during",
"# testing and development",
"def",
"get_log",
"(",
"current_skyline_app",
")",
":",
"current_skyline_app_logger",
"=",
"current_skyline_app",
"+",
"'Log'",
"current_logger",
"=",
"logging",
".",
"getLogger",
"(",
"current_skyline_app_logger",
")",
"return",
"current_logger",
"start",
"=",
"timer",
"(",
")",
"# Use the algorithm_parameters to determine the sample_period",
"debug_logging",
"=",
"None",
"try",
":",
"debug_logging",
"=",
"algorithm_parameters",
"[",
"'debug_logging'",
"]",
"except",
":",
"debug_logging",
"=",
"False",
"if",
"debug_logging",
":",
"try",
":",
"current_logger",
"=",
"get_log",
"(",
"current_skyline_app",
")",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: debug_logging enabled with algorithm_parameters - %s'",
"%",
"(",
"algorithm_name",
",",
"str",
"(",
"algorithm_parameters",
")",
")",
")",
"except",
"Exception",
"as",
"e",
":",
"# This except pattern MUST be used in ALL custom algortihms to",
"# facilitate the traceback from any errors. The algorithm we want to",
"# run super fast and without spamming the log with lots of errors.",
"# But we do not want the function returning and not reporting",
"# anything to the log, so the pythonic except is used to \"sample\" any",
"# algorithm errors to a tmp file and report once per run rather than",
"# spewing tons of errors into the log e.g. analyzer.log",
"dev_null",
"=",
"e",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"# Return None and None as the algorithm could not determine True or False",
"del",
"dev_null",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"# Allow the m66 parameters to be passed in the algorithm_parameters",
"window",
"=",
"6",
"try",
":",
"window",
"=",
"algorithm_parameters",
"[",
"'window'",
"]",
"except",
"KeyError",
":",
"window",
"=",
"6",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"nth_median",
"=",
"6",
"try",
":",
"nth_median",
"=",
"algorithm_parameters",
"[",
"'nth_median'",
"]",
"except",
"KeyError",
":",
"nth_median",
"=",
"6",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"n_sigma",
"=",
"6",
"try",
":",
"n_sigma",
"=",
"algorithm_parameters",
"[",
"'sigma'",
"]",
"except",
"KeyError",
":",
"n_sigma",
"=",
"6",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"resolution",
"=",
"0",
"try",
":",
"resolution",
"=",
"algorithm_parameters",
"[",
"'resolution'",
"]",
"except",
"KeyError",
":",
"resolution",
"=",
"0",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"determine_duration",
"=",
"False",
"try",
":",
"determine_duration",
"=",
"algorithm_parameters",
"[",
"'determine_duration'",
"]",
"except",
"KeyError",
":",
"determine_duration",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"minimum_sparsity",
"=",
"0",
"try",
":",
"minimum_sparsity",
"=",
"algorithm_parameters",
"[",
"'minimum_sparsity'",
"]",
"except",
"KeyError",
":",
"minimum_sparsity",
"=",
"0",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"shift_to_start_of_window",
"=",
"True",
"try",
":",
"shift_to_start_of_window",
"=",
"algorithm_parameters",
"[",
"'shift_to_start_of_window'",
"]",
"except",
"KeyError",
":",
"shift_to_start_of_window",
"=",
"True",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"save_plots_to",
"=",
"False",
"try",
":",
"save_plots_to",
"=",
"algorithm_parameters",
"[",
"'save_plots_to'",
"]",
"except",
"KeyError",
":",
"save_plots_to",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"save_plots_to_absolute_dir",
"=",
"False",
"try",
":",
"save_plots_to_absolute_dir",
"=",
"algorithm_parameters",
"[",
"'save_plots_to_absolute_dir'",
"]",
"except",
"KeyError",
":",
"save_plots_to_absolute_dir",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"filename_prefix",
"=",
"False",
"try",
":",
"filename_prefix",
"=",
"algorithm_parameters",
"[",
"'filename_prefix'",
"]",
"except",
"KeyError",
":",
"filename_prefix",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: algorithm_parameters :: %s'",
"%",
"(",
"str",
"(",
"algorithm_parameters",
")",
")",
")",
"return_anomalies",
"=",
"False",
"try",
":",
"return_anomalies",
"=",
"algorithm_parameters",
"[",
"'return_anomalies'",
"]",
"except",
"KeyError",
":",
"return_anomalies",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"try",
":",
"realtime_analysis",
"=",
"algorithm_parameters",
"[",
"'realtime_analysis'",
"]",
"except",
"KeyError",
":",
"realtime_analysis",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"save_plots_to",
"=",
"False",
"try",
":",
"save_plots_to",
"=",
"algorithm_parameters",
"[",
"'save_plots_to'",
"]",
"except",
"KeyError",
":",
"save_plots_to",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"save_plots_to_absolute_dir",
"=",
"False",
"try",
":",
"save_plots_to_absolute_dir",
"=",
"algorithm_parameters",
"[",
"'save_plots_to_absolute_dir'",
"]",
"except",
"KeyError",
":",
"save_plots_to",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"filename_prefix",
"=",
"False",
"try",
":",
"filename_prefix",
"=",
"algorithm_parameters",
"[",
"'filename_prefix'",
"]",
"except",
"KeyError",
":",
"filename_prefix",
"=",
"False",
"except",
"Exception",
"as",
"e",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"dev_null",
"=",
"e",
"try",
":",
"base_name",
"=",
"algorithm_parameters",
"[",
"'base_name'",
"]",
"except",
"Exception",
"as",
"e",
":",
"# This except pattern MUST be used in ALL custom algortihms to",
"# facilitate the traceback from any errors. The algorithm we want to",
"# run super fast and without spamming the log with lots of errors.",
"# But we do not want the function returning and not reporting",
"# anything to the log, so the pythonic except is used to \"sample\" any",
"# algorithm errors to a tmp file and report once per run rather than",
"# spewing tons of errors into the log e.g. analyzer.log",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"# Return None and None as the algorithm could not determine True or False",
"dev_null",
"=",
"e",
"del",
"dev_null",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"False",
",",
"None",
",",
"anomalies",
")",
"return",
"(",
"False",
",",
"None",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: base_name - %s'",
"%",
"(",
"algorithm_name",
",",
"str",
"(",
"base_name",
")",
")",
")",
"anomalies_dict",
"[",
"'metric'",
"]",
"=",
"base_name",
"anomalies_dict",
"[",
"'anomalies'",
"]",
"=",
"{",
"}",
"use_bottleneck",
"=",
"True",
"if",
"save_plots_to",
":",
"use_bottleneck",
"=",
"False",
"if",
"use_bottleneck",
":",
"import",
"bottleneck",
"as",
"bn",
"# ALWAYS WRAP YOUR ALGORITHM IN try and the BELOW except",
"try",
":",
"start_preprocessing",
"=",
"timer",
"(",
")",
"# INFO: Sorting time series of 10079 data points took 0.002215 seconds",
"timeseries",
"=",
"sorted",
"(",
"timeseries",
",",
"key",
"=",
"lambda",
"x",
":",
"x",
"[",
"0",
"]",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: time series of length - %s'",
"%",
"(",
"algorithm_name",
",",
"str",
"(",
"len",
"(",
"timeseries",
")",
")",
")",
")",
"# Testing the data to ensure it meets minimum requirements, in the case",
"# of Skyline's use of the m66 algorithm this means that:",
"# - the time series must have at least 75% of its full_duration",
"do_not_use_sparse_data",
"=",
"False",
"if",
"current_skyline_app",
"==",
"'luminosity'",
":",
"do_not_use_sparse_data",
"=",
"True",
"if",
"minimum_sparsity",
"==",
"0",
":",
"do_not_use_sparse_data",
"=",
"False",
"total_period",
"=",
"0",
"total_datapoints",
"=",
"0",
"calculate_variables",
"=",
"False",
"if",
"do_not_use_sparse_data",
":",
"calculate_variables",
"=",
"True",
"if",
"determine_duration",
":",
"calculate_variables",
"=",
"True",
"if",
"calculate_variables",
":",
"try",
":",
"start_timestamp",
"=",
"int",
"(",
"timeseries",
"[",
"0",
"]",
"[",
"0",
"]",
")",
"end_timestamp",
"=",
"int",
"(",
"timeseries",
"[",
"-",
"1",
"]",
"[",
"0",
"]",
")",
"total_period",
"=",
"end_timestamp",
"-",
"start_timestamp",
"total_datapoints",
"=",
"len",
"(",
"timeseries",
")",
"except",
"SystemExit",
"as",
"e",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug_logging :: %s :: SystemExit called, exiting - %s'",
"%",
"(",
"algorithm_name",
",",
"e",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"except",
":",
"traceback_msg",
"=",
"traceback",
".",
"format_exc",
"(",
")",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback_msg",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"error",
"(",
"traceback_msg",
")",
"current_logger",
".",
"error",
"(",
"'error :: debug_logging :: %s :: failed to determine total_period and total_datapoints'",
"%",
"(",
"algorithm_name",
")",
")",
"timeseries",
"=",
"[",
"]",
"if",
"not",
"timeseries",
":",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"if",
"current_skyline_app",
"==",
"'analyzer'",
":",
"# Default for analyzer at required period to 18 hours",
"period_required",
"=",
"int",
"(",
"FULL_DURATION",
"*",
"0.75",
")",
"else",
":",
"# Determine from timeseries",
"if",
"total_period",
"<",
"FULL_DURATION",
":",
"period_required",
"=",
"int",
"(",
"FULL_DURATION",
"*",
"0.75",
")",
"else",
":",
"period_required",
"=",
"int",
"(",
"total_period",
"*",
"0.75",
")",
"if",
"determine_duration",
":",
"period_required",
"=",
"int",
"(",
"total_period",
"*",
"0.75",
")",
"if",
"do_not_use_sparse_data",
":",
"# If the time series does not have 75% of its full_duration it does",
"# not have sufficient data to sample",
"try",
":",
"if",
"total_period",
"<",
"period_required",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: time series does not have sufficient data'",
"%",
"(",
"algorithm_name",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"except",
"SystemExit",
"as",
"e",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug_logging :: %s :: SystemExit called, exiting - %s'",
"%",
"(",
"algorithm_name",
",",
"e",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"except",
":",
"traceback_msg",
"=",
"traceback",
".",
"format_exc",
"(",
")",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback_msg",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"error",
"(",
"traceback_msg",
")",
"current_logger",
".",
"error",
"(",
"'error :: debug_logging :: %s :: falied to determine if time series has sufficient data'",
"%",
"(",
"algorithm_name",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"# If the time series does not have 75% of its full_duration",
"# datapoints it does not have sufficient data to sample",
"# Determine resolution from the last 30 data points",
"# INFO took 0.002060 seconds",
"if",
"not",
"resolution",
":",
"resolution_timestamps",
"=",
"[",
"]",
"metric_resolution",
"=",
"False",
"for",
"metric_datapoint",
"in",
"timeseries",
"[",
"-",
"30",
":",
"]",
":",
"timestamp",
"=",
"int",
"(",
"metric_datapoint",
"[",
"0",
"]",
")",
"resolution_timestamps",
".",
"append",
"(",
"timestamp",
")",
"timestamp_resolutions",
"=",
"[",
"]",
"if",
"resolution_timestamps",
":",
"last_timestamp",
"=",
"None",
"for",
"timestamp",
"in",
"resolution_timestamps",
":",
"if",
"last_timestamp",
":",
"resolution",
"=",
"timestamp",
"-",
"last_timestamp",
"timestamp_resolutions",
".",
"append",
"(",
"resolution",
")",
"last_timestamp",
"=",
"timestamp",
"else",
":",
"last_timestamp",
"=",
"timestamp",
"try",
":",
"del",
"resolution_timestamps",
"except",
":",
"pass",
"if",
"timestamp_resolutions",
":",
"try",
":",
"timestamp_resolutions_count",
"=",
"Counter",
"(",
"timestamp_resolutions",
")",
"ordered_timestamp_resolutions_count",
"=",
"timestamp_resolutions_count",
".",
"most_common",
"(",
")",
"metric_resolution",
"=",
"int",
"(",
"ordered_timestamp_resolutions_count",
"[",
"0",
"]",
"[",
"0",
"]",
")",
"except",
"SystemExit",
"as",
"e",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug_logging :: %s :: SystemExit called, exiting - %s'",
"%",
"(",
"algorithm_name",
",",
"e",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"except",
":",
"traceback_msg",
"=",
"traceback",
".",
"format_exc",
"(",
")",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback_msg",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"error",
"(",
"traceback_msg",
")",
"current_logger",
".",
"error",
"(",
"'error :: debug_logging :: %s :: failed to determine if time series has sufficient data'",
"%",
"(",
"algorithm_name",
")",
")",
"try",
":",
"del",
"timestamp_resolutions",
"except",
":",
"pass",
"else",
":",
"metric_resolution",
"=",
"resolution",
"minimum_datapoints",
"=",
"None",
"if",
"metric_resolution",
":",
"minimum_datapoints",
"=",
"int",
"(",
"period_required",
"/",
"metric_resolution",
")",
"if",
"minimum_datapoints",
":",
"if",
"total_datapoints",
"<",
"minimum_datapoints",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: time series does not have sufficient data, minimum_datapoints required is %s and time series has %s'",
"%",
"(",
"algorithm_name",
",",
"str",
"(",
"minimum_datapoints",
")",
",",
"str",
"(",
"total_datapoints",
")",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"# Is the time series fully populated?",
"# full_duration_datapoints = int(full_duration / metric_resolution)",
"total_period_datapoints",
"=",
"int",
"(",
"total_period",
"/",
"metric_resolution",
")",
"# minimum_percentage_sparsity = 95",
"minimum_percentage_sparsity",
"=",
"90",
"sparsity",
"=",
"int",
"(",
"total_datapoints",
"/",
"(",
"total_period_datapoints",
"/",
"100",
")",
")",
"if",
"sparsity",
"<",
"minimum_percentage_sparsity",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: time series does not have sufficient data, minimum_percentage_sparsity required is %s and time series has %s'",
"%",
"(",
"algorithm_name",
",",
"str",
"(",
"minimum_percentage_sparsity",
")",
",",
"str",
"(",
"sparsity",
")",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"if",
"len",
"(",
"set",
"(",
"item",
"[",
"1",
"]",
"for",
"item",
"in",
"timeseries",
")",
")",
"==",
"1",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: time series does not have sufficient variability, all the values are the same'",
"%",
"algorithm_name",
")",
"anomalous",
"=",
"False",
"anomalyScore",
"=",
"0.0",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"end_preprocessing",
"=",
"timer",
"(",
")",
"preprocessing_runtime",
"=",
"end_preprocessing",
"-",
"start_preprocessing",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: preprocessing took %.6f seconds'",
"%",
"(",
"algorithm_name",
",",
"preprocessing_runtime",
")",
")",
"if",
"not",
"timeseries",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: m66 not run as no data'",
"%",
"(",
"algorithm_name",
")",
")",
"anomalies",
"=",
"[",
"]",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: timeseries length: %s'",
"%",
"(",
"algorithm_name",
",",
"str",
"(",
"len",
"(",
"timeseries",
")",
")",
")",
")",
"anomalies_dict",
"[",
"'timestamp'",
"]",
"=",
"int",
"(",
"timeseries",
"[",
"-",
"1",
"]",
"[",
"0",
"]",
")",
"anomalies_dict",
"[",
"'from_timestamp'",
"]",
"=",
"int",
"(",
"timeseries",
"[",
"0",
"]",
"[",
"0",
"]",
")",
"start_analysis",
"=",
"timer",
"(",
")",
"try",
":",
"# bottleneck is used because it is much faster",
"# pd dataframe method (1445 data point - 24hrs): took 0.077915 seconds",
"# bottleneck method (1445 data point - 24hrs): took 0.005692 seconds",
"# numpy and pandas rolling",
"# 2021-07-30 12:37:31 :: 2827897 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 136.93 seconds",
"# 2021-07-30 12:44:53 :: 2855884 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 148.82 seconds",
"# 2021-07-30 12:48:41 :: 2870822 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 145.62 seconds",
"# 2021-07-30 12:55:00 :: 2893634 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 139.00 seconds",
"# 2021-07-30 12:59:31 :: 2910443 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 144.80 seconds",
"# 2021-07-30 13:02:31 :: 2922928 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 143.35 seconds",
"# 2021-07-30 14:12:56 :: 3132457 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 129.25 seconds",
"# 2021-07-30 14:22:35 :: 3164370 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 125.72 seconds",
"# 2021-07-30 14:28:24 :: 3179687 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 222.43 seconds",
"# 2021-07-30 14:33:45 :: 3179687 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 244.00 seconds",
"# 2021-07-30 14:36:27 :: 3214047 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 141.10 seconds",
"# numpy and bottleneck",
"# 2021-07-30 16:41:52 :: 3585162 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 73.92 seconds",
"# 2021-07-30 16:46:46 :: 3585162 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 68.84 seconds",
"# 2021-07-30 16:51:48 :: 3585162 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 70.55 seconds",
"# numpy and bottleneck (passing resolution and not calculating in m66)",
"# 2021-07-30 16:57:46 :: 3643253 :: cloudbursts :: find_cloudbursts completed on 1530 metrics in 65.59 seconds",
"if",
"use_bottleneck",
":",
"if",
"len",
"(",
"timeseries",
")",
"<",
"10",
":",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"x_np",
"=",
"np",
".",
"asarray",
"(",
"[",
"x",
"[",
"1",
"]",
"for",
"x",
"in",
"timeseries",
"]",
")",
"# Fast Min-Max scaling",
"data",
"=",
"(",
"x_np",
"-",
"x_np",
".",
"min",
"(",
")",
")",
"/",
"(",
"x_np",
".",
"max",
"(",
")",
"-",
"x_np",
".",
"min",
"(",
")",
")",
"# m66 - calculate to nth_median",
"median_count",
"=",
"0",
"while",
"median_count",
"<",
"nth_median",
":",
"median_count",
"+=",
"1",
"rolling_median_s",
"=",
"bn",
".",
"move_median",
"(",
"data",
",",
"window",
"=",
"window",
")",
"median",
"=",
"rolling_median_s",
".",
"tolist",
"(",
")",
"data",
"=",
"median",
"if",
"median_count",
"==",
"nth_median",
":",
"break",
"# m66 - calculate the moving standard deviation for the",
"# nth_median array",
"rolling_std_s",
"=",
"bn",
".",
"move_std",
"(",
"data",
",",
"window",
"=",
"window",
")",
"std_nth_median_array",
"=",
"np",
".",
"nan_to_num",
"(",
"rolling_std_s",
",",
"copy",
"=",
"False",
",",
"nan",
"=",
"0.0",
",",
"posinf",
"=",
"None",
",",
"neginf",
"=",
"None",
")",
"std_nth_median",
"=",
"std_nth_median_array",
".",
"tolist",
"(",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: std_nth_median calculated with bn'",
"%",
"(",
"algorithm_name",
")",
")",
"else",
":",
"df",
"=",
"pd",
".",
"DataFrame",
"(",
"timeseries",
",",
"columns",
"=",
"[",
"'date'",
",",
"'value'",
"]",
")",
"df",
"[",
"'date'",
"]",
"=",
"pd",
".",
"to_datetime",
"(",
"df",
"[",
"'date'",
"]",
",",
"unit",
"=",
"'s'",
")",
"datetime_index",
"=",
"pd",
".",
"DatetimeIndex",
"(",
"df",
"[",
"'date'",
"]",
".",
"values",
")",
"df",
"=",
"df",
".",
"set_index",
"(",
"datetime_index",
")",
"df",
".",
"drop",
"(",
"'date'",
",",
"axis",
"=",
"1",
",",
"inplace",
"=",
"True",
")",
"original_df",
"=",
"df",
".",
"copy",
"(",
")",
"# MinMax scale",
"df",
"=",
"(",
"df",
"-",
"df",
".",
"min",
"(",
")",
")",
"/",
"(",
"df",
".",
"max",
"(",
")",
"-",
"df",
".",
"min",
"(",
")",
")",
"# window = 6",
"data",
"=",
"df",
"[",
"'value'",
"]",
".",
"tolist",
"(",
")",
"if",
"len",
"(",
"data",
")",
"<",
"10",
":",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"# m66 - calculate to nth_median",
"median_count",
"=",
"0",
"while",
"median_count",
"<",
"nth_median",
":",
"median_count",
"+=",
"1",
"s",
"=",
"pd",
".",
"Series",
"(",
"data",
")",
"rolling_median_s",
"=",
"s",
".",
"rolling",
"(",
"window",
")",
".",
"median",
"(",
")",
"median",
"=",
"rolling_median_s",
".",
"tolist",
"(",
")",
"data",
"=",
"median",
"if",
"median_count",
"==",
"nth_median",
":",
"break",
"# m66 - calculate the moving standard deviation for the",
"# nth_median array",
"s",
"=",
"pd",
".",
"Series",
"(",
"data",
")",
"rolling_std_s",
"=",
"s",
".",
"rolling",
"(",
"window",
")",
".",
"std",
"(",
")",
"nth_median_column",
"=",
"'std_nth_median_%s'",
"%",
"str",
"(",
"nth_median",
")",
"df",
"[",
"nth_median_column",
"]",
"=",
"rolling_std_s",
".",
"tolist",
"(",
")",
"std_nth_median",
"=",
"df",
"[",
"nth_median_column",
"]",
".",
"fillna",
"(",
"0",
")",
".",
"tolist",
"(",
")",
"# m66 - calculate the standard deviation for the entire nth_median",
"# array",
"metric_stddev",
"=",
"np",
".",
"std",
"(",
"std_nth_median",
")",
"std_nth_median_n_sigma",
"=",
"[",
"]",
"anomalies_found",
"=",
"False",
"for",
"value",
"in",
"std_nth_median",
":",
"# m66 - if the value in the 6th median array is > six-sigma of",
"# the metric_stddev the datapoint is anomalous",
"if",
"value",
">",
"(",
"metric_stddev",
"*",
"n_sigma",
")",
":",
"std_nth_median_n_sigma",
".",
"append",
"(",
"1",
")",
"anomalies_found",
"=",
"True",
"else",
":",
"std_nth_median_n_sigma",
".",
"append",
"(",
"0",
")",
"std_nth_median_n_sigma_column",
"=",
"'std_median_%s_%s_sigma'",
"%",
"(",
"str",
"(",
"nth_median",
")",
",",
"str",
"(",
"n_sigma",
")",
")",
"if",
"not",
"use_bottleneck",
":",
"df",
"[",
"std_nth_median_n_sigma_column",
"]",
"=",
"std_nth_median_n_sigma",
"anomalies",
"=",
"[",
"]",
"# m66 - only label anomalous if the n_sigma triggers are persisted",
"# for (window / 2)",
"if",
"anomalies_found",
":",
"current_triggers",
"=",
"[",
"]",
"for",
"index",
",",
"item",
"in",
"enumerate",
"(",
"timeseries",
")",
":",
"if",
"std_nth_median_n_sigma",
"[",
"index",
"]",
"==",
"1",
":",
"current_triggers",
".",
"append",
"(",
"index",
")",
"else",
":",
"if",
"len",
"(",
"current_triggers",
")",
">",
"int",
"(",
"window",
"/",
"2",
")",
":",
"for",
"trigger_index",
"in",
"current_triggers",
":",
"# Shift the anomaly back to the beginning of the",
"# window",
"if",
"shift_to_start_of_window",
":",
"anomalies",
".",
"append",
"(",
"timeseries",
"[",
"(",
"trigger_index",
"-",
"(",
"window",
"*",
"int",
"(",
"(",
"nth_median",
"/",
"2",
")",
")",
")",
")",
"]",
")",
"else",
":",
"anomalies",
".",
"append",
"(",
"timeseries",
"[",
"trigger_index",
"]",
")",
"current_triggers",
"=",
"[",
"]",
"# Process any remaining current_triggers",
"if",
"len",
"(",
"current_triggers",
")",
">",
"int",
"(",
"window",
"/",
"2",
")",
":",
"for",
"trigger_index",
"in",
"current_triggers",
":",
"# Shift the anomaly back to the beginning of the",
"# window",
"if",
"shift_to_start_of_window",
":",
"anomalies",
".",
"append",
"(",
"timeseries",
"[",
"(",
"trigger_index",
"-",
"(",
"window",
"*",
"int",
"(",
"(",
"nth_median",
"/",
"2",
")",
")",
")",
")",
"]",
")",
"else",
":",
"anomalies",
".",
"append",
"(",
"timeseries",
"[",
"trigger_index",
"]",
")",
"if",
"not",
"anomalies",
":",
"anomalous",
"=",
"False",
"if",
"anomalies",
":",
"anomalous",
"=",
"True",
"anomalies_data",
"=",
"[",
"]",
"anomaly_timestamps",
"=",
"[",
"int",
"(",
"item",
"[",
"0",
"]",
")",
"for",
"item",
"in",
"anomalies",
"]",
"for",
"item",
"in",
"timeseries",
":",
"if",
"int",
"(",
"item",
"[",
"0",
"]",
")",
"in",
"anomaly_timestamps",
":",
"anomalies_data",
".",
"append",
"(",
"1",
")",
"else",
":",
"anomalies_data",
".",
"append",
"(",
"0",
")",
"if",
"not",
"use_bottleneck",
":",
"df",
"[",
"'anomalies'",
"]",
"=",
"anomalies_data",
"anomalies_list",
"=",
"[",
"]",
"for",
"ts",
",",
"value",
"in",
"timeseries",
":",
"if",
"int",
"(",
"ts",
")",
"in",
"anomaly_timestamps",
":",
"anomalies_list",
".",
"append",
"(",
"[",
"int",
"(",
"ts",
")",
",",
"value",
"]",
")",
"anomalies_dict",
"[",
"'anomalies'",
"]",
"[",
"int",
"(",
"ts",
")",
"]",
"=",
"value",
"if",
"anomalies",
"and",
"save_plots_to",
":",
"try",
":",
"from",
"adtk",
".",
"visualization",
"import",
"plot",
"metric_dir",
"=",
"base_name",
".",
"replace",
"(",
"'.'",
",",
"'/'",
")",
"timestamp_dir",
"=",
"str",
"(",
"int",
"(",
"timeseries",
"[",
"-",
"1",
"]",
"[",
"0",
"]",
")",
")",
"save_path",
"=",
"'%s/%s/%s/%s'",
"%",
"(",
"save_plots_to",
",",
"algorithm_name",
",",
"metric_dir",
",",
"timestamp_dir",
")",
"if",
"save_plots_to_absolute_dir",
":",
"save_path",
"=",
"'%s'",
"%",
"save_plots_to",
"anomalies_dict",
"[",
"'file_path'",
"]",
"=",
"save_path",
"save_to_file",
"=",
"'%s/%s.%s.png'",
"%",
"(",
"save_path",
",",
"algorithm_name",
",",
"base_name",
")",
"if",
"filename_prefix",
":",
"save_to_file",
"=",
"'%s/%s.%s.%s.png'",
"%",
"(",
"save_path",
",",
"filename_prefix",
",",
"algorithm_name",
",",
"base_name",
")",
"save_to_path",
"=",
"os_path_dirname",
"(",
"save_to_file",
")",
"title",
"=",
"'%s\\n%s - median %s %s-sigma persisted (window=%s)'",
"%",
"(",
"base_name",
",",
"algorithm_name",
",",
"str",
"(",
"nth_median",
")",
",",
"str",
"(",
"n_sigma",
")",
",",
"str",
"(",
"window",
")",
")",
"if",
"not",
"os_path_exists",
"(",
"save_to_path",
")",
":",
"try",
":",
"mkdir_p",
"(",
"save_to_path",
")",
"except",
"Exception",
"as",
"e",
":",
"current_logger",
".",
"error",
"(",
"'error :: %s :: failed to create dir - %s - %s'",
"%",
"(",
"algorithm_name",
",",
"save_to_path",
",",
"e",
")",
")",
"if",
"os_path_exists",
"(",
"save_to_path",
")",
":",
"try",
":",
"plot",
"(",
"original_df",
"[",
"'value'",
"]",
",",
"anomaly",
"=",
"df",
"[",
"'anomalies'",
"]",
",",
"anomaly_color",
"=",
"'red'",
",",
"title",
"=",
"title",
",",
"save_to_file",
"=",
"save_to_file",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: %s :: plot saved to - %s'",
"%",
"(",
"algorithm_name",
",",
"save_to_file",
")",
")",
"anomalies_dict",
"[",
"'image'",
"]",
"=",
"save_to_file",
"except",
"Exception",
"as",
"e",
":",
"current_logger",
".",
"error",
"(",
"'error :: %s :: failed to plot - %s - %s'",
"%",
"(",
"algorithm_name",
",",
"base_name",
",",
"e",
")",
")",
"anomalies_file",
"=",
"'%s/%s.%s.anomalies_list.txt'",
"%",
"(",
"save_path",
",",
"algorithm_name",
",",
"base_name",
")",
"with",
"open",
"(",
"anomalies_file",
",",
"'w'",
")",
"as",
"fh",
":",
"fh",
".",
"write",
"(",
"str",
"(",
"anomalies_list",
")",
")",
"# os.chmod(anomalies_file, mode=0o644)",
"data_file",
"=",
"'%s/data.txt'",
"%",
"(",
"save_path",
")",
"with",
"open",
"(",
"data_file",
",",
"'w'",
")",
"as",
"fh",
":",
"fh",
".",
"write",
"(",
"str",
"(",
"anomalies_dict",
")",
")",
"except",
"SystemExit",
"as",
"e",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug_logging :: %s :: SystemExit called during save plot, exiting - %s'",
"%",
"(",
"algorithm_name",
",",
"e",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"except",
"Exception",
"as",
"e",
":",
"traceback_msg",
"=",
"traceback",
".",
"format_exc",
"(",
")",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback_msg",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"error",
"(",
"traceback_msg",
")",
"current_logger",
".",
"error",
"(",
"'error :: %s :: failed to plot or save anomalies file - %s - %s'",
"%",
"(",
"algorithm_name",
",",
"base_name",
",",
"e",
")",
")",
"try",
":",
"del",
"df",
"except",
":",
"pass",
"except",
"SystemExit",
"as",
"e",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug_logging :: %s :: SystemExit called, during analysis, exiting - %s'",
"%",
"(",
"algorithm_name",
",",
"e",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"except",
":",
"traceback_msg",
"=",
"traceback",
".",
"format_exc",
"(",
")",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback_msg",
")",
"if",
"debug_logging",
":",
"current_logger",
".",
"error",
"(",
"traceback_msg",
")",
"current_logger",
".",
"error",
"(",
"'error :: debug_logging :: %s :: failed to run on ts'",
"%",
"(",
"algorithm_name",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"end_analysis",
"=",
"timer",
"(",
")",
"analysis_runtime",
"=",
"end_analysis",
"-",
"start_analysis",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug :: analysis with %s took %.6f seconds'",
"%",
"(",
"algorithm_name",
",",
"analysis_runtime",
")",
")",
"if",
"anomalous",
":",
"anomalyScore",
"=",
"1.0",
"else",
":",
"anomalyScore",
"=",
"0.0",
"if",
"debug_logging",
":",
"current_logger",
".",
"info",
"(",
"'%s :: anomalous - %s, anomalyScore - %s'",
"%",
"(",
"algorithm_name",
",",
"str",
"(",
"anomalous",
")",
",",
"str",
"(",
"anomalyScore",
")",
")",
")",
"if",
"debug_logging",
":",
"end",
"=",
"timer",
"(",
")",
"processing_runtime",
"=",
"end",
"-",
"start",
"current_logger",
".",
"info",
"(",
"'%s :: completed in %.6f seconds'",
"%",
"(",
"algorithm_name",
",",
"processing_runtime",
")",
")",
"try",
":",
"del",
"timeseries",
"except",
":",
"pass",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"except",
"SystemExit",
"as",
"e",
":",
"if",
"debug_logging",
":",
"current_logger",
".",
"debug",
"(",
"'debug_logging :: %s :: SystemExit called (before StopIteration), exiting - %s'",
"%",
"(",
"algorithm_name",
",",
"e",
")",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")",
"except",
"StopIteration",
":",
"# This except pattern MUST be used in ALL custom algortihms to",
"# facilitate the traceback from any errors. The algorithm we want to",
"# run super fast and without spamming the log with lots of errors.",
"# But we do not want the function returning and not reporting",
"# anything to the log, so the pythonic except is used to \"sample\" any",
"# algorithm errors to a tmp file and report once per run rather than",
"# spewing tons of errors into the log e.g. analyzer.log",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"False",
",",
"None",
",",
"anomalies",
")",
"return",
"(",
"False",
",",
"None",
")",
"except",
":",
"record_algorithm_error",
"(",
"current_skyline_app",
",",
"parent_pid",
",",
"algorithm_name",
",",
"traceback",
".",
"format_exc",
"(",
")",
")",
"# Return None and None as the algorithm could not determine True or False",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"False",
",",
"None",
",",
"anomalies",
")",
"return",
"(",
"False",
",",
"None",
")",
"if",
"current_skyline_app",
"==",
"'webapp'",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
",",
"anomalies_dict",
")",
"if",
"return_anomalies",
":",
"return",
"(",
"anomalous",
",",
"anomalyScore",
",",
"anomalies",
")",
"return",
"(",
"anomalous",
",",
"anomalyScore",
")"
] | https://github.com/earthgecko/skyline/blob/12754424de72593e29eb21009fb1ae3f07f3abff/skyline/custom_algorithms/m66.py#L35-L859 |
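
The token stream above decodes to Skyline's m66 custom algorithm. As a reading aid, here is a minimal self-contained sketch of the core detection step only, not the Skyline implementation: Min-Max scale the series, apply a rolling median nth_median times, take the rolling standard deviation of the result, and flag values above n_sigma times the overall standard deviation, keeping only trigger runs longer than window / 2. The defaults (window=6, nth_median=6, n_sigma=6) are assumptions inferred from the comments in the tokens.

import numpy as np
import pandas as pd

def m66_sketch(values, window=6, nth_median=6, n_sigma=6):
    # Min-Max scale, as the original does before smoothing (the original
    # returns early for constant series, which would divide by zero here)
    x = np.asarray(values, dtype=float)
    s = pd.Series((x - x.min()) / (x.max() - x.min()))
    # m66 - apply the rolling median nth_median times
    for _ in range(nth_median):
        s = s.rolling(window).median()
    # moving standard deviation of the nth_median array
    std_nth_median = s.rolling(window).std().fillna(0).to_numpy()
    # overall standard deviation, used as the trigger baseline
    metric_stddev = np.std(std_nth_median)
    triggers = std_nth_median > (metric_stddev * n_sigma)
    # only keep trigger runs that persist for more than window / 2 points
    anomalous_indices, run = [], []
    for i, hit in enumerate(triggers):
        if hit:
            run.append(i)
            continue
        if len(run) > window // 2:
            anomalous_indices.extend(run)
        run = []
    if len(run) > window // 2:
        anomalous_indices.extend(run)
    return anomalous_indices
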
|
robotlearn/pyrobolearn | 9cd7c060723fda7d2779fa255ac998c2c82b8436 | pyrobolearn/simulators/gazebo_ros.py | python | GazeboROSEnv.get_obs | (self) | Return the observation.
Example:
The observation could be an image, while the state could be the position
(and velocity) of a target on the picture. The state is used to compute
the reward function. | Return the observation. | [
"Return",
"the",
"observation",
"."
] | def get_obs(self):
"""
Return the observation.
Example:
The observation could be an image, while the state could be the position
(and velocity) of a target on the picture. The state is used to compute
the reward function.
"""
raise NotImplementedError("This function needs to be overwritten...") | [
"def",
"get_obs",
"(",
"self",
")",
":",
"raise",
"NotImplementedError",
"(",
"\"This function needs to be overwritten...\"",
")"
] | https://github.com/robotlearn/pyrobolearn/blob/9cd7c060723fda7d2779fa255ac998c2c82b8436/pyrobolearn/simulators/gazebo_ros.py#L462-L471 |
||
JiYou/openstack | 8607dd488bde0905044b303eb6e52bdea6806923 | packages/source/nova/nova/network/floating_ips.py | python | FloatingIP._disassociate_floating_ip | (self, context, address, interface,
instance_uuid) | Performs db and driver calls to disassociate floating ip. | Performs db and driver calls to disassociate floating ip. | [
"Performs",
"db",
"and",
"driver",
"calls",
"to",
"disassociate",
"floating",
"ip",
"."
] | def _disassociate_floating_ip(self, context, address, interface,
instance_uuid):
"""Performs db and driver calls to disassociate floating ip."""
interface = CONF.public_interface or interface
@lockutils.synchronized(unicode(address), 'nova-')
def do_disassociate():
# NOTE(vish): Note that we are disassociating in the db before we
# actually remove the ip address on the host. We are
# safe from races on this host due to the decorator,
# but another host might grab the ip right away. We
# don't worry about this case because the minuscule
# window where the ip is on both hosts shouldn't cause
# any problems.
fixed = self.db.floating_ip_disassociate(context, address)
if not fixed:
# NOTE(vish): ip was already disassociated
return
if interface:
# go go driver time
self.l3driver.remove_floating_ip(address, fixed['address'],
interface, fixed['network'])
payload = dict(project_id=context.project_id,
instance_id=instance_uuid,
floating_ip=address)
notifier.notify(context,
notifier.publisher_id("network"),
'network.floating_ip.disassociate',
notifier.INFO, payload=payload)
do_disassociate() | [
"def",
"_disassociate_floating_ip",
"(",
"self",
",",
"context",
",",
"address",
",",
"interface",
",",
"instance_uuid",
")",
":",
"interface",
"=",
"CONF",
".",
"public_interface",
"or",
"interface",
"@",
"lockutils",
".",
"synchronized",
"(",
"unicode",
"(",
"address",
")",
",",
"'nova-'",
")",
"def",
"do_disassociate",
"(",
")",
":",
"# NOTE(vish): Note that we are disassociating in the db before we",
"# actually remove the ip address on the host. We are",
"# safe from races on this host due to the decorator,",
"# but another host might grab the ip right away. We",
"# don't worry about this case because the minuscule",
"# window where the ip is on both hosts shouldn't cause",
"# any problems.",
"fixed",
"=",
"self",
".",
"db",
".",
"floating_ip_disassociate",
"(",
"context",
",",
"address",
")",
"if",
"not",
"fixed",
":",
"# NOTE(vish): ip was already disassociated",
"return",
"if",
"interface",
":",
"# go go driver time",
"self",
".",
"l3driver",
".",
"remove_floating_ip",
"(",
"address",
",",
"fixed",
"[",
"'address'",
"]",
",",
"interface",
",",
"fixed",
"[",
"'network'",
"]",
")",
"payload",
"=",
"dict",
"(",
"project_id",
"=",
"context",
".",
"project_id",
",",
"instance_id",
"=",
"instance_uuid",
",",
"floating_ip",
"=",
"address",
")",
"notifier",
".",
"notify",
"(",
"context",
",",
"notifier",
".",
"publisher_id",
"(",
"\"network\"",
")",
",",
"'network.floating_ip.disassociate'",
",",
"notifier",
".",
"INFO",
",",
"payload",
"=",
"payload",
")",
"do_disassociate",
"(",
")"
] | https://github.com/JiYou/openstack/blob/8607dd488bde0905044b303eb6e52bdea6806923/packages/source/nova/nova/network/floating_ips.py#L439-L469 |
||
facebookresearch/mobile-vision | f40401a44e86bb3ba9c1b66e7700e15f96b880cb | mobile_cv/common/misc/py.py | python | dynamic_import | (obj_full_name) | return ret | Dynamically import an object (class or function or global variable).
Args:
obj_full_name: full name of the object, e.g. if ExampleClass/example_foo is defined
inside a/b.py, the obj_full_name is "a.b.ExampleClass" or "a.b.example_foo".
Returns:
The imported object. | Dynamically import an object (class or function or global variable). | [
"Dynamically",
"import",
"an",
"object",
"(",
"class",
"or",
"function",
"or",
"global",
"variable",
")",
"."
] | def dynamic_import(obj_full_name):
"""
Dynamically import an object (class or function or global variable).
Args:
obj_full_name: full name of the object, e.g. if ExampleClass/example_foo is defined
inside a/b.py, the obj_full_name is "a.b.ExampleClass" or "a.b.example_foo".
Returns:
The imported object.
"""
import importlib
import pydoc
ret = pydoc.locate(obj_full_name)
if ret is None:
# pydoc.locate imports in forward order, sometimes causing circular import,
# fallback to use importlib if pydoc.locate doesn't work
module_name, obj_name = obj_full_name.rsplit(".", 1)
module = importlib.import_module(module_name)
ret = getattr(module, obj_name)
return ret | [
"def",
"dynamic_import",
"(",
"obj_full_name",
")",
":",
"import",
"importlib",
"import",
"pydoc",
"ret",
"=",
"pydoc",
".",
"locate",
"(",
"obj_full_name",
")",
"if",
"ret",
"is",
"None",
":",
"# pydoc.locate imports in forward order, sometimes causing circular import,",
"# fallback to use importlib if pydoc.locate doesn't work",
"module_name",
",",
"obj_name",
"=",
"obj_full_name",
".",
"rsplit",
"(",
"\".\"",
",",
"1",
")",
"module",
"=",
"importlib",
".",
"import_module",
"(",
"module_name",
")",
"ret",
"=",
"getattr",
"(",
"module",
",",
"obj_name",
")",
"return",
"ret"
] | https://github.com/facebookresearch/mobile-vision/blob/f40401a44e86bb3ba9c1b66e7700e15f96b880cb/mobile_cv/common/misc/py.py#L37-L57 |
|
titusjan/argos | 5a9c31a8a9a2ca825bbf821aa1e685740e3682d7 | argos/repo/rtiplugins/exdir.py | python | ExdirScalarRti.unit | (self) | return dataSetUnit(self._exdirDataset) | Returns the unit of the RTI by calling dataSetUnit on the underlying dataset | Returns the unit of the RTI by calling dataSetUnit on the underlying dataset | [
"Returns",
"the",
"unit",
"of",
"the",
"RTI",
"by",
"calling",
"dataSetUnit",
"on",
"the",
"underlying",
"dataset"
] | def unit(self):
""" Returns the unit of the RTI by calling dataSetUnit on the underlying dataset
"""
return dataSetUnit(self._exdirDataset) | [
"def",
"unit",
"(",
"self",
")",
":",
"return",
"dataSetUnit",
"(",
"self",
".",
"_exdirDataset",
")"
] | https://github.com/titusjan/argos/blob/5a9c31a8a9a2ca825bbf821aa1e685740e3682d7/argos/repo/rtiplugins/exdir.py#L178-L181 |
|
mit-ll/LL-Fuzzer | 7c532a55cfd7dba9445afa39bd25574a320e1a69 | sulley/legos/dcerpc.py | python | ndr_conformant_array.render | (self) | return self.rendered | We overload and extend the render routine in order to properly pad and prefix the string.
[dword length][array][pad] | We overload and extend the render routine in order to properly pad and prefix the string. | [
"We",
"overload",
"and",
"extend",
"the",
"render",
"routine",
"in",
"order",
"to",
"properly",
"pad",
"and",
"prefix",
"the",
"string",
"."
] | def render (self):
'''
We overload and extend the render routine in order to properly pad and prefix the string.
[dword length][array][pad]
'''
# let the parent do the initial render.
blocks.block.render(self)
# encode the empty string correctly:
if self.rendered == "":
self.rendered = "\x00\x00\x00\x00"
else:
self.rendered = struct.pack("<L", len(self.rendered)) + self.rendered + ndr_pad(self.rendered)
return self.rendered | [
"def",
"render",
"(",
"self",
")",
":",
"# let the parent do the initial render.",
"blocks",
".",
"block",
".",
"render",
"(",
"self",
")",
"# encode the empty string correctly:",
"if",
"self",
".",
"rendered",
"==",
"\"\"",
":",
"self",
".",
"rendered",
"=",
"\"\\x00\\x00\\x00\\x00\"",
"else",
":",
"self",
".",
"rendered",
"=",
"struct",
".",
"pack",
"(",
"\"<L\"",
",",
"len",
"(",
"self",
".",
"rendered",
")",
")",
"+",
"self",
".",
"rendered",
"+",
"ndr_pad",
"(",
"self",
".",
"rendered",
")",
"return",
"self",
".",
"rendered"
] | https://github.com/mit-ll/LL-Fuzzer/blob/7c532a55cfd7dba9445afa39bd25574a320e1a69/sulley/legos/dcerpc.py#L33-L49 |
|
AnalogJ/lexicon | c7bedfed6ed34c96950954933b07ca3ce081d0e5 | lexicon/providers/namesilo.py | python | provider_parser | (subparser) | Configure provider parser for Namesilo | Configure provider parser for Namesilo | [
"Configure",
"provider",
"parser",
"for",
"Namesilo"
] | def provider_parser(subparser):
"""Configure provider parser for Namesilo"""
subparser.add_argument("--auth-token", help="specify key for authentication") | [
"def",
"provider_parser",
"(",
"subparser",
")",
":",
"subparser",
".",
"add_argument",
"(",
"\"--auth-token\"",
",",
"help",
"=",
"\"specify key for authentication\"",
")"
] | https://github.com/AnalogJ/lexicon/blob/c7bedfed6ed34c96950954933b07ca3ce081d0e5/lexicon/providers/namesilo.py#L15-L17 |
||
jgagneastro/coffeegrindsize | 22661ebd21831dba4cf32bfc6ba59fe3d49f879c | App/dist/coffeegrindsize.app/Contents/Resources/lib/python3.7/scipy/optimize/_hessian_update_strategy.py | python | HessianUpdateStrategy.dot | (self, p) | Compute the product of the internal matrix with the given vector.
Parameters
----------
p : array_like
1-d array representing a vector.
Returns
-------
Hp : array
1-d represents the result of multiplying the approximation matrix
by vector p. | Compute the product of the internal matrix with the given vector. | [
"Compute",
"the",
"product",
"of",
"the",
"internal",
"matrix",
"with",
"the",
"given",
"vector",
"."
] | def dot(self, p):
"""Compute the product of the internal matrix with the given vector.
Parameters
----------
p : array_like
1-d array representing a vector.
Returns
-------
Hp : array
1-d represents the result of multiplying the approximation matrix
by vector p.
"""
raise NotImplementedError("The method ``dot(p)``"
" is not implemented.") | [
"def",
"dot",
"(",
"self",
",",
"p",
")",
":",
"raise",
"NotImplementedError",
"(",
"\"The method ``dot(p)``\"",
"\" is not implemented.\"",
")"
] | https://github.com/jgagneastro/coffeegrindsize/blob/22661ebd21831dba4cf32bfc6ba59fe3d49f879c/App/dist/coffeegrindsize.app/Contents/Resources/lib/python3.7/scipy/optimize/_hessian_update_strategy.py#L73-L88 |
||
sametmax/Django--an-app-at-a-time | 99eddf12ead76e6dfbeb09ce0bae61e282e22f8a | ignore_this_directory/django/core/serializers/__init__.py | python | unregister_serializer | (format) | Unregister a given serializer. This is not a thread-safe operation. | Unregister a given serializer. This is not a thread-safe operation. | [
"Unregister",
"a",
"given",
"serializer",
".",
"This",
"is",
"not",
"a",
"thread",
"-",
"safe",
"operation",
"."
] | def unregister_serializer(format):
"Unregister a given serializer. This is not a thread-safe operation."
if not _serializers:
_load_serializers()
if format not in _serializers:
raise SerializerDoesNotExist(format)
del _serializers[format] | [
"def",
"unregister_serializer",
"(",
"format",
")",
":",
"if",
"not",
"_serializers",
":",
"_load_serializers",
"(",
")",
"if",
"format",
"not",
"in",
"_serializers",
":",
"raise",
"SerializerDoesNotExist",
"(",
"format",
")",
"del",
"_serializers",
"[",
"format",
"]"
] | https://github.com/sametmax/Django--an-app-at-a-time/blob/99eddf12ead76e6dfbeb09ce0bae61e282e22f8a/ignore_this_directory/django/core/serializers/__init__.py#L85-L91 |
||
ales-tsurko/cells | 4cf7e395cd433762bea70cdc863a346f3a6fe1d0 | packaging/macos/python/lib/python3.7/zipfile.py | python | ZipInfo.from_file | (cls, filename, arcname=None) | return zinfo | Construct an appropriate ZipInfo for a file on the filesystem.
filename should be the path to a file or directory on the filesystem.
arcname is the name which it will have within the archive (by default,
this will be the same as filename, but without a drive letter and with
leading path separators removed). | Construct an appropriate ZipInfo for a file on the filesystem. | [
"Construct",
"an",
"appropriate",
"ZipInfo",
"for",
"a",
"file",
"on",
"the",
"filesystem",
"."
] | def from_file(cls, filename, arcname=None):
"""Construct an appropriate ZipInfo for a file on the filesystem.
filename should be the path to a file or directory on the filesystem.
arcname is the name which it will have within the archive (by default,
this will be the same as filename, but without a drive letter and with
leading path separators removed).
"""
if isinstance(filename, os.PathLike):
filename = os.fspath(filename)
st = os.stat(filename)
isdir = stat.S_ISDIR(st.st_mode)
mtime = time.localtime(st.st_mtime)
date_time = mtime[0:6]
# Create ZipInfo instance to store file information
if arcname is None:
arcname = filename
arcname = os.path.normpath(os.path.splitdrive(arcname)[1])
while arcname[0] in (os.sep, os.altsep):
arcname = arcname[1:]
if isdir:
arcname += '/'
zinfo = cls(arcname, date_time)
zinfo.external_attr = (st.st_mode & 0xFFFF) << 16 # Unix attributes
if isdir:
zinfo.file_size = 0
zinfo.external_attr |= 0x10 # MS-DOS directory flag
else:
zinfo.file_size = st.st_size
return zinfo | [
"def",
"from_file",
"(",
"cls",
",",
"filename",
",",
"arcname",
"=",
"None",
")",
":",
"if",
"isinstance",
"(",
"filename",
",",
"os",
".",
"PathLike",
")",
":",
"filename",
"=",
"os",
".",
"fspath",
"(",
"filename",
")",
"st",
"=",
"os",
".",
"stat",
"(",
"filename",
")",
"isdir",
"=",
"stat",
".",
"S_ISDIR",
"(",
"st",
".",
"st_mode",
")",
"mtime",
"=",
"time",
".",
"localtime",
"(",
"st",
".",
"st_mtime",
")",
"date_time",
"=",
"mtime",
"[",
"0",
":",
"6",
"]",
"# Create ZipInfo instance to store file information",
"if",
"arcname",
"is",
"None",
":",
"arcname",
"=",
"filename",
"arcname",
"=",
"os",
".",
"path",
".",
"normpath",
"(",
"os",
".",
"path",
".",
"splitdrive",
"(",
"arcname",
")",
"[",
"1",
"]",
")",
"while",
"arcname",
"[",
"0",
"]",
"in",
"(",
"os",
".",
"sep",
",",
"os",
".",
"altsep",
")",
":",
"arcname",
"=",
"arcname",
"[",
"1",
":",
"]",
"if",
"isdir",
":",
"arcname",
"+=",
"'/'",
"zinfo",
"=",
"cls",
"(",
"arcname",
",",
"date_time",
")",
"zinfo",
".",
"external_attr",
"=",
"(",
"st",
".",
"st_mode",
"&",
"0xFFFF",
")",
"<<",
"16",
"# Unix attributes",
"if",
"isdir",
":",
"zinfo",
".",
"file_size",
"=",
"0",
"zinfo",
".",
"external_attr",
"|=",
"0x10",
"# MS-DOS directory flag",
"else",
":",
"zinfo",
".",
"file_size",
"=",
"st",
".",
"st_size",
"return",
"zinfo"
] | https://github.com/ales-tsurko/cells/blob/4cf7e395cd433762bea70cdc863a346f3a6fe1d0/packaging/macos/python/lib/python3.7/zipfile.py#L495-L526 |
|
demisto/content | 5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07 | Packs/Kafka/Integrations/Kafka_V2/Kafka_V2.py | python | check_latest_offset | (topic, partition_number=None) | return latest_offset - 1 | :param topic: topic to check the latest offset
:type topic: :class:`pykafka.topic.Topic`
:param partition_number: partition to take latest offset from
:type partition_number: int, str
:return latest_offset: last message offset
:rtype: int | :param topic: topic to check the latest offset
:type topic: :class:`pykafka.topic.Topic`
:param partition_number: partition to take latest offset from
:type partition_number: int, str
:return latest_offset: last message offset
:rtype: int | [
":",
"param",
"topic",
":",
"topic",
"to",
"check",
"the",
"latest",
"offset",
":",
"type",
"topic",
":",
":",
"class",
":",
"pykafka",
".",
"topic",
".",
"Topic",
":",
"param",
"partition_number",
":",
"partition",
"to",
"take",
"latest",
"offset",
"from",
":",
"type",
"partition_number",
":",
"int",
"str",
":",
"return",
"latest_offset",
":",
"last",
"message",
"offset",
":",
"rtype",
":",
"int"
] | def check_latest_offset(topic, partition_number=None):
"""
:param topic: topic to check the latest offset
:type topic: :class:`pykafka.topic.Topic`
:param partition_number: partition to take latest offset from
:type partition_number: int, str
:return latest_offset: last message offset
:rtype: int
"""
partitions = topic.latest_available_offsets()
latest_offset = 0
if partition_number is not None:
partition = partitions.get(str(partition_number))
if partition:
latest_offset = partition[0][0]
else:
return_error('Partition does not exist')
else:
for partition in partitions.values():
if latest_offset < partition[0][0]:
latest_offset = partition[0][0]
return latest_offset - 1 | [
"def",
"check_latest_offset",
"(",
"topic",
",",
"partition_number",
"=",
"None",
")",
":",
"partitions",
"=",
"topic",
".",
"latest_available_offsets",
"(",
")",
"latest_offset",
"=",
"0",
"if",
"partition_number",
"is",
"not",
"None",
":",
"partition",
"=",
"partitions",
".",
"get",
"(",
"str",
"(",
"partition_number",
")",
")",
"if",
"partitions",
":",
"latest_offset",
"=",
"partition",
"[",
"0",
"]",
"[",
"0",
"]",
"else",
":",
"return_error",
"(",
"'Partition does not exist'",
")",
"else",
":",
"for",
"partition",
"in",
"partitions",
".",
"values",
"(",
")",
":",
"if",
"latest_offset",
"<",
"partition",
"[",
"0",
"]",
"[",
"0",
"]",
":",
"latest_offset",
"=",
"partition",
"[",
"0",
"]",
"[",
"0",
"]",
"return",
"latest_offset",
"-",
"1"
] | https://github.com/demisto/content/blob/5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07/Packs/Kafka/Integrations/Kafka_V2/Kafka_V2.py#L104-L125 |
|
virus-warnning/twnews | 4c7ef436018480d07b5f3f5f474f3843af46eb99 | bin/publish.py | python | wheel_check | () | | | Check that the wheel works correctly in each Python version environment | Check that the wheel works correctly in each Python version environment | [
"Check",
"that",
"the",
"wheel",
"works",
"correctly",
"in",
"each",
"Python",
"version",
"environment"
] | def wheel_check():
""" Check that the wheel works correctly in each Python version environment """
"""
print('檢查 logging.ini')
config = configparser.ConfigParser()
config.read('twnews/conf/logging.ini')
if config['handler_stdout']['level'] != 'CRITICAL':
print('handler_stdout 忘記切換成 CRITICAL level')
exit(1)
"""
print('檢查程式碼品質')
ret = os.system('pylint -f colorized twnews')
if ret != 0:
print('檢查沒通過,停止封裝')
exit(ret)
print('檢查 README.rst')
ret = os.system('rstcheck README.rst')
if ret != 0:
print('檢查沒通過,停止封裝')
exit(ret)
print('偵測可用的測試環境')
os.system('rm -rf sandbox/*')
wheel = get_wheel()
latest_python = get_latest_python()
if len(latest_python) == 0:
print('沒有任何可用的測試環境')
exit(1)
for pyver in latest_python:
print('測試 Python %s' % pyver)
test_in_virtualenv(pyver, wheel) | [
"def",
"wheel_check",
"(",
")",
":",
"\"\"\"\n print('檢查 logging.ini')\n config = configparser.ConfigParser()\n config.read('twnews/conf/logging.ini')\n if config['handler_stdout']['level'] != 'CRITICAL':\n print('handler_stdout 忘記切換成 CRITICAL level')\n exit(1)\n \"\"\"",
"print",
"(",
"'檢查程式碼品質')",
"",
"ret",
"=",
"os",
".",
"system",
"(",
"'pylint -f colorized twnews'",
")",
"if",
"ret",
"!=",
"0",
":",
"print",
"(",
"'檢查沒通過,停止封裝')",
"",
"exit",
"(",
"ret",
")",
"print",
"(",
"'檢查 README.rst')",
"",
"ret",
"=",
"os",
".",
"system",
"(",
"'rstcheck README.rst'",
")",
"if",
"ret",
"!=",
"0",
":",
"print",
"(",
"'檢查沒通過,停止封裝')",
"",
"exit",
"(",
"ret",
")",
"print",
"(",
"'偵測可用的測試環境')",
"",
"os",
".",
"system",
"(",
"'rm -rf sandbox/*'",
")",
"wheel",
"=",
"get_wheel",
"(",
")",
"latest_python",
"=",
"get_latest_python",
"(",
")",
"if",
"len",
"(",
"latest_python",
")",
"==",
"0",
":",
"print",
"(",
"'沒有任何可用的測試環境')",
"",
"exit",
"(",
"1",
")",
"for",
"pyver",
"in",
"latest_python",
":",
"print",
"(",
"'測試 Python %s' % p",
"v",
"r)",
"",
"test_in_virtualenv",
"(",
"pyver",
",",
"wheel",
")"
] | https://github.com/virus-warnning/twnews/blob/4c7ef436018480d07b5f3f5f474f3843af46eb99/bin/publish.py#L81-L115 |
||
XKNX/xknx | 1deeeb3dc0978aebacf14492a84e1f1eaf0970ed | xknx/devices/cover.py | python | Cover.supports_angle | (self) | return self.angle.initialized | Return if cover supports tilt angle. | Return if cover supports tilt angle. | [
"Return",
"if",
"cover",
"supports",
"tilt",
"angle",
"."
] | def supports_angle(self) -> bool:
"""Return if cover supports tilt angle."""
return self.angle.initialized | [
"def",
"supports_angle",
"(",
"self",
")",
"->",
"bool",
":",
"return",
"self",
".",
"angle",
".",
"initialized"
] | https://github.com/XKNX/xknx/blob/1deeeb3dc0978aebacf14492a84e1f1eaf0970ed/xknx/devices/cover.py#L357-L359 |
|
oilshell/oil | 94388e7d44a9ad879b12615f6203b38596b5a2d3 | core/state.py | python | MutableOpts.SetShoptOption | (self, opt_name, b) | For shopt -s/-u and sh -O/+O. | For shopt -s/-u and sh -O/+O. | [
"For",
"shopt",
"-",
"s",
"/",
"-",
"u",
"and",
"sh",
"-",
"O",
"/",
"+",
"O",
"."
] | def SetShoptOption(self, opt_name, b):
# type: (str, bool) -> None
""" For shopt -s/-u and sh -O/+O. """
# shopt -s all:oil turns on all Oil options, which includes all strict #
# options
if opt_name == 'oil:basic':
self._SetGroup(consts.OIL_BASIC, b)
self.SetDeferredErrExit(b) # Special case
return
if opt_name == 'oil:all':
self._SetGroup(consts.OIL_ALL, b)
self.SetDeferredErrExit(b) # Special case
return
if opt_name == 'strict:all':
self._SetGroup(consts.STRICT_ALL, b)
return
opt_num = _ShoptOptionNum(opt_name)
if opt_num == option_i.errexit:
self.SetDeferredErrExit(b)
return
self._SetArrayByNum(opt_num, b) | [
"def",
"SetShoptOption",
"(",
"self",
",",
"opt_name",
",",
"b",
")",
":",
"# type: (str, bool) -> None",
"# shopt -s all:oil turns on all Oil options, which includes all strict #",
"# options",
"if",
"opt_name",
"==",
"'oil:basic'",
":",
"self",
".",
"_SetGroup",
"(",
"consts",
".",
"OIL_BASIC",
",",
"b",
")",
"self",
".",
"SetDeferredErrExit",
"(",
"b",
")",
"# Special case",
"return",
"if",
"opt_name",
"==",
"'oil:all'",
":",
"self",
".",
"_SetGroup",
"(",
"consts",
".",
"OIL_ALL",
",",
"b",
")",
"self",
".",
"SetDeferredErrExit",
"(",
"b",
")",
"# Special case",
"return",
"if",
"opt_name",
"==",
"'strict:all'",
":",
"self",
".",
"_SetGroup",
"(",
"consts",
".",
"STRICT_ALL",
",",
"b",
")",
"return",
"opt_num",
"=",
"_ShoptOptionNum",
"(",
"opt_name",
")",
"if",
"opt_num",
"==",
"option_i",
".",
"errexit",
":",
"self",
".",
"SetDeferredErrExit",
"(",
"b",
")",
"return",
"self",
".",
"_SetArrayByNum",
"(",
"opt_num",
",",
"b",
")"
] | https://github.com/oilshell/oil/blob/94388e7d44a9ad879b12615f6203b38596b5a2d3/core/state.py#L523-L549 |
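
A reduced sketch of the dispatch shape above with hypothetical group members (the real option groups live in consts, and errexit is routed through a deferred setter so the change can take effect later):

GROUPS = {"strict:all": ["strict_array", "strict_arith"]}  # hypothetical members

def set_shopt_option(opts, defer_errexit, opt_name, b):
    if opt_name in GROUPS:
        for name in GROUPS[opt_name]:  # group names fan out to many options
            opts[name] = b
        return
    if opt_name == "errexit":
        defer_errexit(b)  # special case: applied later, not immediately
        return
    opts[opt_name] = b
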
||
scikit-hep/awkward-0.x | dd885bef15814f588b58944d2505296df4aaae0e | awkward0/array/masked.py | python | MaskedArray.counts | (self) | return out | [] | def counts(self):
self._valid()
content = self._util_counts(self._content)
out = self.numpy.full(self.shape, -1, dtype=content.dtype)
mask = self.boolmask(maskedwhen=False)
out[mask] = content[mask]
return out | [
"def",
"counts",
"(",
"self",
")",
":",
"self",
".",
"_valid",
"(",
")",
"content",
"=",
"self",
".",
"_util_counts",
"(",
"self",
".",
"_content",
")",
"out",
"=",
"self",
".",
"numpy",
".",
"full",
"(",
"self",
".",
"shape",
",",
"-",
"1",
",",
"dtype",
"=",
"content",
".",
"dtype",
")",
"mask",
"=",
"self",
".",
"boolmask",
"(",
"maskedwhen",
"=",
"False",
")",
"out",
"[",
"mask",
"]",
"=",
"content",
"[",
"mask",
"]",
"return",
"out"
] | https://github.com/scikit-hep/awkward-0.x/blob/dd885bef15814f588b58944d2505296df4aaae0e/awkward0/array/masked.py#L256-L262 |
|||
oracle/oci-python-sdk | 3c1604e4e212008fb6718e2f68cdb5ef71fd5793 | src/oci/data_safe/data_safe_client.py | python | DataSafeClient.__init__ | (self, config, **kwargs) | Creates a new service client
:param dict config:
Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__.
The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. You can validate_config
the dict using :py:meth:`~oci.config.validate_config`
:param str service_endpoint: (optional)
The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is
not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit
need to specify a service endpoint.
:param timeout: (optional)
The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided
as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If
a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout.
:type timeout: float or tuple(float, float)
:param signer: (optional)
The signer to use when signing requests made by the service client. The default is to use a :py:class:`~oci.signer.Signer` based on the values
provided in the config parameter.
One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__
by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument
:type signer: :py:class:`~oci.signer.AbstractBaseSigner`
:param obj retry_strategy: (optional)
A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default.
Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation.
Any value provided at the operation level will override whatever is specified at the client level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
:param obj circuit_breaker_strategy: (optional)
A circuit breaker strategy to apply to all calls made by this service client (i.e. at the client level).
This client uses :py:data:`~oci.circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY` as default if no circuit breaker strategy is provided.
The specifics of circuit breaker strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/circuit_breakers.html>`__.
:param function circuit_breaker_callback: (optional)
Callback function to receive any exceptions triggered by the circuit breaker. | Creates a new service client | [
"Creates",
"a",
"new",
"service",
"client"
] | def __init__(self, config, **kwargs):
"""
Creates a new service client
:param dict config:
Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__.
The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. You can validate
the dict using :py:meth:`~oci.config.validate_config`
:param str service_endpoint: (optional)
The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is
not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit
need to specify a service endpoint.
:param timeout: (optional)
The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided
as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If
a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout.
:type timeout: float or tuple(float, float)
:param signer: (optional)
The signer to use when signing requests made by the service client. The default is to use a :py:class:`~oci.signer.Signer` based on the values
provided in the config parameter.
One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__
by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument
:type signer: :py:class:`~oci.signer.AbstractBaseSigner`
:param obj retry_strategy: (optional)
A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default.
Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation.
Any value provided at the operation level will override whatever is specified at the client level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
:param obj circuit_breaker_strategy: (optional)
A circuit breaker strategy to apply to all calls made by this service client (i.e. at the client level).
This client uses :py:data:`~oci.circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY` as default if no circuit breaker strategy is provided.
The specifics of circuit breaker strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/circuit_breakers.html>`__.
:param function circuit_breaker_callback: (optional)
Callback function to receive any exceptions triggered by the circuit breaker.
"""
validate_config(config, signer=kwargs.get('signer'))
if 'signer' in kwargs:
signer = kwargs['signer']
elif AUTHENTICATION_TYPE_FIELD_NAME in config:
signer = get_signer_from_authentication_type(config)
else:
signer = Signer(
tenancy=config["tenancy"],
user=config["user"],
fingerprint=config["fingerprint"],
private_key_file_location=config.get("key_file"),
pass_phrase=get_config_value_or_default(config, "pass_phrase"),
private_key_content=config.get("key_content")
)
base_client_init_kwargs = {
'regional_client': True,
'service_endpoint': kwargs.get('service_endpoint'),
'base_path': '/20181201',
'service_endpoint_template': 'https://datasafe.{region}.oci.{secondLevelDomain}',
'skip_deserialization': kwargs.get('skip_deserialization', False),
'circuit_breaker_strategy': kwargs.get('circuit_breaker_strategy', circuit_breaker.GLOBAL_CIRCUIT_BREAKER_STRATEGY)
}
if 'timeout' in kwargs:
base_client_init_kwargs['timeout'] = kwargs.get('timeout')
if base_client_init_kwargs.get('circuit_breaker_strategy') is None:
base_client_init_kwargs['circuit_breaker_strategy'] = circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY
self.base_client = BaseClient("data_safe", config, signer, data_safe_type_mapping, **base_client_init_kwargs)
self.retry_strategy = kwargs.get('retry_strategy')
self.circuit_breaker_callback = kwargs.get('circuit_breaker_callback') | [
"def",
"__init__",
"(",
"self",
",",
"config",
",",
"*",
"*",
"kwargs",
")",
":",
"validate_config",
"(",
"config",
",",
"signer",
"=",
"kwargs",
".",
"get",
"(",
"'signer'",
")",
")",
"if",
"'signer'",
"in",
"kwargs",
":",
"signer",
"=",
"kwargs",
"[",
"'signer'",
"]",
"elif",
"AUTHENTICATION_TYPE_FIELD_NAME",
"in",
"config",
":",
"signer",
"=",
"get_signer_from_authentication_type",
"(",
"config",
")",
"else",
":",
"signer",
"=",
"Signer",
"(",
"tenancy",
"=",
"config",
"[",
"\"tenancy\"",
"]",
",",
"user",
"=",
"config",
"[",
"\"user\"",
"]",
",",
"fingerprint",
"=",
"config",
"[",
"\"fingerprint\"",
"]",
",",
"private_key_file_location",
"=",
"config",
".",
"get",
"(",
"\"key_file\"",
")",
",",
"pass_phrase",
"=",
"get_config_value_or_default",
"(",
"config",
",",
"\"pass_phrase\"",
")",
",",
"private_key_content",
"=",
"config",
".",
"get",
"(",
"\"key_content\"",
")",
")",
"base_client_init_kwargs",
"=",
"{",
"'regional_client'",
":",
"True",
",",
"'service_endpoint'",
":",
"kwargs",
".",
"get",
"(",
"'service_endpoint'",
")",
",",
"'base_path'",
":",
"'/20181201'",
",",
"'service_endpoint_template'",
":",
"'https://datasafe.{region}.oci.{secondLevelDomain}'",
",",
"'skip_deserialization'",
":",
"kwargs",
".",
"get",
"(",
"'skip_deserialization'",
",",
"False",
")",
",",
"'circuit_breaker_strategy'",
":",
"kwargs",
".",
"get",
"(",
"'circuit_breaker_strategy'",
",",
"circuit_breaker",
".",
"GLOBAL_CIRCUIT_BREAKER_STRATEGY",
")",
"}",
"if",
"'timeout'",
"in",
"kwargs",
":",
"base_client_init_kwargs",
"[",
"'timeout'",
"]",
"=",
"kwargs",
".",
"get",
"(",
"'timeout'",
")",
"if",
"base_client_init_kwargs",
".",
"get",
"(",
"'circuit_breaker_strategy'",
")",
"is",
"None",
":",
"base_client_init_kwargs",
"[",
"'circuit_breaker_strategy'",
"]",
"=",
"circuit_breaker",
".",
"DEFAULT_CIRCUIT_BREAKER_STRATEGY",
"self",
".",
"base_client",
"=",
"BaseClient",
"(",
"\"data_safe\"",
",",
"config",
",",
"signer",
",",
"data_safe_type_mapping",
",",
"*",
"*",
"base_client_init_kwargs",
")",
"self",
".",
"retry_strategy",
"=",
"kwargs",
".",
"get",
"(",
"'retry_strategy'",
")",
"self",
".",
"circuit_breaker_callback",
"=",
"kwargs",
".",
"get",
"(",
"'circuit_breaker_callback'",
")"
] | https://github.com/oracle/oci-python-sdk/blob/3c1604e4e212008fb6718e2f68cdb5ef71fd5793/src/oci/data_safe/data_safe_client.py#L24-L99 |
||
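A minimal sketch of constructing the client documented above, assuming a standard ~/.oci/config file exists; the retry strategy shown is the convenience default that the docstring itself mentions:

    import oci

    config = oci.config.from_file()          # reads ~/.oci/config by default
    client = oci.data_safe.DataSafeClient(
        config,
        retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,
    )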
number5/cloud-init | 19948dbaf40309355e1a2dbef116efb0ce66245c | cloudinit/distros/debian.py | python | Distro.get_locale | (self) | return (
self.system_locale if self.system_locale else self.default_locale
) | Return the default locale if set, else use default locale | Return the default locale if set, else use default locale | [
"Return",
"the",
"default",
"locale",
"if",
"set",
"else",
"use",
"default",
"locale"
] | def get_locale(self):
"""Return the default locale if set, else use default locale"""
# read system locale value
if not self.system_locale:
self.system_locale = read_system_locale()
# Return system_locale setting if valid, else use default locale
return (
self.system_locale if self.system_locale else self.default_locale
) | [
"def",
"get_locale",
"(",
"self",
")",
":",
"# read system locale value",
"if",
"not",
"self",
".",
"system_locale",
":",
"self",
".",
"system_locale",
"=",
"read_system_locale",
"(",
")",
"# Return system_locale setting if valid, else use default locale",
"return",
"(",
"self",
".",
"system_locale",
"if",
"self",
".",
"system_locale",
"else",
"self",
".",
"default_locale",
")"
] | https://github.com/number5/cloud-init/blob/19948dbaf40309355e1a2dbef116efb0ce66245c/cloudinit/distros/debian.py#L88-L98 |
|
MasoniteFramework/masonite | faa448377916e9e0f618ea6bdc82330fa6604efc | src/masonite/response/response.py | python | Response.status | (self, status) | return self | Set the HTTP status code.
Arguments:
status {string|integer} -- A string or integer with the standardized status code
Returns:
self | Set the HTTP status code. | [
"Set",
"the",
"HTTP",
"status",
"code",
"."
] | def status(self, status):
"""Set the HTTP status code.
Arguments:
status {string|integer} -- A string or integer with the standardized status code
Returns:
self
"""
if isinstance(status, str):
self._status = status
elif isinstance(status, int):
try:
self._status = self.statuses[status]
except KeyError:
raise InvalidHTTPStatusCode
return self | [
"def",
"status",
"(",
"self",
",",
"status",
")",
":",
"if",
"isinstance",
"(",
"status",
",",
"str",
")",
":",
"self",
".",
"_status",
"=",
"status",
"elif",
"isinstance",
"(",
"status",
",",
"int",
")",
":",
"try",
":",
"self",
".",
"_status",
"=",
"self",
".",
"statuses",
"[",
"status",
"]",
"except",
"KeyError",
":",
"raise",
"InvalidHTTPStatusCode",
"return",
"self"
] | https://github.com/MasoniteFramework/masonite/blob/faa448377916e9e0f618ea6bdc82330fa6604efc/src/masonite/response/response.py#L87-L103 |
|
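The setter above accepts either a ready-made status string or an integer that is looked up in the instance's status table. A self-contained analogue of that dispatch logic, with a made-up table standing in for Response.statuses:

    statuses = {200: "200 OK", 404: "404 Not Found"}   # stand-in for self.statuses

    def set_status(status):
        if isinstance(status, str):
            return status                   # strings are stored verbatim
        try:
            return statuses[status]         # ints map through the table
        except KeyError:
            raise ValueError("invalid HTTP status code")

    print(set_status(404))        # 404 Not Found
    print(set_status("200 OK"))   # 200 OK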
google-research/electra | 8a46635f32083ada044d7e9ad09604742600ee7b | finetune/preprocessing.py | python | Preprocessor._serialize_dataset | (self, tasks, is_training, split) | return input_fn, steps | Write out the dataset as tfrecords. | Write out the dataset as tfrecords. | [
"Write",
"out",
"the",
"dataset",
"as",
"tfrecords",
"."
] | def _serialize_dataset(self, tasks, is_training, split):
"""Write out the dataset as tfrecords."""
dataset_name = "_".join(sorted([task.name for task in tasks]))
dataset_name += "_" + split
dataset_prefix = os.path.join(
self._config.preprocessed_data_dir, dataset_name)
tfrecords_path = dataset_prefix + ".tfrecord"
metadata_path = dataset_prefix + ".metadata"
batch_size = (self._config.train_batch_size if is_training else
self._config.eval_batch_size)
utils.log("Loading dataset", dataset_name)
n_examples = None
if (self._config.use_tfrecords_if_existing and
tf.io.gfile.exists(metadata_path)):
n_examples = utils.load_json(metadata_path)["n_examples"]
if n_examples is None:
utils.log("Existing tfrecords not found so creating")
examples = []
for task in tasks:
task_examples = task.get_examples(split)
examples += task_examples
if is_training:
random.shuffle(examples)
utils.mkdir(tfrecords_path.rsplit("/", 1)[0])
n_examples = self.serialize_examples(
examples, is_training, tfrecords_path, batch_size)
utils.write_json({"n_examples": n_examples}, metadata_path)
input_fn = self._input_fn_builder(tfrecords_path, is_training)
if is_training:
steps = int(n_examples // batch_size * self._config.num_train_epochs)
else:
steps = n_examples // batch_size
return input_fn, steps | [
"def",
"_serialize_dataset",
"(",
"self",
",",
"tasks",
",",
"is_training",
",",
"split",
")",
":",
"dataset_name",
"=",
"\"_\"",
".",
"join",
"(",
"sorted",
"(",
"[",
"task",
".",
"name",
"for",
"task",
"in",
"tasks",
"]",
")",
")",
"dataset_name",
"+=",
"\"_\"",
"+",
"split",
"dataset_prefix",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"_config",
".",
"preprocessed_data_dir",
",",
"dataset_name",
")",
"tfrecords_path",
"=",
"dataset_prefix",
"+",
"\".tfrecord\"",
"metadata_path",
"=",
"dataset_prefix",
"+",
"\".metadata\"",
"batch_size",
"=",
"(",
"self",
".",
"_config",
".",
"train_batch_size",
"if",
"is_training",
"else",
"self",
".",
"_config",
".",
"eval_batch_size",
")",
"utils",
".",
"log",
"(",
"\"Loading dataset\"",
",",
"dataset_name",
")",
"n_examples",
"=",
"None",
"if",
"(",
"self",
".",
"_config",
".",
"use_tfrecords_if_existing",
"and",
"tf",
".",
"io",
".",
"gfile",
".",
"exists",
"(",
"metadata_path",
")",
")",
":",
"n_examples",
"=",
"utils",
".",
"load_json",
"(",
"metadata_path",
")",
"[",
"\"n_examples\"",
"]",
"if",
"n_examples",
"is",
"None",
":",
"utils",
".",
"log",
"(",
"\"Existing tfrecords not found so creating\"",
")",
"examples",
"=",
"[",
"]",
"for",
"task",
"in",
"tasks",
":",
"task_examples",
"=",
"task",
".",
"get_examples",
"(",
"split",
")",
"examples",
"+=",
"task_examples",
"if",
"is_training",
":",
"random",
".",
"shuffle",
"(",
"examples",
")",
"utils",
".",
"mkdir",
"(",
"tfrecords_path",
".",
"rsplit",
"(",
"\"/\"",
",",
"1",
")",
"[",
"0",
"]",
")",
"n_examples",
"=",
"self",
".",
"serialize_examples",
"(",
"examples",
",",
"is_training",
",",
"tfrecords_path",
",",
"batch_size",
")",
"utils",
".",
"write_json",
"(",
"{",
"\"n_examples\"",
":",
"n_examples",
"}",
",",
"metadata_path",
")",
"input_fn",
"=",
"self",
".",
"_input_fn_builder",
"(",
"tfrecords_path",
",",
"is_training",
")",
"if",
"is_training",
":",
"steps",
"=",
"int",
"(",
"n_examples",
"//",
"batch_size",
"*",
"self",
".",
"_config",
".",
"num_train_epochs",
")",
"else",
":",
"steps",
"=",
"n_examples",
"//",
"batch_size",
"return",
"input_fn",
",",
"steps"
] | https://github.com/google-research/electra/blob/8a46635f32083ada044d7e9ad09604742600ee7b/finetune/preprocessing.py#L56-L92 |
|
jzlianglu/pykaldi2 | 4d31968f8dff7cccf6a8395b7e69005ae3b2b30a | reader/stream.py | python | SpeechDataStream.utt_id2spk_id | (self, utt_id) | | Different corpora usually have different ways to convert utt_id to spk_id,
so we should overload the class and provide the implementation in the subclasses. | Different corpora usually have different ways to convert utt_id to spk_id,
so we should overload the class and provide the implementation in the subclasses. | [
"Different",
"corpora",
"usually",
"have",
"different",
"ways",
"to",
"convert",
"utt_id",
"to",
"spk_id",
"so",
"we",
"should",
"overload",
"the",
"class",
"and",
"provide",
"the",
"implementation",
"in",
"the",
"subclasses",
"."
] | def utt_id2spk_id(self, utt_id):
"""Different corpora usually have different ways to convert utt_id to spk_id,
so we should overload the class and provide the implementation in the subclasses."""
pass | [
"def",
"utt_id2spk_id",
"(",
"self",
",",
"utt_id",
")",
":",
"pass"
] | https://github.com/jzlianglu/pykaldi2/blob/4d31968f8dff7cccf6a8395b7e69005ae3b2b30a/reader/stream.py#L483-L486 |
||
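The method above is an abstract hook meant to be overridden. A hypothetical subclass might map LibriSpeech-style utterance ids (e.g. "103-1240-0000") to speakers like this; the class name and id format are assumptions for illustration, not part of the original:

    from reader.stream import SpeechDataStream

    class LibriSpeechDataStream(SpeechDataStream):
        def utt_id2spk_id(self, utt_id):
            # the speaker id is the first dash-separated field
            return utt_id.split("-")[0]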
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_flaskbb/lib/python2.7/site-packages/PIL/Image.py | python | Image._dump | (self, file=None, format=None, **options) | return filename | [] | def _dump(self, file=None, format=None, **options):
import tempfile
suffix = ''
if format:
suffix = '.'+format
if not file:
f, filename = tempfile.mkstemp(suffix)
os.close(f)
else:
filename = file
if not filename.endswith(suffix):
filename = filename + suffix
self.load()
if not format or format == "PPM":
self.im.save_ppm(filename)
else:
self.save(filename, format, **options)
return filename | [
"def",
"_dump",
"(",
"self",
",",
"file",
"=",
"None",
",",
"format",
"=",
"None",
",",
"*",
"*",
"options",
")",
":",
"import",
"tempfile",
"suffix",
"=",
"''",
"if",
"format",
":",
"suffix",
"=",
"'.'",
"+",
"format",
"if",
"not",
"file",
":",
"f",
",",
"filename",
"=",
"tempfile",
".",
"mkstemp",
"(",
"suffix",
")",
"os",
".",
"close",
"(",
"f",
")",
"else",
":",
"filename",
"=",
"file",
"if",
"not",
"filename",
".",
"endswith",
"(",
"suffix",
")",
":",
"filename",
"=",
"filename",
"+",
"suffix",
"self",
".",
"load",
"(",
")",
"if",
"not",
"format",
"or",
"format",
"==",
"\"PPM\"",
":",
"self",
".",
"im",
".",
"save_ppm",
"(",
"filename",
")",
"else",
":",
"self",
".",
"save",
"(",
"filename",
",",
"format",
",",
"*",
"*",
"options",
")",
"return",
"filename"
] | https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/lib/python2.7/site-packages/PIL/Image.py#L606-L628 |
|||
ifwe/digsby | f5fe00244744aa131e07f09348d10563f3d8fa99 | digsby/src/gui/imwin/imwin_ctrl.py | python | ImWinCtrl.message | (self, messageobj, convo = None, mode = 'im', meta = None) | Called by imhub.py with incoming messages. | Called by imhub.py with incoming messages. | [
"Called",
"by",
"imhub",
".",
"py",
"with",
"incoming",
"messages",
"."
] | def message(self, messageobj, convo = None, mode = 'im', meta = None):
"Called by imhub.py with incoming messages."
info('%r', self)
info_s(' messageobj: %r', messageobj)
info(' convo: %r', convo)
info(' mode: %r', mode)
info(' meta: %r', meta)
assert wx.IsMainThread()
if messageobj is None:
# starting a new conversation--no message
self.set_conversation(convo)
self.set_mode(mode)
self.IMControl.SetConvo(convo)
if convo.ischat:
self.show_roomlist(True)
elif (messageobj.get('sms', False) or getattr(convo.buddy, 'sms', None)) and not profile.blist.on_buddylist(convo.buddy):
# an incoming SMS message
if self.convo is None:
self.set_conversation(convo)
if self.mode != 'sms':
self.set_mode('sms')
# just show it
self.show_message(messageobj)
else:
convo = messageobj.conversation
if self.mode is None:
self.set_mode(mode)
self.show_message(messageobj)
self.set_conversation(convo) | [
"def",
"message",
"(",
"self",
",",
"messageobj",
",",
"convo",
"=",
"None",
",",
"mode",
"=",
"'im'",
",",
"meta",
"=",
"None",
")",
":",
"info",
"(",
"'%r'",
",",
"self",
")",
"info_s",
"(",
"' messageobj: %r'",
",",
"messageobj",
")",
"info",
"(",
"' convo: %r'",
",",
"convo",
")",
"info",
"(",
"' mode: %r'",
",",
"mode",
")",
"info",
"(",
"' meta: %r'",
",",
"meta",
")",
"assert",
"wx",
".",
"IsMainThread",
"(",
")",
"if",
"messageobj",
"is",
"None",
":",
"# starting a new conversation--no message",
"self",
".",
"set_conversation",
"(",
"convo",
")",
"self",
".",
"set_mode",
"(",
"mode",
")",
"self",
".",
"IMControl",
".",
"SetConvo",
"(",
"convo",
")",
"if",
"convo",
".",
"ischat",
":",
"self",
".",
"show_roomlist",
"(",
"True",
")",
"elif",
"(",
"messageobj",
".",
"get",
"(",
"'sms'",
",",
"False",
")",
"or",
"getattr",
"(",
"convo",
".",
"buddy",
",",
"'sms'",
",",
"None",
")",
")",
"and",
"not",
"profile",
".",
"blist",
".",
"on_buddylist",
"(",
"convo",
".",
"buddy",
")",
":",
"# an incoming SMS message",
"if",
"self",
".",
"convo",
"is",
"None",
":",
"self",
".",
"set_conversation",
"(",
"convo",
")",
"if",
"self",
".",
"mode",
"!=",
"'sms'",
":",
"self",
".",
"set_mode",
"(",
"'sms'",
")",
"# just show it",
"self",
".",
"show_message",
"(",
"messageobj",
")",
"else",
":",
"convo",
"=",
"messageobj",
".",
"conversation",
"if",
"self",
".",
"mode",
"is",
"None",
":",
"self",
".",
"set_mode",
"(",
"mode",
")",
"self",
".",
"show_message",
"(",
"messageobj",
")",
"self",
".",
"set_conversation",
"(",
"convo",
")"
] | https://github.com/ifwe/digsby/blob/f5fe00244744aa131e07f09348d10563f3d8fa99/digsby/src/gui/imwin/imwin_ctrl.py#L73-L108 |
||
blasty/moneyshot | 0541356cca38e57ec03a30b6dff1d38c0c7dfd00 | shell.py | python | main | (args) | [] | def main(args):
if (len(args) != 2):
print "usage: moneyshot shell <host> <port>"
exit()
target = (args[0], int(args[1]))
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(target)
old_settings = termios.tcgetattr(0)
try:
tty.setcbreak(0)
c = True
while c:
for i in select.select([0, s.fileno()], [], [], 0)[0]:
c = os.read(i, 1024)
if c: os.write(s.fileno() if i == 0 else 1, c)
except KeyboardInterrupt: pass
finally: termios.tcsetattr(0, termios.TCSADRAIN, old_settings) | [
"def",
"main",
"(",
"args",
")",
":",
"if",
"(",
"len",
"(",
"args",
")",
"!=",
"2",
")",
":",
"print",
"\"usage: moneyshot shell <host> <port>\"",
"exit",
"(",
")",
"target",
"=",
"(",
"args",
"[",
"0",
"]",
",",
"int",
"(",
"args",
"[",
"1",
"]",
")",
")",
"s",
"=",
"socket",
".",
"socket",
"(",
"socket",
".",
"AF_INET",
",",
"socket",
".",
"SOCK_STREAM",
")",
"s",
".",
"connect",
"(",
"target",
")",
"old_settings",
"=",
"termios",
".",
"tcgetattr",
"(",
"0",
")",
"try",
":",
"tty",
".",
"setcbreak",
"(",
"0",
")",
"c",
"=",
"True",
"while",
"c",
":",
"for",
"i",
"in",
"select",
".",
"select",
"(",
"[",
"0",
",",
"s",
".",
"fileno",
"(",
")",
"]",
",",
"[",
"]",
",",
"[",
"]",
",",
"0",
")",
"[",
"0",
"]",
":",
"c",
"=",
"os",
".",
"read",
"(",
"i",
",",
"1024",
")",
"if",
"c",
":",
"os",
".",
"write",
"(",
"s",
".",
"fileno",
"(",
")",
"if",
"i",
"==",
"0",
"else",
"1",
",",
"c",
")",
"except",
"KeyboardInterrupt",
":",
"pass",
"finally",
":",
"termios",
".",
"tcsetattr",
"(",
"0",
",",
"termios",
".",
"TCSADRAIN",
",",
"old_settings",
")"
] | https://github.com/blasty/moneyshot/blob/0541356cca38e57ec03a30b6dff1d38c0c7dfd00/shell.py#L5-L24 |
||||
FSecureLABS/drozer | df11e6e63fbaefa9b58ed1e42533ddf76241d7e1 | src/drozer/repoman/repository_builder.py | python | RepositoryBuilder.__find_sources | (self) | Searches the source folder, to identify source files and packages, and
isolate them ready for building. | Searches the source folder, to identify source files and packages, and
isolate them ready for building. | [
"Searches",
"the",
"source",
"folder",
"to",
"identify",
"source",
"files",
"and",
"packages",
"and",
"isolate",
"them",
"ready",
"for",
"building",
"."
] | def __find_sources(self):
"""
Searches the source folder, to identify source files and packages, and
isolate them ready for building.
"""
for root, folders, files in os.walk(self.source):
self.__skip_folders(folders)
if ".drozer_package" in files:
yield SourcePackage(self.source, root, files)
else:
for f in files:
if f.endswith(".py") and not f == "__init__.py":
yield SourceFile(self.source, os.sep.join([root, f])) | [
"def",
"__find_sources",
"(",
"self",
")",
":",
"for",
"root",
",",
"folders",
",",
"files",
"in",
"os",
".",
"walk",
"(",
"self",
".",
"source",
")",
":",
"self",
".",
"__skip_folders",
"(",
"folders",
")",
"if",
"\".drozer_package\"",
"in",
"files",
":",
"yield",
"SourcePackage",
"(",
"self",
".",
"source",
",",
"root",
",",
"files",
")",
"else",
":",
"for",
"f",
"in",
"files",
":",
"if",
"f",
".",
"endswith",
"(",
"\".py\"",
")",
"and",
"not",
"f",
"==",
"\"__init__.py\"",
":",
"yield",
"SourceFile",
"(",
"self",
".",
"source",
",",
"os",
".",
"sep",
".",
"join",
"(",
"[",
"root",
",",
"f",
"]",
")",
")"
] | https://github.com/FSecureLABS/drozer/blob/df11e6e63fbaefa9b58ed1e42533ddf76241d7e1/src/drozer/repoman/repository_builder.py#L52-L66 |
||
psychopy/psychopy | 01b674094f38d0e0bd51c45a6f66f671d7041696 | psychopy/app/coder/coder.py | python | CodeEditor.setLexer | (self, lexer=None) | Lexer is a simple string (e.g. 'python', 'html')
that will be converted to use the right STC_LEXER_XXXX value | Lexer is a simple string (e.g. 'python', 'html')
that will be converted to use the right STC_LEXER_XXXX value | [
"Lexer",
"is",
"a",
"simple",
"string",
"(",
"e",
".",
"g",
".",
"python",
"html",
")",
"that",
"will",
"be",
"converted",
"to",
"use",
"the",
"right",
"STC_LEXER_XXXX",
"value"
] | def setLexer(self, lexer=None):
"""Lexer is a simple string (e.g. 'python', 'html')
that will be converted to use the right STC_LEXER_XXXX value
"""
lexer = 'null' if lexer is None else lexer
try:
lex = getattr(wx.stc, "STC_LEX_%s" % (lexer.upper()))
except AttributeError:
logging.warn("Unknown lexer %r. Using plain text." % lexer)
lex = wx.stc.STC_LEX_NULL
lexer = 'null'
# then actually set it
self.SetLexer(lex)
self.setFonts()
if lexer == 'python':
self.SetIndentationGuides(self.coder.appData['showIndentGuides'])
self.SetProperty("fold", "1") # allow folding
self.SetProperty("tab.timmy.whinge.level", "1")
elif lexer.lower() == 'html':
self.SetProperty("fold", "1") # allow folding
# 4 means 'tabs are bad'; 1 means 'flag inconsistency'
self.SetProperty("tab.timmy.whinge.level", "1")
elif lexer == 'cpp': # JS, C/C++, GLSL, mex, arduino
self.SetIndentationGuides(self.coder.appData['showIndentGuides'])
self.SetProperty("fold", "1")
self.SetProperty("tab.timmy.whinge.level", "1")
# don't grey out preprocessor lines
self.SetProperty("lexer.cpp.track.preprocessor", "0")
elif lexer == 'R':
self.SetIndentationGuides(self.coder.appData['showIndentGuides'])
self.SetProperty("fold", "1")
self.SetProperty("tab.timmy.whinge.level", "1")
else:
self.SetIndentationGuides(0)
self.SetProperty("tab.timmy.whinge.level", "0")
# deprecated in newer versions of Scintilla
self.SetStyleBits(self.GetStyleBitsNeeded())
# keep text from being squashed and hard to read
spacing = self.coder.prefs['lineSpacing'] / 2.
self.SetExtraAscent(int(spacing))
self.SetExtraDescent(int(spacing))
self.Colourise(0, -1)
self._applyAppTheme() | [
"def",
"setLexer",
"(",
"self",
",",
"lexer",
"=",
"None",
")",
":",
"lexer",
"=",
"'null'",
"if",
"lexer",
"is",
"None",
"else",
"lexer",
"try",
":",
"lex",
"=",
"getattr",
"(",
"wx",
".",
"stc",
",",
"\"STC_LEX_%s\"",
"%",
"(",
"lexer",
".",
"upper",
"(",
")",
")",
")",
"except",
"AttributeError",
":",
"logging",
".",
"warn",
"(",
"\"Unknown lexer %r. Using plain text.\"",
"%",
"lexer",
")",
"lex",
"=",
"wx",
".",
"stc",
".",
"STC_LEX_NULL",
"lexer",
"=",
"'null'",
"# then actually set it",
"self",
".",
"SetLexer",
"(",
"lex",
")",
"self",
".",
"setFonts",
"(",
")",
"if",
"lexer",
"==",
"'python'",
":",
"self",
".",
"SetIndentationGuides",
"(",
"self",
".",
"coder",
".",
"appData",
"[",
"'showIndentGuides'",
"]",
")",
"self",
".",
"SetProperty",
"(",
"\"fold\"",
",",
"\"1\"",
")",
"# allow folding",
"self",
".",
"SetProperty",
"(",
"\"tab.timmy.whinge.level\"",
",",
"\"1\"",
")",
"elif",
"lexer",
".",
"lower",
"(",
")",
"==",
"'html'",
":",
"self",
".",
"SetProperty",
"(",
"\"fold\"",
",",
"\"1\"",
")",
"# allow folding",
"# 4 means 'tabs are bad'; 1 means 'flag inconsistency'",
"self",
".",
"SetProperty",
"(",
"\"tab.timmy.whinge.level\"",
",",
"\"1\"",
")",
"elif",
"lexer",
"==",
"'cpp'",
":",
"# JS, C/C++, GLSL, mex, arduino",
"self",
".",
"SetIndentationGuides",
"(",
"self",
".",
"coder",
".",
"appData",
"[",
"'showIndentGuides'",
"]",
")",
"self",
".",
"SetProperty",
"(",
"\"fold\"",
",",
"\"1\"",
")",
"self",
".",
"SetProperty",
"(",
"\"tab.timmy.whinge.level\"",
",",
"\"1\"",
")",
"# don't grey out preprocessor lines",
"self",
".",
"SetProperty",
"(",
"\"lexer.cpp.track.preprocessor\"",
",",
"\"0\"",
")",
"elif",
"lexer",
"==",
"'R'",
":",
"self",
".",
"SetIndentationGuides",
"(",
"self",
".",
"coder",
".",
"appData",
"[",
"'showIndentGuides'",
"]",
")",
"self",
".",
"SetProperty",
"(",
"\"fold\"",
",",
"\"1\"",
")",
"self",
".",
"SetProperty",
"(",
"\"tab.timmy.whinge.level\"",
",",
"\"1\"",
")",
"else",
":",
"self",
".",
"SetIndentationGuides",
"(",
"0",
")",
"self",
".",
"SetProperty",
"(",
"\"tab.timmy.whinge.level\"",
",",
"\"0\"",
")",
"# deprecated in newer versions of Scintilla",
"self",
".",
"SetStyleBits",
"(",
"self",
".",
"GetStyleBitsNeeded",
"(",
")",
")",
"# keep text from being squashed and hard to read",
"spacing",
"=",
"self",
".",
"coder",
".",
"prefs",
"[",
"'lineSpacing'",
"]",
"/",
"2.",
"self",
".",
"SetExtraAscent",
"(",
"int",
"(",
"spacing",
")",
")",
"self",
".",
"SetExtraDescent",
"(",
"int",
"(",
"spacing",
")",
")",
"self",
".",
"Colourise",
"(",
"0",
",",
"-",
"1",
")",
"self",
".",
"_applyAppTheme",
"(",
")"
] | https://github.com/psychopy/psychopy/blob/01b674094f38d0e0bd51c45a6f66f671d7041696/psychopy/app/coder/coder.py#L1044-L1091 |
||
xiaoyufenfei/Efficient-Segmentation-Networks | 0f0c32e7af3463d381cb184a158ff60e16f7fb9a | dataset/camvid.py | python | CamVidTrainInform.__init__ | (self, data_dir='', classes=11, train_set_file="",
inform_data_file="", normVal=1.10) | Args:
data_dir: directory where the dataset is kept
classes: number of classes in the dataset
inform_data_file: location where cached file has to be stored
normVal: normalization value, as defined in ERFNet paper | Args:
data_dir: directory where the dataset is kept
classes: number of classes in the dataset
inform_data_file: location where cached file has to be stored
normVal: normalization value, as defined in ERFNet paper | [
"Args",
":",
"data_dir",
":",
"directory",
"where",
"the",
"dataset",
"is",
"kept",
"classes",
":",
"number",
"of",
"classes",
"in",
"the",
"dataset",
"inform_data_file",
":",
"location",
"where",
"cached",
"file",
"has",
"to",
"be",
"stored",
"normVal",
":",
"normalization",
"value",
"as",
"defined",
"in",
"ERFNet",
"paper"
] | def __init__(self, data_dir='', classes=11, train_set_file="",
inform_data_file="", normVal=1.10):
"""
Args:
data_dir: directory where the dataset is kept
classes: number of classes in the dataset
inform_data_file: location where cached file has to be stored
normVal: normalization value, as defined in ERFNet paper
"""
self.data_dir = data_dir
self.classes = classes
self.classWeights = np.ones(self.classes, dtype=np.float32)
self.normVal = normVal
self.mean = np.zeros(3, dtype=np.float32)
self.std = np.zeros(3, dtype=np.float32)
self.train_set_file = train_set_file
self.inform_data_file = inform_data_file | [
"def",
"__init__",
"(",
"self",
",",
"data_dir",
"=",
"''",
",",
"classes",
"=",
"11",
",",
"train_set_file",
"=",
"\"\"",
",",
"inform_data_file",
"=",
"\"\"",
",",
"normVal",
"=",
"1.10",
")",
":",
"self",
".",
"data_dir",
"=",
"data_dir",
"self",
".",
"classes",
"=",
"classes",
"self",
".",
"classWeights",
"=",
"np",
".",
"ones",
"(",
"self",
".",
"classes",
",",
"dtype",
"=",
"np",
".",
"float32",
")",
"self",
".",
"normVal",
"=",
"normVal",
"self",
".",
"mean",
"=",
"np",
".",
"zeros",
"(",
"3",
",",
"dtype",
"=",
"np",
".",
"float32",
")",
"self",
".",
"std",
"=",
"np",
".",
"zeros",
"(",
"3",
",",
"dtype",
"=",
"np",
".",
"float32",
")",
"self",
".",
"train_set_file",
"=",
"train_set_file",
"self",
".",
"inform_data_file",
"=",
"inform_data_file"
] | https://github.com/xiaoyufenfei/Efficient-Segmentation-Networks/blob/0f0c32e7af3463d381cb184a158ff60e16f7fb9a/dataset/camvid.py#L213-L229 |
||
gramps-project/gramps | 04d4651a43eb210192f40a9f8c2bad8ee8fa3753 | gramps/plugins/drawreport/statisticschart.py | python | StatisticsChart.index_items | (self, data, sort, reverse) | return index | creates & stores a sorted index for the items | creates & stores a sorted index for the items | [
"creates",
"&",
"stores",
"a",
"sorted",
"index",
"for",
"the",
"items"
] | def index_items(self, data, sort, reverse):
"""creates & stores a sorted index for the items"""
# sort by item keys
index = sorted(data, reverse=True if reverse else False)
if sort == _options.SORT_VALUE:
# set for the sorting function
self.lookup_items = data
# then sort by value
index.sort(key=lambda x: self.lookup_items[x],
reverse=True if reverse else False)
return index | [
"def",
"index_items",
"(",
"self",
",",
"data",
",",
"sort",
",",
"reverse",
")",
":",
"# sort by item keys",
"index",
"=",
"sorted",
"(",
"data",
",",
"reverse",
"=",
"True",
"if",
"reverse",
"else",
"False",
")",
"if",
"sort",
"==",
"_options",
".",
"SORT_VALUE",
":",
"# set for the sorting function",
"self",
".",
"lookup_items",
"=",
"data",
"# then sort by value",
"index",
".",
"sort",
"(",
"key",
"=",
"lambda",
"x",
":",
"self",
".",
"lookup_items",
"[",
"x",
"]",
",",
"reverse",
"=",
"True",
"if",
"reverse",
"else",
"False",
")",
"return",
"index"
] | https://github.com/gramps-project/gramps/blob/04d4651a43eb210192f40a9f8c2bad8ee8fa3753/gramps/plugins/drawreport/statisticschart.py#L837-L851 |
|
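The core trick in index_items — sort the keys, then optionally re-sort the key list by the values they map to — works on any dict. A self-contained sketch:

    data = {"a": 3, "b": 1, "c": 2}
    index = sorted(data)                # by key:   ['a', 'b', 'c']
    index.sort(key=lambda k: data[k])   # by value: ['b', 'c', 'a']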
sametmax/Django--an-app-at-a-time | 99eddf12ead76e6dfbeb09ce0bae61e282e22f8a | ignore_this_directory/django/contrib/messages/storage/session.py | python | SessionStorage._store | (self, messages, response, *args, **kwargs) | return [] | Store a list of messages to the request's session. | Store a list of messages to the request's session. | [
"Store",
"a",
"list",
"of",
"messages",
"to",
"the",
"request",
"s",
"session",
"."
] | def _store(self, messages, response, *args, **kwargs):
"""
Store a list of messages to the request's session.
"""
if messages:
self.request.session[self.session_key] = self.serialize_messages(messages)
else:
self.request.session.pop(self.session_key, None)
return [] | [
"def",
"_store",
"(",
"self",
",",
"messages",
",",
"response",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"messages",
":",
"self",
".",
"request",
".",
"session",
"[",
"self",
".",
"session_key",
"]",
"=",
"self",
".",
"serialize_messages",
"(",
"messages",
")",
"else",
":",
"self",
".",
"request",
".",
"session",
".",
"pop",
"(",
"self",
".",
"session_key",
",",
"None",
")",
"return",
"[",
"]"
] | https://github.com/sametmax/Django--an-app-at-a-time/blob/99eddf12ead76e6dfbeb09ce0bae61e282e22f8a/ignore_this_directory/django/contrib/messages/storage/session.py#L31-L39 |
|
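From application code, the storage above is driven through Django's messages framework. A sketch, assuming the messages middleware is enabled and MESSAGE_STORAGE points at SessionStorage:

    from django.contrib import messages

    def profile_view(request):
        # this call ends up in SessionStorage._store(), serialized
        # under request.session[session_key]
        messages.success(request, "Profile updated.")
        # ... build and return a response here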
zutianbiao/baize | cf78a4a59b1fed29e825fe94c31dd093d7a747be | APP/APP_web/views.py | python | bussiness_manage_delete | (request) | return HttpResponse(json.dumps(json_response_data), content_type="application/json; charset=utf-8") | Delete a task | Delete a task | [
"Delete a task"
]
""" 任务删除 """
id = request.POST.get('id', '')
try:
bussiness = Bussiness.objects.get(id=id)
bussiness.delete()
except Exception, e:
json_response_data = {
"success": False,
"msg": u"业务已经不存在",
'data': None
}
return HttpResponse(json.dumps(json_response_data), content_type="application/json; charset=utf-8")
data = {
"id": id
}
json_response_data = {
"success": True,
"msg": u"删除业务成功",
'data': data
}
return HttpResponse(json.dumps(json_response_data), content_type="application/json; charset=utf-8") | [
"def",
"bussiness_manage_delete",
"(",
"request",
")",
":",
"id",
"=",
"request",
".",
"POST",
".",
"get",
"(",
"'id'",
",",
"''",
")",
"try",
":",
"bussiness",
"=",
"Bussiness",
".",
"objects",
".",
"get",
"(",
"id",
"=",
"id",
")",
"bussiness",
".",
"delete",
"(",
")",
"except",
"Exception",
",",
"e",
":",
"json_response_data",
"=",
"{",
"\"success\"",
":",
"False",
",",
"\"msg\"",
":",
"u\"业务已经不存在\",",
"",
"'data'",
":",
"None",
"}",
"return",
"HttpResponse",
"(",
"json",
".",
"dumps",
"(",
"json_response_data",
")",
",",
"content_type",
"=",
"\"application/json; charset=utf-8\"",
")",
"data",
"=",
"{",
"\"id\"",
":",
"id",
"}",
"json_response_data",
"=",
"{",
"\"success\"",
":",
"True",
",",
"\"msg\"",
":",
"u\"删除业务成功\",",
"",
"'data'",
":",
"data",
"}",
"return",
"HttpResponse",
"(",
"json",
".",
"dumps",
"(",
"json_response_data",
")",
",",
"content_type",
"=",
"\"application/json; charset=utf-8\"",
")"
] | https://github.com/zutianbiao/baize/blob/cf78a4a59b1fed29e825fe94c31dd093d7a747be/APP/APP_web/views.py#L4437-L4458 |
|
michaelhush/M-LOOP | cd0bf2d0de0bfe7f533156399a94b576f7f34a35 | mloop/learners.py | python | GaussianProcessLearner.find_next_parameters | (self) | return next_params | Returns next parameters to find. Increments counters and bias function appropriately.
Return:
next_params (array): Returns next parameters from biased cost search. | Returns next parameters to find. Increments counters and bias function appropriately. | [
"Returns",
"next",
"parameters",
"to",
"find",
".",
"Increments",
"counters",
"and",
"bias",
"function",
"appropriately",
"."
] | def find_next_parameters(self):
'''
Returns next parameters to find. Increments counters and bias function appropriately.
Return:
next_params (array): Returns next parameters from biased cost search.
'''
self.params_count += 1
self.update_bias_function()
self.update_search_params()
next_params = None
next_cost = float('inf')
for start_params in self.search_params:
result = so.minimize(self.predict_biased_cost, start_params, bounds = self.search_region, tol=self.search_precision)
if result.fun < next_cost:
next_params = result.x
next_cost = result.fun
return next_params | [
"def",
"find_next_parameters",
"(",
"self",
")",
":",
"self",
".",
"params_count",
"+=",
"1",
"self",
".",
"update_bias_function",
"(",
")",
"self",
".",
"update_search_params",
"(",
")",
"next_params",
"=",
"None",
"next_cost",
"=",
"float",
"(",
"'inf'",
")",
"for",
"start_params",
"in",
"self",
".",
"search_params",
":",
"result",
"=",
"so",
".",
"minimize",
"(",
"self",
".",
"predict_biased_cost",
",",
"start_params",
",",
"bounds",
"=",
"self",
".",
"search_region",
",",
"tol",
"=",
"self",
".",
"search_precision",
")",
"if",
"result",
".",
"fun",
"<",
"next_cost",
":",
"next_params",
"=",
"result",
".",
"x",
"next_cost",
"=",
"result",
".",
"fun",
"return",
"next_params"
] | https://github.com/michaelhush/M-LOOP/blob/cd0bf2d0de0bfe7f533156399a94b576f7f34a35/mloop/learners.py#L1998-L2015 |
|
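The loop in find_next_parameters is a standard multi-start local minimization: run a bounded local optimizer from several starting points and keep the best result. A generic, self-contained version of the same pattern (the names are mine, not M-LOOP's):

    import scipy.optimize as so

    def multi_start_minimize(cost, starts, bounds, tol=1e-6):
        # keep the best of several local minimizations
        best_x, best_f = None, float("inf")
        for x0 in starts:
            result = so.minimize(cost, x0, bounds=bounds, tol=tol)
            if result.fun < best_f:
                best_x, best_f = result.x, result.fun
        return best_x, best_f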
daavoo/pyntcloud | 1bd6d0a6409ae13147c54c132cd9ffaa6455d398 | pyntcloud/io/las.py | python | read_las | (filename, xyz_dtype="float32", rgb_dtype="uint8", backend="pylas") | return data | Read a .las/laz file and store elements in pandas DataFrame.
Parameters
----------
filename: str
Path to the filename
xyz_dtype: str
Defines the data type of the xyz coordinate
rgb_dtype: str
Defines the data type of the color
Returns
-------
data: dict
Elements as pandas DataFrames. | Read a .las/laz file and store elements in pandas DataFrame. | [
"Read",
"a",
".",
"las",
"/",
"laz",
"file",
"and",
"store",
"elements",
"in",
"pandas",
"DataFrame",
"."
] | def read_las(filename, xyz_dtype="float32", rgb_dtype="uint8", backend="pylas"):
"""Read a .las/laz file and store elements in pandas DataFrame.
Parameters
----------
filename: str
Path to the filename
xyz_dtype: str
Defines the data type of the xyz coordinate
rgb_dtype: str
Defines the data type of the color
Returns
-------
data: dict
Elements as pandas DataFrames.
"""
if backend == "pylas":
data = read_las_with_pylas(filename)
elif backend == "laspy":
data = read_las_with_laspy(filename)
else:
raise ValueError(f"Unsupported backend. Expected one of ['pylas', 'laspy'] but got {backend}")
data = convert_location_to_dtype(data, xyz_dtype)
data = convert_color_to_dtype(data, rgb_dtype)
return data | [
"def",
"read_las",
"(",
"filename",
",",
"xyz_dtype",
"=",
"\"float32\"",
",",
"rgb_dtype",
"=",
"\"uint8\"",
",",
"backend",
"=",
"\"pylas\"",
")",
":",
"if",
"backend",
"==",
"\"pylas\"",
":",
"data",
"=",
"read_las_with_pylas",
"(",
"filename",
")",
"elif",
"backend",
"==",
"\"laspy\"",
":",
"data",
"=",
"read_las_with_laspy",
"(",
"filename",
")",
"else",
":",
"raise",
"ValueError",
"(",
"f\"Unsupported backend. Expected one of ['pylas', 'laspy'] but got {backend}\"",
")",
"data",
"=",
"convert_location_to_dtype",
"(",
"data",
",",
"xyz_dtype",
")",
"data",
"=",
"convert_color_to_dtype",
"(",
"data",
",",
"rgb_dtype",
")",
"return",
"data"
] | https://github.com/daavoo/pyntcloud/blob/1bd6d0a6409ae13147c54c132cd9ffaa6455d398/pyntcloud/io/las.py#L73-L97 |
|
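A usage sketch for the reader above. The file name is hypothetical, and the assumption is that the returned dict exposes the vertices under a "points" DataFrame, as elsewhere in pyntcloud's io module:

    from pyntcloud.io.las import read_las

    data = read_las("scan.laz", xyz_dtype="float64", backend="laspy")
    points = data["points"]    # assumed key; a pandas DataFrame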
openedx/edx-platform | 68dd185a0ab45862a2a61e0f803d7e03d2be71b5 | common/lib/xmodule/xmodule/modulestore/mixed.py | python | MixedModuleStore.get_courses | (self, **kwargs) | return list(courses.values()) | Returns a list containing the top level XModuleDescriptors of the courses in this modulestore. | Returns a list containing the top level XModuleDescriptors of the courses in this modulestore. | [
"Returns",
"a",
"list",
"containing",
"the",
"top",
"level",
"XModuleDescriptors",
"of",
"the",
"courses",
"in",
"this",
"modulestore",
"."
] | def get_courses(self, **kwargs):
'''
Returns a list containing the top level XModuleDescriptors of the courses in this modulestore.
'''
courses = {}
for store in self.modulestores:
# filter out ones which were fetched from earlier stores but locations may not be ==
for course in store.get_courses(**kwargs):
course_id = self._clean_locator_for_mapping(course.id)
if course_id not in courses:
# course is indeed unique. save it in result
courses[course_id] = course
return list(courses.values()) | [
"def",
"get_courses",
"(",
"self",
",",
"*",
"*",
"kwargs",
")",
":",
"courses",
"=",
"{",
"}",
"for",
"store",
"in",
"self",
".",
"modulestores",
":",
"# filter out ones which were fetched from earlier stores but locations may not be ==",
"for",
"course",
"in",
"store",
".",
"get_courses",
"(",
"*",
"*",
"kwargs",
")",
":",
"course_id",
"=",
"self",
".",
"_clean_locator_for_mapping",
"(",
"course",
".",
"id",
")",
"if",
"course_id",
"not",
"in",
"courses",
":",
"# course is indeed unique. save it in result",
"courses",
"[",
"course_id",
"]",
"=",
"course",
"return",
"list",
"(",
"courses",
".",
"values",
"(",
")",
")"
] | https://github.com/openedx/edx-platform/blob/68dd185a0ab45862a2a61e0f803d7e03d2be71b5/common/lib/xmodule/xmodule/modulestore/mixed.py#L303-L315 |
|
fortharris/Pcode | 147962d160a834c219e12cb456abc130826468e4 | pyflakes/checker.py | python | Checker.handleNodeStore | (self, node) | [] | def handleNodeStore(self, node):
name = getNodeName(node)
if not name:
return
# if the name hasn't already been defined in the current scope
if isinstance(self.scope, FunctionScope) and name not in self.scope:
# for each function or module scope above us
for scope in self.scopeStack[:-1]:
if not isinstance(scope, (FunctionScope, ModuleScope)):
continue
# if the name was defined in that scope, and the name has
# been accessed already in the current scope, and hasn't
# been declared global
used = name in scope and scope[name].used
if used and used[0] is self.scope and name not in self.scope.globals:
# then it's probably a mistake
self.report(messages.UndefinedLocal,
scope[name].used[1], name, scope[name].source)
break
parent_stmt = self.getParent(node)
if isinstance(parent_stmt, (ast.For, ast.comprehension)) or (
parent_stmt != node.parent and
not self.isLiteralTupleUnpacking(parent_stmt)):
binding = Binding(name, node)
elif name == '__all__' and isinstance(self.scope, ModuleScope):
binding = ExportBinding(name, node.parent, self.scope)
else:
binding = Assignment(name, node)
if name in self.scope:
binding.used = self.scope[name].used
self.addBinding(node, binding) | [
"def",
"handleNodeStore",
"(",
"self",
",",
"node",
")",
":",
"name",
"=",
"getNodeName",
"(",
"node",
")",
"if",
"not",
"name",
":",
"return",
"# if the name hasn't already been defined in the current scope",
"if",
"isinstance",
"(",
"self",
".",
"scope",
",",
"FunctionScope",
")",
"and",
"name",
"not",
"in",
"self",
".",
"scope",
":",
"# for each function or module scope above us",
"for",
"scope",
"in",
"self",
".",
"scopeStack",
"[",
":",
"-",
"1",
"]",
":",
"if",
"not",
"isinstance",
"(",
"scope",
",",
"(",
"FunctionScope",
",",
"ModuleScope",
")",
")",
":",
"continue",
"# if the name was defined in that scope, and the name has",
"# been accessed already in the current scope, and hasn't",
"# been declared global",
"used",
"=",
"name",
"in",
"scope",
"and",
"scope",
"[",
"name",
"]",
".",
"used",
"if",
"used",
"and",
"used",
"[",
"0",
"]",
"is",
"self",
".",
"scope",
"and",
"name",
"not",
"in",
"self",
".",
"scope",
".",
"globals",
":",
"# then it's probably a mistake",
"self",
".",
"report",
"(",
"messages",
".",
"UndefinedLocal",
",",
"scope",
"[",
"name",
"]",
".",
"used",
"[",
"1",
"]",
",",
"name",
",",
"scope",
"[",
"name",
"]",
".",
"source",
")",
"break",
"parent_stmt",
"=",
"self",
".",
"getParent",
"(",
"node",
")",
"if",
"isinstance",
"(",
"parent_stmt",
",",
"(",
"ast",
".",
"For",
",",
"ast",
".",
"comprehension",
")",
")",
"or",
"(",
"parent_stmt",
"!=",
"node",
".",
"parent",
"and",
"not",
"self",
".",
"isLiteralTupleUnpacking",
"(",
"parent_stmt",
")",
")",
":",
"binding",
"=",
"Binding",
"(",
"name",
",",
"node",
")",
"elif",
"name",
"==",
"'__all__'",
"and",
"isinstance",
"(",
"self",
".",
"scope",
",",
"ModuleScope",
")",
":",
"binding",
"=",
"ExportBinding",
"(",
"name",
",",
"node",
".",
"parent",
",",
"self",
".",
"scope",
")",
"else",
":",
"binding",
"=",
"Assignment",
"(",
"name",
",",
"node",
")",
"if",
"name",
"in",
"self",
".",
"scope",
":",
"binding",
".",
"used",
"=",
"self",
".",
"scope",
"[",
"name",
"]",
".",
"used",
"self",
".",
"addBinding",
"(",
"node",
",",
"binding",
")"
] | https://github.com/fortharris/Pcode/blob/147962d160a834c219e12cb456abc130826468e4/pyflakes/checker.py#L500-L531 |
||||
mherrmann/selenium-python-helium | 02f9a5a872871999d683c84461ac0d0b3e9da192 | helium/__init__.py | python | drag_file | (file_path, to) | Simulates the dragging of a file from the computer over the browser window
and dropping it over the given element. This allows, for example, to attach
files to emails in Gmail::
click("COMPOSE")
write("example@gmail.com", into="To")
write("Email subject", into="Subject")
drag_file(r"C:\\Documents\\notes.txt", to="Drop files here") | Simulates the dragging of a file from the computer over the browser window
and dropping it over the given element. This allows, for example, to attach
files to emails in Gmail:: | [
"Simulates",
"the",
"dragging",
"of",
"a",
"file",
"from",
"the",
"computer",
"over",
"the",
"browser",
"window",
"and",
"dropping",
"it",
"over",
"the",
"given",
"element",
".",
"This",
"allows",
"for",
"example",
"to",
"attach",
"files",
"to",
"emails",
"in",
"Gmail",
"::"
] | def drag_file(file_path, to):
"""
Simulates the dragging of a file from the computer over the browser window
and dropping it over the given element. This allows, for example, to attach
files to emails in Gmail::
click("COMPOSE")
write("example@gmail.com", into="To")
write("Email subject", into="Subject")
drag_file(r"C:\\Documents\\notes.txt", to="Drop files here")
"""
_get_api_impl().drag_file_impl(file_path, to) | [
"def",
"drag_file",
"(",
"file_path",
",",
"to",
")",
":",
"_get_api_impl",
"(",
")",
".",
"drag_file_impl",
"(",
"file_path",
",",
"to",
")"
] | https://github.com/mherrmann/selenium-python-helium/blob/02f9a5a872871999d683c84461ac0d0b3e9da192/helium/__init__.py#L413-L424 |
||
deepmipt/DeepPavlov | 08555428388fed3c7b036c0a82a70a25efcabcff | deeppavlov/models/multitask_bert/multitask_bert.py | python | MTBertTask.get_sess_run_train_args | (self, *args) | Returns fetches and feed_dict for task ``train_on_batch`` method.
Overriding methods take task inputs as positional arguments.
ATTENTION! Let ``get_sess_run_infer_args`` method have ``n_x_args`` arguments. Then the order of first
``n_x_args`` arguments of ``get_sess_run_train_args`` method arguments has to match the order of
``get_sess_run_infer_args`` arguments.
Args:
args: task inputs followed by expect outputs.
Returns:
fetches and feed_dict | Returns fetches and feed_dict for task ``train_on_batch`` method. | [
"Returns",
"fetches",
"and",
"feed_dict",
"for",
"task",
"train_on_batch",
"method",
"."
] | def get_sess_run_train_args(self, *args) -> Tuple[List[tf.Tensor], Dict[tf.placeholder, Any]]:
"""Returns fetches and feed_dict for task ``train_on_batch`` method.
Overriding methods take task inputs as positional arguments.
ATTENTION! Let ``get_sess_run_infer_args`` method have ``n_x_args`` arguments. Then the order of first
``n_x_args`` arguments of ``get_sess_run_train_args`` method arguments has to match the order of
``get_sess_run_infer_args`` arguments.
Args:
args: task inputs followed by expect outputs.
Returns:
fetches and feed_dict
"""
pass | [
"def",
"get_sess_run_train_args",
"(",
"self",
",",
"*",
"args",
")",
"->",
"Tuple",
"[",
"List",
"[",
"tf",
".",
"Tensor",
"]",
",",
"Dict",
"[",
"tf",
".",
"placeholder",
",",
"Any",
"]",
"]",
":",
"pass"
] | https://github.com/deepmipt/DeepPavlov/blob/08555428388fed3c7b036c0a82a70a25efcabcff/deeppavlov/models/multitask_bert/multitask_bert.py#L242-L257 |
||
kuri65536/python-for-android | 26402a08fc46b09ef94e8d7a6bbc3a54ff9d0891 | python-modules/twisted/twisted/conch/ssh/keys.py | python | lenSig | (obj) | return obj.size()/8 | Return the length of the signature in bytes for a key object.
@type obj: C{Crypto.PublicKey.pubkey.pubkey}
@rtype: C{long} | Return the length of the signature in bytes for a key object. | [
"Return",
"the",
"length",
"of",
"the",
"signature",
"in",
"bytes",
"for",
"a",
"key",
"object",
"."
] | def lenSig(obj):
"""
Return the length of the signature in bytes for a key object.
@type obj: C{Crypto.PublicKey.pubkey.pubkey}
@rtype: C{long}
"""
return obj.size()/8 | [
"def",
"lenSig",
"(",
"obj",
")",
":",
"return",
"obj",
".",
"size",
"(",
")",
"/",
"8"
] | https://github.com/kuri65536/python-for-android/blob/26402a08fc46b09ef94e8d7a6bbc3a54ff9d0891/python-modules/twisted/twisted/conch/ssh/keys.py#L770-L777 |
|
robcarver17/pysystemtrade | b0385705b7135c52d39cb6d2400feece881bcca9 | systems/accounts/account_buffering_subsystem.py | python | accountBufferingSubSystemLevel.get_buffers_for_subsystem_position | (self, instrument_code: str) | return self.parent.positionSize.get_buffers_for_subsystem_position(instrument_code) | Get the buffered position from a previous module
:param instrument_code: instrument to get values for
:type instrument_code: str
:returns: Tx2 pd.DataFrame: columns top_pos, bot_pos
KEY INPUT | Get the buffered position from a previous module | [
"Get",
"the",
"buffered",
"position",
"from",
"a",
"previous",
"module"
] | def get_buffers_for_subsystem_position(self, instrument_code: str) -> pd.DataFrame:
"""
Get the buffered position from a previous module
:param instrument_code: instrument to get values for
:type instrument_code: str
:returns: Tx2 pd.DataFrame: columns top_pos, bot_pos
KEY INPUT
"""
return self.parent.positionSize.get_buffers_for_subsystem_position(instrument_code) | [
"def",
"get_buffers_for_subsystem_position",
"(",
"self",
",",
"instrument_code",
":",
"str",
")",
"->",
"pd",
".",
"DataFrame",
":",
"return",
"self",
".",
"parent",
".",
"positionSize",
".",
"get_buffers_for_subsystem_position",
"(",
"instrument_code",
")"
] | https://github.com/robcarver17/pysystemtrade/blob/b0385705b7135c52d39cb6d2400feece881bcca9/systems/accounts/account_buffering_subsystem.py#L89-L101 |
|
mchristopher/PokemonGo-DesktopMap | ec37575f2776ee7d64456e2a1f6b6b78830b4fe0 | app/pylibs/osx64/Cryptodome/Hash/SHAKE256.py | python | SHAKE256_XOF.read | (self, length) | return get_raw_buffer(bfr) | Return the next ``length`` bytes of **binary** (non-printable)
digest for the message.
You cannot use ``update`` anymore after the first call to ``read``.
:Return: A byte string of `length` bytes. | Return the next ``length`` bytes of **binary** (non-printable)
digest for the message. | [
"Return",
"the",
"next",
"length",
"bytes",
"of",
"**",
"binary",
"**",
"(",
"non",
"-",
"printable",
")",
"digest",
"for",
"the",
"message",
"."
] | def read(self, length):
"""Return the next ``length`` bytes of **binary** (non-printable)
digest for the message.
You cannot use ``update`` anymore after the first call to ``read``.
:Return: A byte string of `length` bytes.
"""
self._is_squeezing = True
bfr = create_string_buffer(length)
result = _raw_keccak_lib.keccak_squeeze(self._state.get(),
bfr,
c_size_t(length))
if result:
raise ValueError("Error %d while extracting from SHAKE256"
% result)
return get_raw_buffer(bfr) | [
"def",
"read",
"(",
"self",
",",
"length",
")",
":",
"self",
".",
"_is_squeezing",
"=",
"True",
"bfr",
"=",
"create_string_buffer",
"(",
"length",
")",
"result",
"=",
"_raw_keccak_lib",
".",
"keccak_squeeze",
"(",
"self",
".",
"_state",
".",
"get",
"(",
")",
",",
"bfr",
",",
"c_size_t",
"(",
"length",
")",
")",
"if",
"result",
":",
"raise",
"ValueError",
"(",
"\"Error %d while extracting from SHAKE256\"",
"%",
"result",
")",
"return",
"get_raw_buffer",
"(",
"bfr",
")"
] | https://github.com/mchristopher/PokemonGo-DesktopMap/blob/ec37575f2776ee7d64456e2a1f6b6b78830b4fe0/app/pylibs/osx64/Cryptodome/Hash/SHAKE256.py#L116-L134 |
|
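A runnable example of the XOF read() documented above, using pycryptodome's Cryptodome package:

    from Cryptodome.Hash import SHAKE256

    shake = SHAKE256.new(data=b"some message")
    first = shake.read(32)   # 32 bytes of digest
    more = shake.read(32)    # the next 32 bytes; update() is now disallowed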
UCL-INGI/INGInious | 60f10cb4c375ce207471043e76bd813220b95399 | inginious/frontend/user_manager.py | python | UserManager.set_session_token | (self, token) | Sets the token of the current user in the session, if one is open. | Sets the token of the current user in the session, if one is open. | [
"Sets",
"the",
"token",
"of",
"the",
"current",
"user",
"in",
"the",
"session",
"if",
"one",
"is",
"open",
"."
] | def set_session_token(self, token):
""" Sets the token of the current user in the session, if one is open."""
if self.session_logged_in():
self._session["token"] = token | [
"def",
"set_session_token",
"(",
"self",
",",
"token",
")",
":",
"if",
"self",
".",
"session_logged_in",
"(",
")",
":",
"self",
".",
"_session",
"[",
"\"token\"",
"]",
"=",
"token"
] | https://github.com/UCL-INGI/INGInious/blob/60f10cb4c375ce207471043e76bd813220b95399/inginious/frontend/user_manager.py#L185-L188 |
||
KhronosGroup/OpenXR-SDK-Source | 76756e2e7849b15466d29bee7d80cada92865550 | external/python/jinja2/runtime.py | python | Context.super | (self, name, current) | return BlockReference(name, self, blocks, index) | Render a parent block. | Render a parent block. | [
"Render",
"a",
"parent",
"block",
"."
] | def super(self, name, current):
"""Render a parent block."""
try:
blocks = self.blocks[name]
index = blocks.index(current) + 1
blocks[index]
except LookupError:
return self.environment.undefined('there is no parent block '
'called %r.' % name,
name='super')
return BlockReference(name, self, blocks, index) | [
"def",
"super",
"(",
"self",
",",
"name",
",",
"current",
")",
":",
"try",
":",
"blocks",
"=",
"self",
".",
"blocks",
"[",
"name",
"]",
"index",
"=",
"blocks",
".",
"index",
"(",
"current",
")",
"+",
"1",
"blocks",
"[",
"index",
"]",
"except",
"LookupError",
":",
"return",
"self",
".",
"environment",
".",
"undefined",
"(",
"'there is no parent block '",
"'called %r.'",
"%",
"name",
",",
"name",
"=",
"'super'",
")",
"return",
"BlockReference",
"(",
"name",
",",
"self",
",",
"blocks",
",",
"index",
")"
] | https://github.com/KhronosGroup/OpenXR-SDK-Source/blob/76756e2e7849b15466d29bee7d80cada92865550/external/python/jinja2/runtime.py#L175-L185 |
|
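Context.super is what backs {{ super() }} in templates. A self-contained rendering example:

    from jinja2 import Environment, DictLoader

    env = Environment(loader=DictLoader({
        "base.html": "{% block body %}base{% endblock %}",
        "child.html": "{% extends 'base.html' %}"
                      "{% block body %}{{ super() }} + child{% endblock %}",
    }))
    print(env.get_template("child.html").render())   # base + child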
tkarras/progressive_growing_of_gans | 2504c3f3cb98ca58751610ad61fa1097313152bd | util_scripts.py | python | generate_fake_images | (run_id, snapshot=None, grid_size=[1,1], num_pngs=1, image_shrink=1, png_prefix=None, random_seed=1000, minibatch_size=8) | [] | def generate_fake_images(run_id, snapshot=None, grid_size=[1,1], num_pngs=1, image_shrink=1, png_prefix=None, random_seed=1000, minibatch_size=8):
network_pkl = misc.locate_network_pkl(run_id, snapshot)
if png_prefix is None:
png_prefix = misc.get_id_string_for_network_pkl(network_pkl) + '-'
random_state = np.random.RandomState(random_seed)
print('Loading network from "%s"...' % network_pkl)
G, D, Gs = misc.load_network_pkl(run_id, snapshot)
result_subdir = misc.create_result_subdir(config.result_dir, config.desc)
for png_idx in range(num_pngs):
print('Generating png %d / %d...' % (png_idx, num_pngs))
latents = misc.random_latents(np.prod(grid_size), Gs, random_state=random_state)
labels = np.zeros([latents.shape[0], 0], np.float32)
images = Gs.run(latents, labels, minibatch_size=minibatch_size, num_gpus=config.num_gpus, out_mul=127.5, out_add=127.5, out_shrink=image_shrink, out_dtype=np.uint8)
misc.save_image_grid(images, os.path.join(result_subdir, '%s%06d.png' % (png_prefix, png_idx)), [0,255], grid_size)
open(os.path.join(result_subdir, '_done.txt'), 'wt').close() | [
"def",
"generate_fake_images",
"(",
"run_id",
",",
"snapshot",
"=",
"None",
",",
"grid_size",
"=",
"[",
"1",
",",
"1",
"]",
",",
"num_pngs",
"=",
"1",
",",
"image_shrink",
"=",
"1",
",",
"png_prefix",
"=",
"None",
",",
"random_seed",
"=",
"1000",
",",
"minibatch_size",
"=",
"8",
")",
":",
"network_pkl",
"=",
"misc",
".",
"locate_network_pkl",
"(",
"run_id",
",",
"snapshot",
")",
"if",
"png_prefix",
"is",
"None",
":",
"png_prefix",
"=",
"misc",
".",
"get_id_string_for_network_pkl",
"(",
"network_pkl",
")",
"+",
"'-'",
"random_state",
"=",
"np",
".",
"random",
".",
"RandomState",
"(",
"random_seed",
")",
"print",
"(",
"'Loading network from \"%s\"...'",
"%",
"network_pkl",
")",
"G",
",",
"D",
",",
"Gs",
"=",
"misc",
".",
"load_network_pkl",
"(",
"run_id",
",",
"snapshot",
")",
"result_subdir",
"=",
"misc",
".",
"create_result_subdir",
"(",
"config",
".",
"result_dir",
",",
"config",
".",
"desc",
")",
"for",
"png_idx",
"in",
"range",
"(",
"num_pngs",
")",
":",
"print",
"(",
"'Generating png %d / %d...'",
"%",
"(",
"png_idx",
",",
"num_pngs",
")",
")",
"latents",
"=",
"misc",
".",
"random_latents",
"(",
"np",
".",
"prod",
"(",
"grid_size",
")",
",",
"Gs",
",",
"random_state",
"=",
"random_state",
")",
"labels",
"=",
"np",
".",
"zeros",
"(",
"[",
"latents",
".",
"shape",
"[",
"0",
"]",
",",
"0",
"]",
",",
"np",
".",
"float32",
")",
"images",
"=",
"Gs",
".",
"run",
"(",
"latents",
",",
"labels",
",",
"minibatch_size",
"=",
"minibatch_size",
",",
"num_gpus",
"=",
"config",
".",
"num_gpus",
",",
"out_mul",
"=",
"127.5",
",",
"out_add",
"=",
"127.5",
",",
"out_shrink",
"=",
"image_shrink",
",",
"out_dtype",
"=",
"np",
".",
"uint8",
")",
"misc",
".",
"save_image_grid",
"(",
"images",
",",
"os",
".",
"path",
".",
"join",
"(",
"result_subdir",
",",
"'%s%06d.png'",
"%",
"(",
"png_prefix",
",",
"png_idx",
")",
")",
",",
"[",
"0",
",",
"255",
"]",
",",
"grid_size",
")",
"open",
"(",
"os",
".",
"path",
".",
"join",
"(",
"result_subdir",
",",
"'_done.txt'",
")",
",",
"'wt'",
")",
".",
"close",
"(",
")"
] | https://github.com/tkarras/progressive_growing_of_gans/blob/2504c3f3cb98ca58751610ad61fa1097313152bd/util_scripts.py#L28-L44 |
||||
marcoeilers/nagini | a2a19df7d833e67841e03c9885869c3dddef3327 | src/nagini_translation/lib/util.py | python | contains_stmt | (container: Any, contained: ast.AST) | Checks if 'contained' is a part of the partial AST
whose root is 'container'. | Checks if 'contained' is a part of the partial AST
whose root is 'container'. | [
"Checks",
"if",
"contained",
"is",
"a",
"part",
"of",
"the",
"partial",
"AST",
"whose",
"root",
"is",
"container",
"."
] | def contains_stmt(container: Any, contained: ast.AST) -> bool:
"""
Checks if 'contained' is a part of the partial AST
whose root is 'container'.
"""
if container is contained:
return True
if isinstance(container, list):
for stmt in container:
if contains_stmt(stmt, contained):
return True
return False
elif isinstance(container, ast.AST):
for field in container._fields:
if contains_stmt(getattr(container, field), contained):
return True
return False
else:
return False | [
"def",
"contains_stmt",
"(",
"container",
":",
"Any",
",",
"contained",
":",
"ast",
".",
"AST",
")",
"->",
"bool",
":",
"if",
"container",
"is",
"contained",
":",
"return",
"True",
"if",
"isinstance",
"(",
"container",
",",
"list",
")",
":",
"for",
"stmt",
"in",
"container",
":",
"if",
"contains_stmt",
"(",
"stmt",
",",
"contained",
")",
":",
"return",
"True",
"return",
"False",
"elif",
"isinstance",
"(",
"container",
",",
"ast",
".",
"AST",
")",
":",
"for",
"field",
"in",
"container",
".",
"_fields",
":",
"if",
"contains_stmt",
"(",
"getattr",
"(",
"container",
",",
"field",
")",
",",
"contained",
")",
":",
"return",
"True",
"return",
"False",
"else",
":",
"return",
"False"
] | https://github.com/marcoeilers/nagini/blob/a2a19df7d833e67841e03c9885869c3dddef3327/src/nagini_translation/lib/util.py#L128-L146 |
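A usage sketch for the `contains_stmt` record above. The import mirrors the record's module path; only the standard `ast` module is otherwise needed. Note the membership test is by object identity (`is`), not structural equality.

```python
import ast
from nagini_translation.lib.util import contains_stmt  # path taken from the record

tree = ast.parse("if x:\n    y = 1\n")
assign = tree.body[0].body[0]        # the `y = 1` node inside the if-block
print(contains_stmt(tree, assign))   # True: reachable through tree.body[0].body
other = ast.parse("y = 1").body[0]   # structurally equal but a different object
print(contains_stmt(tree, other))    # False: identity check, not equality
```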
||
facelessuser/ColorHelper | cfed17c35dbae4db49a14165ef222407c48a3014 | lib/coloraide/color.py | python | Color.space | (self) | return self._space.NAME | The current color space. | The current color space. | [
"The",
"current",
"color",
"space",
"."
] | def space(self) -> str:
"""The current color space."""
return self._space.NAME | [
"def",
"space",
"(",
"self",
")",
"->",
"str",
":",
"return",
"self",
".",
"_space",
".",
"NAME"
] | https://github.com/facelessuser/ColorHelper/blob/cfed17c35dbae4db49a14165ef222407c48a3014/lib/coloraide/color.py#L359-L362 |
|
facebookresearch/pytorch_GAN_zoo | b75dee40918caabb4fe7ec561522717bf096a8cb | visualization/np_visualizer.py | python | publishTensors | (data, out_size_image, caption="", window_token=None, env="main") | return None | [] | def publishTensors(data, out_size_image, caption="", window_token=None, env="main"):
return None | [
"def",
"publishTensors",
"(",
"data",
",",
"out_size_image",
",",
"caption",
"=",
"\"\"",
",",
"window_token",
"=",
"None",
",",
"env",
"=",
"\"main\"",
")",
":",
"return",
"None"
] | https://github.com/facebookresearch/pytorch_GAN_zoo/blob/b75dee40918caabb4fe7ec561522717bf096a8cb/visualization/np_visualizer.py#L75-L76 |
|||
NervanaSystems/neon | 8c3fb8a93b4a89303467b25817c60536542d08bd | neon/transforms/cost.py | python | LogLoss.__init__ | (self) | [] | def __init__(self):
self.correctProbs = self.be.iobuf(1)
self.metric_names = ['LogLoss'] | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"correctProbs",
"=",
"self",
".",
"be",
".",
"iobuf",
"(",
"1",
")",
"self",
".",
"metric_names",
"=",
"[",
"'LogLoss'",
"]"
] | https://github.com/NervanaSystems/neon/blob/8c3fb8a93b4a89303467b25817c60536542d08bd/neon/transforms/cost.py#L329-L331 |
||||
maqp/tfc | 4bb13da1f19671e1e723db7e8a21be58847209af | src/common/db_contacts.py | python | ContactList.remove_contact_by_pub_key | (self, onion_pub_key: bytes) | Remove the contact that has a matching Onion Service public key.
If the contact was found and removed, write changes to the database. | Remove the contact that has a matching Onion Service public key. | [
"Remove",
"the",
"contact",
"that",
"has",
"a",
"matching",
"Onion",
"Service",
"public",
"key",
"."
] | def remove_contact_by_pub_key(self, onion_pub_key: bytes) -> None:
"""Remove the contact that has a matching Onion Service public key.
If the contact was found and removed, write changes to the database.
"""
for i, c in enumerate(self.contacts):
if c.onion_pub_key == onion_pub_key:
del self.contacts[i]
self.store_contacts()
break | [
"def",
"remove_contact_by_pub_key",
"(",
"self",
",",
"onion_pub_key",
":",
"bytes",
")",
"->",
"None",
":",
"for",
"i",
",",
"c",
"in",
"enumerate",
"(",
"self",
".",
"contacts",
")",
":",
"if",
"c",
".",
"onion_pub_key",
"==",
"onion_pub_key",
":",
"del",
"self",
".",
"contacts",
"[",
"i",
"]",
"self",
".",
"store_contacts",
"(",
")",
"break"
] | https://github.com/maqp/tfc/blob/4bb13da1f19671e1e723db7e8a21be58847209af/src/common/db_contacts.py#L367-L376 |
||
elastic/elasticsearch-py | 6ef1adfa3c840a84afda7369cd8e43ae7dc45cdb | elasticsearch/_sync/client/cat.py | python | CatClient.aliases | (
self,
*,
name: Optional[Any] = None,
error_trace: Optional[bool] = None,
expand_wildcards: Optional[Any] = None,
filter_path: Optional[Union[List[str], str]] = None,
format: Optional[str] = None,
h: Optional[Any] = None,
help: Optional[bool] = None,
human: Optional[bool] = None,
local: Optional[bool] = None,
master_timeout: Optional[Any] = None,
pretty: Optional[bool] = None,
s: Optional[List[str]] = None,
v: Optional[bool] = None,
) | return self._perform_request("GET", __target, headers=__headers) | Shows information about currently configured aliases to indices including filter
and routing infos.
`<https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-alias.html>`_
:param name: A comma-separated list of alias names to return
:param expand_wildcards: Whether to expand wildcard expression to concrete indices
that are open, closed or both.
:param format: Specifies the format to return the columnar data in, can be set
to `text`, `json`, `cbor`, `yaml`, or `smile`.
:param h: List of columns to appear in the response. Supports simple wildcards.
:param help: When set to `true` will output available columns. This option can't
be combined with any other query string option.
:param local: If `true`, the request computes the list of selected nodes from
the local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.
:param master_timeout: Period to wait for a connection to the master node.
:param s: List of columns that determine how the table should be sorted. Sorting
defaults to ascending and can be changed by setting `:asc` or `:desc` as
a suffix to the column name.
:param v: When set to `true` will enable verbose output. | Shows information about currently configured aliases to indices including filter
and routing infos. | [
"Shows",
"information",
"about",
"currently",
"configured",
"aliases",
"to",
"indices",
"including",
"filter",
"and",
"routing",
"infos",
"."
] | def aliases(
self,
*,
name: Optional[Any] = None,
error_trace: Optional[bool] = None,
expand_wildcards: Optional[Any] = None,
filter_path: Optional[Union[List[str], str]] = None,
format: Optional[str] = None,
h: Optional[Any] = None,
help: Optional[bool] = None,
human: Optional[bool] = None,
local: Optional[bool] = None,
master_timeout: Optional[Any] = None,
pretty: Optional[bool] = None,
s: Optional[List[str]] = None,
v: Optional[bool] = None,
) -> Union[ObjectApiResponse[Any], TextApiResponse]:
"""
Shows information about currently configured aliases to indices including filter
and routing infos.
`<https://www.elastic.co/guide/en/elasticsearch/reference/master/cat-alias.html>`_
:param name: A comma-separated list of alias names to return
:param expand_wildcards: Whether to expand wildcard expression to concrete indices
that are open, closed or both.
:param format: Specifies the format to return the columnar data in, can be set
to `text`, `json`, `cbor`, `yaml`, or `smile`.
:param h: List of columns to appear in the response. Supports simple wildcards.
:param help: When set to `true` will output available columns. This option can't
be combined with any other query string option.
:param local: If `true`, the request computes the list of selected nodes from
the local cluster state. If `false` the list of selected nodes are computed
from the cluster state of the master node. In both cases the coordinating
node will send requests for further information to each selected node.
:param master_timeout: Period to wait for a connection to the master node.
:param s: List of columns that determine how the table should be sorted. Sorting
defaults to ascending and can be changed by setting `:asc` or `:desc` as
a suffix to the column name.
:param v: When set to `true` will enable verbose output.
"""
if name not in SKIP_IN_PATH:
__path = f"/_cat/aliases/{_quote(name)}"
else:
__path = "/_cat/aliases"
__query: Dict[str, Any] = {}
if error_trace is not None:
__query["error_trace"] = error_trace
if expand_wildcards is not None:
__query["expand_wildcards"] = expand_wildcards
if filter_path is not None:
__query["filter_path"] = filter_path
if format is not None:
__query["format"] = format
if h is not None:
__query["h"] = h
if help is not None:
__query["help"] = help
if human is not None:
__query["human"] = human
if local is not None:
__query["local"] = local
if master_timeout is not None:
__query["master_timeout"] = master_timeout
if pretty is not None:
__query["pretty"] = pretty
if s is not None:
__query["s"] = s
if v is not None:
__query["v"] = v
if __query:
__target = f"{__path}?{_quote_query(__query)}"
else:
__target = __path
__headers = {"accept": "text/plain,application/json"}
return self._perform_request("GET", __target, headers=__headers) | [
"def",
"aliases",
"(",
"self",
",",
"*",
",",
"name",
":",
"Optional",
"[",
"Any",
"]",
"=",
"None",
",",
"error_trace",
":",
"Optional",
"[",
"bool",
"]",
"=",
"None",
",",
"expand_wildcards",
":",
"Optional",
"[",
"Any",
"]",
"=",
"None",
",",
"filter_path",
":",
"Optional",
"[",
"Union",
"[",
"List",
"[",
"str",
"]",
",",
"str",
"]",
"]",
"=",
"None",
",",
"format",
":",
"Optional",
"[",
"str",
"]",
"=",
"None",
",",
"h",
":",
"Optional",
"[",
"Any",
"]",
"=",
"None",
",",
"help",
":",
"Optional",
"[",
"bool",
"]",
"=",
"None",
",",
"human",
":",
"Optional",
"[",
"bool",
"]",
"=",
"None",
",",
"local",
":",
"Optional",
"[",
"bool",
"]",
"=",
"None",
",",
"master_timeout",
":",
"Optional",
"[",
"Any",
"]",
"=",
"None",
",",
"pretty",
":",
"Optional",
"[",
"bool",
"]",
"=",
"None",
",",
"s",
":",
"Optional",
"[",
"List",
"[",
"str",
"]",
"]",
"=",
"None",
",",
"v",
":",
"Optional",
"[",
"bool",
"]",
"=",
"None",
",",
")",
"->",
"Union",
"[",
"ObjectApiResponse",
"[",
"Any",
"]",
",",
"TextApiResponse",
"]",
":",
"if",
"name",
"not",
"in",
"SKIP_IN_PATH",
":",
"__path",
"=",
"f\"/_cat/aliases/{_quote(name)}\"",
"else",
":",
"__path",
"=",
"\"/_cat/aliases\"",
"__query",
":",
"Dict",
"[",
"str",
",",
"Any",
"]",
"=",
"{",
"}",
"if",
"error_trace",
"is",
"not",
"None",
":",
"__query",
"[",
"\"error_trace\"",
"]",
"=",
"error_trace",
"if",
"expand_wildcards",
"is",
"not",
"None",
":",
"__query",
"[",
"\"expand_wildcards\"",
"]",
"=",
"expand_wildcards",
"if",
"filter_path",
"is",
"not",
"None",
":",
"__query",
"[",
"\"filter_path\"",
"]",
"=",
"filter_path",
"if",
"format",
"is",
"not",
"None",
":",
"__query",
"[",
"\"format\"",
"]",
"=",
"format",
"if",
"h",
"is",
"not",
"None",
":",
"__query",
"[",
"\"h\"",
"]",
"=",
"h",
"if",
"help",
"is",
"not",
"None",
":",
"__query",
"[",
"\"help\"",
"]",
"=",
"help",
"if",
"human",
"is",
"not",
"None",
":",
"__query",
"[",
"\"human\"",
"]",
"=",
"human",
"if",
"local",
"is",
"not",
"None",
":",
"__query",
"[",
"\"local\"",
"]",
"=",
"local",
"if",
"master_timeout",
"is",
"not",
"None",
":",
"__query",
"[",
"\"master_timeout\"",
"]",
"=",
"master_timeout",
"if",
"pretty",
"is",
"not",
"None",
":",
"__query",
"[",
"\"pretty\"",
"]",
"=",
"pretty",
"if",
"s",
"is",
"not",
"None",
":",
"__query",
"[",
"\"s\"",
"]",
"=",
"s",
"if",
"v",
"is",
"not",
"None",
":",
"__query",
"[",
"\"v\"",
"]",
"=",
"v",
"if",
"__query",
":",
"__target",
"=",
"f\"{__path}?{_quote_query(__query)}\"",
"else",
":",
"__target",
"=",
"__path",
"__headers",
"=",
"{",
"\"accept\"",
":",
"\"text/plain,application/json\"",
"}",
"return",
"self",
".",
"_perform_request",
"(",
"\"GET\"",
",",
"__target",
",",
"headers",
"=",
"__headers",
")"
] | https://github.com/elastic/elasticsearch-py/blob/6ef1adfa3c840a84afda7369cd8e43ae7dc45cdb/elasticsearch/_sync/client/cat.py#L28-L103 |
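A hedged usage sketch for the `CatClient.aliases` record above; it assumes a reachable cluster at the hypothetical address below and uses only parameters documented in the record.

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical local test cluster

# Plain-text table; v=True adds a header row with column names.
print(client.cat.aliases(v=True))

# format="json" switches the cat endpoint to structured output.
for row in client.cat.aliases(format="json"):
    print(row["alias"], "->", row["index"])
```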
|
wxWidgets/Phoenix | b2199e299a6ca6d866aa6f3d0888499136ead9d6 | wx/py/crustslices.py | python | CrustSlicesFrame.bufferNew | (self) | return cancel | Create new buffer. | Create new buffer. | [
"Create",
"new",
"buffer",
"."
] | def bufferNew(self):
"""Create new buffer."""
cancel = self.bufferSuggestSave()
if cancel:
return cancel
self.sliceshell.clear()
self.SetTitle( 'PySlices')
self.sliceshell.NeedsCheckForSave=False
self.sliceshell.SetSavePoint()
self.buffer.doc = document.Document()
self.buffer.name = 'This shell'
self.buffer.modulename = self.buffer.doc.filebase
#self.bufferCreate()
cancel = False
return cancel | [
"def",
"bufferNew",
"(",
"self",
")",
":",
"cancel",
"=",
"self",
".",
"bufferSuggestSave",
"(",
")",
"if",
"cancel",
":",
"return",
"cancel",
"self",
".",
"sliceshell",
".",
"clear",
"(",
")",
"self",
".",
"SetTitle",
"(",
"'PySlices'",
")",
"self",
".",
"sliceshell",
".",
"NeedsCheckForSave",
"=",
"False",
"self",
".",
"sliceshell",
".",
"SetSavePoint",
"(",
")",
"self",
".",
"buffer",
".",
"doc",
"=",
"document",
".",
"Document",
"(",
")",
"self",
".",
"buffer",
".",
"name",
"=",
"'This shell'",
"self",
".",
"buffer",
".",
"modulename",
"=",
"self",
".",
"buffer",
".",
"doc",
".",
"filebase",
"#self.bufferCreate()",
"cancel",
"=",
"False",
"return",
"cancel"
] | https://github.com/wxWidgets/Phoenix/blob/b2199e299a6ca6d866aa6f3d0888499136ead9d6/wx/py/crustslices.py#L262-L276 |
|
kamalgill/flask-appengine-template | 11760f83faccbb0d0afe416fc58e67ecfb4643c2 | src/lib/click/decorators.py | python | make_pass_decorator | (object_type, ensure=False) | return decorator | Given an object type this creates a decorator that will work
similar to :func:`pass_obj` but instead of passing the object of the
current context, it will find the innermost context of type
:func:`object_type`.
This generates a decorator that works roughly like this::
from functools import update_wrapper
def decorator(f):
@pass_context
def new_func(ctx, *args, **kwargs):
obj = ctx.find_object(object_type)
return ctx.invoke(f, obj, *args, **kwargs)
return update_wrapper(new_func, f)
return decorator
:param object_type: the type of the object to pass.
:param ensure: if set to `True`, a new object will be created and
remembered on the context if it's not there yet. | Given an object type this creates a decorator that will work
similar to :func:`pass_obj` but instead of passing the object of the
current context, it will find the innermost context of type
:func:`object_type`. | [
"Given",
"an",
"object",
"type",
"this",
"creates",
"a",
"decorator",
"that",
"will",
"work",
"similar",
"to",
":",
"func",
":",
"pass_obj",
"but",
"instead",
"of",
"passing",
"the",
"object",
"of",
"the",
"current",
"context",
"it",
"will",
"find",
"the",
"innermost",
"context",
"of",
"type",
":",
"func",
":",
"object_type",
"."
] | def make_pass_decorator(object_type, ensure=False):
"""Given an object type this creates a decorator that will work
similar to :func:`pass_obj` but instead of passing the object of the
current context, it will find the innermost context of type
:func:`object_type`.
This generates a decorator that works roughly like this::
from functools import update_wrapper
def decorator(f):
@pass_context
def new_func(ctx, *args, **kwargs):
obj = ctx.find_object(object_type)
return ctx.invoke(f, obj, *args, **kwargs)
return update_wrapper(new_func, f)
return decorator
:param object_type: the type of the object to pass.
:param ensure: if set to `True`, a new object will be created and
remembered on the context if it's not there yet.
"""
def decorator(f):
def new_func(*args, **kwargs):
ctx = get_current_context()
if ensure:
obj = ctx.ensure_object(object_type)
else:
obj = ctx.find_object(object_type)
if obj is None:
raise RuntimeError('Managed to invoke callback without a '
'context object of type %r existing'
% object_type.__name__)
return ctx.invoke(f, obj, *args[1:], **kwargs)
return update_wrapper(new_func, f)
return decorator | [
"def",
"make_pass_decorator",
"(",
"object_type",
",",
"ensure",
"=",
"False",
")",
":",
"def",
"decorator",
"(",
"f",
")",
":",
"def",
"new_func",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"ctx",
"=",
"get_current_context",
"(",
")",
"if",
"ensure",
":",
"obj",
"=",
"ctx",
".",
"ensure_object",
"(",
"object_type",
")",
"else",
":",
"obj",
"=",
"ctx",
".",
"find_object",
"(",
"object_type",
")",
"if",
"obj",
"is",
"None",
":",
"raise",
"RuntimeError",
"(",
"'Managed to invoke callback without a '",
"'context object of type %r existing'",
"%",
"object_type",
".",
"__name__",
")",
"return",
"ctx",
".",
"invoke",
"(",
"f",
",",
"obj",
",",
"*",
"args",
"[",
"1",
":",
"]",
",",
"*",
"*",
"kwargs",
")",
"return",
"update_wrapper",
"(",
"new_func",
",",
"f",
")",
"return",
"decorator"
] | https://github.com/kamalgill/flask-appengine-template/blob/11760f83faccbb0d0afe416fc58e67ecfb4643c2/src/lib/click/decorators.py#L31-L66 |
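The usual pattern for the `make_pass_decorator` record above: build the decorator once, then reuse it so the group and its subcommands share one context object. The `Config` class and the option are illustrative, not part of the record.

```python
import click

class Config(object):
    def __init__(self):
        self.verbose = False

# ensure=True creates a Config on the context the first time it is requested
pass_config = click.make_pass_decorator(Config, ensure=True)

@click.group()
@click.option('--verbose', is_flag=True)
@pass_config
def cli(config, verbose):
    config.verbose = verbose

@cli.command()
@pass_config
def status(config):
    click.echo('verbose is %s' % config.verbose)
```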
|
PythonCharmers/python-future | 80523f383fbba1c6de0551e19d0277e73e69573c | src/future/standard_library/__init__.py | python | cache_py2_modules | () | Currently this function is unneeded, as we are not attempting to provide import hooks
for modules with ambiguous names: email, urllib, pickle. | Currently this function is unneeded, as we are not attempting to provide import hooks
for modules with ambiguous names: email, urllib, pickle. | [
"Currently",
"this",
"function",
"is",
"unneeded",
"as",
"we",
"are",
"not",
"attempting",
"to",
"provide",
"import",
"hooks",
"for",
"modules",
"with",
"ambiguous",
"names",
":",
"email",
"urllib",
"pickle",
"."
] | def cache_py2_modules():
"""
Currently this function is unneeded, as we are not attempting to provide import hooks
for modules with ambiguous names: email, urllib, pickle.
"""
if len(sys.py2_modules) != 0:
return
assert not detect_hooks()
import urllib
sys.py2_modules['urllib'] = urllib
import email
sys.py2_modules['email'] = email
import pickle
sys.py2_modules['pickle'] = pickle | [
"def",
"cache_py2_modules",
"(",
")",
":",
"if",
"len",
"(",
"sys",
".",
"py2_modules",
")",
"!=",
"0",
":",
"return",
"assert",
"not",
"detect_hooks",
"(",
")",
"import",
"urllib",
"sys",
".",
"py2_modules",
"[",
"'urllib'",
"]",
"=",
"urllib",
"import",
"email",
"sys",
".",
"py2_modules",
"[",
"'email'",
"]",
"=",
"email",
"import",
"pickle",
"sys",
".",
"py2_modules",
"[",
"'pickle'",
"]",
"=",
"pickle"
] | https://github.com/PythonCharmers/python-future/blob/80523f383fbba1c6de0551e19d0277e73e69573c/src/future/standard_library/__init__.py#L600-L615 |
||
chribsen/simple-machine-learning-examples | dc94e52a4cebdc8bb959ff88b81ff8cfeca25022 | venv/lib/python2.7/site-packages/sklearn/model_selection/_search.py | python | BaseSearchCV.transform | (self, X) | return self.best_estimator_.transform(X) | Call transform on the estimator with the best found parameters.
Only available if the underlying estimator supports ``transform`` and
``refit=True``.
Parameters
-----------
X : indexable, length n_samples
Must fulfill the input assumptions of the
underlying estimator. | Call transform on the estimator with the best found parameters. | [
"Call",
"transform",
"on",
"the",
"estimator",
"with",
"the",
"best",
"found",
"parameters",
"."
] | def transform(self, X):
"""Call transform on the estimator with the best found parameters.
Only available if the underlying estimator supports ``transform`` and
``refit=True``.
Parameters
-----------
X : indexable, length n_samples
Must fulfill the input assumptions of the
underlying estimator.
"""
self._check_is_fitted('transform')
return self.best_estimator_.transform(X) | [
"def",
"transform",
"(",
"self",
",",
"X",
")",
":",
"self",
".",
"_check_is_fitted",
"(",
"'transform'",
")",
"return",
"self",
".",
"best_estimator_",
".",
"transform",
"(",
"X",
")"
] | https://github.com/chribsen/simple-machine-learning-examples/blob/dc94e52a4cebdc8bb959ff88b81ff8cfeca25022/venv/lib/python2.7/site-packages/sklearn/model_selection/_search.py#L502-L516 |
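A sketch for the `transform` record above: the method only works when the searched estimator itself supports `transform` and the search was refit. `PCA` qualifies because it also defines a default `score`, so `GridSearchCV` needs no explicit scorer.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV

X, _ = load_iris(return_X_y=True)
search = GridSearchCV(PCA(), {"n_components": [1, 2, 3]}, cv=3)
search.fit(X)                    # refit=True by default, so best_estimator_ is fitted
X_reduced = search.transform(X)  # delegates to search.best_estimator_.transform
print(search.best_params_, X_reduced.shape)
```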
|
sagemath/sage | f9b2db94f675ff16963ccdefba4f1a3393b3fe0d | src/sage/rings/polynomial/convolution.py | python | _combine | (L, m, k) | return [L[0][j] for j in range(half_K)] + \
[L[i+1][j] + L[i][j+half_K] \
for i in range(M-1) for j in range(half_K)] | r"""
Assumes L is a list of length `2^m`, each entry a list of
length `2^k`. Combines together into a single list,
effectively inverting ``_split()``, but overlaying
coefficients, i.e. list #i gets added in starting at position
`2^{k-1} i`. Note that the second half of the last list is
ignored. | r"""
Assumes L is a list of length `2^m`, each entry a list of
length `2^k`. Combines together into a single list,
effectively inverting ``_split()``, but overlaying
coefficients, i.e. list #i gets added in starting at position
`2^{k-1} i`. Note that the second half of the last list is
ignored. | [
"r",
"Assumes",
"L",
"is",
"a",
"list",
"of",
"length",
"2^m",
"each",
"entry",
"a",
"list",
"of",
"length",
"2^k",
".",
"Combines",
"together",
"into",
"a",
"single",
"list",
"effectively",
"inverting",
"_split",
"()",
"but",
"overlaying",
"coefficients",
"i",
".",
"e",
".",
"list",
"#i",
"gets",
"added",
"in",
"starting",
"at",
"position",
"2^",
"{",
"k",
"-",
"1",
"}",
"i",
".",
"Note",
"that",
"the",
"second",
"half",
"of",
"the",
"last",
"list",
"is",
"ignored",
"."
] | def _combine(L, m, k):
r"""
Assumes L is a list of length `2^m`, each entry a list of
length `2^k`. Combines together into a single list,
effectively inverting ``_split()``, but overlaying
coefficients, i.e. list #i gets added in starting at position
`2^{k-1} i`. Note that the second half of the last list is
ignored.
"""
M = 1 << m
half_K = 1 << (k-1)
return [L[0][j] for j in range(half_K)] + \
[L[i+1][j] + L[i][j+half_K] \
for i in range(M-1) for j in range(half_K)] | [
"def",
"_combine",
"(",
"L",
",",
"m",
",",
"k",
")",
":",
"M",
"=",
"1",
"<<",
"m",
"half_K",
"=",
"1",
"<<",
"(",
"k",
"-",
"1",
")",
"return",
"[",
"L",
"[",
"0",
"]",
"[",
"j",
"]",
"for",
"j",
"in",
"range",
"(",
"half_K",
")",
"]",
"+",
"[",
"L",
"[",
"i",
"+",
"1",
"]",
"[",
"j",
"]",
"+",
"L",
"[",
"i",
"]",
"[",
"j",
"+",
"half_K",
"]",
"for",
"i",
"in",
"range",
"(",
"M",
"-",
"1",
")",
"for",
"j",
"in",
"range",
"(",
"half_K",
")",
"]"
] | https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/rings/polynomial/convolution.py#L264-L277 |
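A worked example for the `_combine` record above. Because the helper is module-private in Sage, the logic is inlined here as a standalone sketch: 2^m sublists of length 2^k are overlaid with an overlap of 2^(k-1), and the second half of the last sublist is discarded.

```python
def _combine(L, m, k):
    # inlined copy of the record's logic, for illustration only
    M = 1 << m
    half_K = 1 << (k - 1)
    return [L[0][j] for j in range(half_K)] + \
           [L[i + 1][j] + L[i][j + half_K]
            for i in range(M - 1) for j in range(half_K)]

pieces = [[1, 2, 3, 4], [10, 20, 30, 40]]  # 2**m = 2 lists of length 2**k = 4
print(_combine(pieces, 1, 2))              # [1, 2, 13, 24]; the trailing 30, 40 are ignored
```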
|
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_flaskbb/lib/python2.7/site-packages/redis/client.py | python | StrictRedis.hexists | (self, name, key) | return self.execute_command('HEXISTS', name, key) | Returns a boolean indicating if ``key`` exists within hash ``name`` | Returns a boolean indicating if ``key`` exists within hash ``name`` | [
"Returns",
"a",
"boolean",
"indicating",
"if",
"key",
"exists",
"within",
"hash",
"name"
] | def hexists(self, name, key):
"Returns a boolean indicating if ``key`` exists within hash ``name``"
return self.execute_command('HEXISTS', name, key) | [
"def",
"hexists",
"(",
"self",
",",
"name",
",",
"key",
")",
":",
"return",
"self",
".",
"execute_command",
"(",
"'HEXISTS'",
",",
"name",
",",
"key",
")"
] | https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/lib/python2.7/site-packages/redis/client.py#L1957-L1959 |
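A quick sketch for the `hexists` record above, assuming a Redis server is listening on the default port:

```python
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)
r.hset("user:1", "name", "alice")
print(r.hexists("user:1", "name"))   # True
print(r.hexists("user:1", "email"))  # False
```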
|
thunlp/HATT-Proto | 8630f048ecc52714dda45e3d731ec68156439b4f | fewshot_re_kit/framework.py | python | FewShotREFramework.eval | (self,
model,
B, N, K, Q,
eval_iter,
ckpt=None,
noise_rate=0) | return iter_right / iter_sample | model: a FewShotREModel instance
B: Batch size
N: Num of classes for each batch
K: Num of instances for each class in the support set
Q: Num of instances for each class in the query set
eval_iter: Num of iterations
ckpt: Checkpoint path. Set as None if using current model parameters.
return: Accuracy | model: a FewShotREModel instance
B: Batch size
N: Num of classes for each batch
K: Num of instances for each class in the support set
Q: Num of instances for each class in the query set
eval_iter: Num of iterations
ckpt: Checkpoint path. Set as None if using current model parameters.
return: Accuracy | [
"model",
":",
"a",
"FewShotREModel",
"instance",
"B",
":",
"Batch",
"size",
"N",
":",
"Num",
"of",
"classes",
"for",
"each",
"batch",
"K",
":",
"Num",
"of",
"instances",
"for",
"each",
"class",
"in",
"the",
"support",
"set",
"Q",
":",
"Num",
"of",
"instances",
"for",
"each",
"class",
"in",
"the",
"query",
"set",
"eval_iter",
":",
"Num",
"of",
"iterations",
"ckpt",
":",
"Checkpoint",
"path",
".",
"Set",
"as",
"None",
"if",
"using",
"current",
"model",
"parameters",
".",
"return",
":",
"Accuracy"
] | def eval(self,
model,
B, N, K, Q,
eval_iter,
ckpt=None,
noise_rate=0):
'''
model: a FewShotREModel instance
B: Batch size
N: Num of classes for each batch
K: Num of instances for each class in the support set
Q: Num of instances for each class in the query set
eval_iter: Num of iterations
ckpt: Checkpoint path. Set as None if using current model parameters.
return: Accuracy
'''
print("")
model.eval()
if ckpt is None:
eval_dataset = self.val_data_loader
else:
checkpoint = self.__load_model__(ckpt)
model.load_state_dict(checkpoint['state_dict'])
eval_dataset = self.test_data_loader
iter_right = 0.0
iter_sample = 0.0
for it in range(eval_iter):
support, query, label = eval_dataset.next_batch(B, N, K, Q, noise_rate=noise_rate)
logits, pred = model(support, query, N, K, Q)
right = model.accuracy(pred, label)
iter_right += self.item(right.data)
iter_sample += 1
sys.stdout.write('[EVAL] step: {0:4} | accuracy: {1:3.2f}%'.format(it + 1, 100 * iter_right / iter_sample) +'\r')
sys.stdout.flush()
print("")
return iter_right / iter_sample | [
"def",
"eval",
"(",
"self",
",",
"model",
",",
"B",
",",
"N",
",",
"K",
",",
"Q",
",",
"eval_iter",
",",
"ckpt",
"=",
"None",
",",
"noise_rate",
"=",
"0",
")",
":",
"print",
"(",
"\"\"",
")",
"model",
".",
"eval",
"(",
")",
"if",
"ckpt",
"is",
"None",
":",
"eval_dataset",
"=",
"self",
".",
"val_data_loader",
"else",
":",
"checkpoint",
"=",
"self",
".",
"__load_model__",
"(",
"ckpt",
")",
"model",
".",
"load_state_dict",
"(",
"checkpoint",
"[",
"'state_dict'",
"]",
")",
"eval_dataset",
"=",
"self",
".",
"test_data_loader",
"iter_right",
"=",
"0.0",
"iter_sample",
"=",
"0.0",
"for",
"it",
"in",
"range",
"(",
"eval_iter",
")",
":",
"support",
",",
"query",
",",
"label",
"=",
"eval_dataset",
".",
"next_batch",
"(",
"B",
",",
"N",
",",
"K",
",",
"Q",
",",
"noise_rate",
"=",
"noise_rate",
")",
"logits",
",",
"pred",
"=",
"model",
"(",
"support",
",",
"query",
",",
"N",
",",
"K",
",",
"Q",
")",
"right",
"=",
"model",
".",
"accuracy",
"(",
"pred",
",",
"label",
")",
"iter_right",
"+=",
"self",
".",
"item",
"(",
"right",
".",
"data",
")",
"iter_sample",
"+=",
"1",
"sys",
".",
"stdout",
".",
"write",
"(",
"'[EVAL] step: {0:4} | accuracy: {1:3.2f}%'",
".",
"format",
"(",
"it",
"+",
"1",
",",
"100",
"*",
"iter_right",
"/",
"iter_sample",
")",
"+",
"'\\r'",
")",
"sys",
".",
"stdout",
".",
"flush",
"(",
")",
"print",
"(",
"\"\"",
")",
"return",
"iter_right",
"/",
"iter_sample"
] | https://github.com/thunlp/HATT-Proto/blob/8630f048ecc52714dda45e3d731ec68156439b4f/fewshot_re_kit/framework.py#L182-L219 |
|
oilshell/oil | 94388e7d44a9ad879b12615f6203b38596b5a2d3 | oil_lang/funcs_builtin.py | python | _Append | (L, arg) | [] | def _Append(L, arg):
L.append(arg) | [
"def",
"_Append",
"(",
"L",
",",
"arg",
")",
":",
"L",
".",
"append",
"(",
"arg",
")"
] | https://github.com/oilshell/oil/blob/94388e7d44a9ad879b12615f6203b38596b5a2d3/oil_lang/funcs_builtin.py#L58-L59 |
||||
sao-eht/eat | 26062fe552fc304a3efc26274e8ed56ddbda323d | eat/aips/__init__.py | python | _source | (script, replace=None, update=True) | Source variables from a shell script
import them in the environment (if update==True) | Source variables from a shell script
import them in the environment (if update==True) | [
"Source",
"variables",
"from",
"a",
"shell",
"script",
"import",
"them",
"in",
"the",
"environment",
"(",
"if",
"update",
"==",
"True",
")"
] | def _source(script, replace=None, update=True):
"""
Source variables from a shell script
import them in the environment (if update==True)
"""
from subprocess import Popen, PIPE
from os import environ
import os
if os.path.isfile(script) is False:
errmsg = "'%s' is not available"%(script)
raise ValueError(errmsg)
else:
print((" Reading Environmental Variables from %s"%(script)))
pipe = Popen(". %s > /dev/null 2>&1; env" % script, stdout=PIPE, shell=True)
data = str(pipe.communicate()[0])
env = dict((line.split("=", 1) for line in data.splitlines()))
if replace is not None:
for key in list(env.keys()):
value = env[key]
if replace[0] in value:
env[key] = env[key].replace(replace[0], replace[1])
if update:
environ.update(env)
else:
return env | [
"def",
"_source",
"(",
"script",
",",
"replace",
"=",
"None",
",",
"update",
"=",
"True",
")",
":",
"from",
"subprocess",
"import",
"Popen",
",",
"PIPE",
"from",
"os",
"import",
"environ",
"import",
"os",
"if",
"os",
".",
"path",
".",
"isfile",
"(",
"script",
")",
"is",
"False",
":",
"errmsg",
"=",
"\"'%s' is not available\"",
"%",
"(",
"script",
")",
"raise",
"ValueError",
"(",
"errmsg",
")",
"else",
":",
"print",
"(",
"(",
"\" Reading Environmental Variables from %s\"",
"%",
"(",
"script",
")",
")",
")",
"pipe",
"=",
"Popen",
"(",
"\". %s > /dev/null 2>&1; env\"",
"%",
"script",
",",
"stdout",
"=",
"PIPE",
",",
"shell",
"=",
"True",
")",
"data",
"=",
"str",
"(",
"pipe",
".",
"communicate",
"(",
")",
"[",
"0",
"]",
")",
"env",
"=",
"dict",
"(",
"(",
"line",
".",
"split",
"(",
"\"=\"",
",",
"1",
")",
"for",
"line",
"in",
"data",
".",
"splitlines",
"(",
")",
")",
")",
"if",
"replace",
"is",
"not",
"None",
":",
"for",
"key",
"in",
"list",
"(",
"env",
".",
"keys",
"(",
")",
")",
":",
"value",
"=",
"env",
"[",
"key",
"]",
"if",
"replace",
"[",
"0",
"]",
"in",
"value",
":",
"env",
"[",
"key",
"]",
"=",
"env",
"[",
"key",
"]",
".",
"replace",
"(",
"replace",
"[",
"0",
"]",
",",
"replace",
"[",
"1",
"]",
")",
"if",
"update",
":",
"environ",
".",
"update",
"(",
"env",
")",
"else",
":",
"return",
"env"
] | https://github.com/sao-eht/eat/blob/26062fe552fc304a3efc26274e8ed56ddbda323d/eat/aips/__init__.py#L125-L152 |
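A hedged sketch for the `_source` record above; the script path and exported variable are hypothetical. With `update=False` the parsed variables are returned for inspection, while the default `update=True` merges them into `os.environ`.

```python
import os
from eat.aips import _source  # module path taken from the record

# /opt/aips/LOGIN.SH is a hypothetical script that exports AIPS_ROOT.
env = _source("/opt/aips/LOGIN.SH", update=False)  # inspect without side effects
print(env.get("AIPS_ROOT"))

_source("/opt/aips/LOGIN.SH")                      # default: merge into os.environ
print(os.environ.get("AIPS_ROOT"))
```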
||
dagwieers/mrepo | a55cbc737d8bade92070d38e4dbb9a24be4b477f | up2date_client/repoBackends/aptRepo.py | python | AptRepoSource.getHeader | (self, package, msgCallback = None, progressCallback = None) | return None | [] | def getHeader(self, package, msgCallback = None, progressCallback = None):
# there are weird cases where this can happen, mostly as a result of
# mucking with things in /var/spool/up2date
#
# not a particularly efficient way to get the header, but we should
# not get hit very often
return None | [
"def",
"getHeader",
"(",
"self",
",",
"package",
",",
"msgCallback",
"=",
"None",
",",
"progressCallback",
"=",
"None",
")",
":",
"# there are weird cases where this can happen, mostly as a result of",
"# mucking with things in /var/spool/up2date",
"#",
"# not a particularly effiencent way to get the header, but we should",
"# not get hit very often",
"return",
"None"
] | https://github.com/dagwieers/mrepo/blob/a55cbc737d8bade92070d38e4dbb9a24be4b477f/up2date_client/repoBackends/aptRepo.py#L198-L205 |
|||
pyparallel/pyparallel | 11e8c6072d48c8f13641925d17b147bf36ee0ba3 | Lib/site-packages/ipython-4.0.0-py3.3.egg/IPython/utils/data.py | python | flatten | (seq) | return [x for subseq in seq for x in subseq] | Flatten a list of lists (NOT recursive, only works for 2d lists). | Flatten a list of lists (NOT recursive, only works for 2d lists). | [
"Flatten",
"a",
"list",
"of",
"lists",
"(",
"NOT",
"recursive",
"only",
"works",
"for",
"2d",
"lists",
")",
"."
] | def flatten(seq):
"""Flatten a list of lists (NOT recursive, only works for 2d lists)."""
return [x for subseq in seq for x in subseq] | [
"def",
"flatten",
"(",
"seq",
")",
":",
"return",
"[",
"x",
"for",
"subseq",
"in",
"seq",
"for",
"x",
"in",
"subseq",
"]"
] | https://github.com/pyparallel/pyparallel/blob/11e8c6072d48c8f13641925d17b147bf36ee0ba3/Lib/site-packages/ipython-4.0.0-py3.3.egg/IPython/utils/data.py#L27-L30 |
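Two calls for the `flatten` record above, showing the documented one-level-only behavior:

```python
from IPython.utils.data import flatten  # module path taken from the record

print(flatten([[1, 2], [3], [4, 5, 6]]))  # [1, 2, 3, 4, 5, 6]
print(flatten([[1, [2]], [3]]))           # [1, [2], 3] -- nested lists stay nested
```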
|
mozman/ezdxf | 59d0fc2ea63f5cf82293428f5931da7e9f9718e9 | src/ezdxf/audit.py | python | Auditor.purge | (self, codes: Set[int]) | Remove error messages defined by integer error `codes`.
This is useful to remove errors which are not important for a specific
file usage. | Remove error messages defined by integer error `codes`. | [
"Remove",
"error",
"messages",
"defined",
"by",
"integer",
"error",
"codes",
"."
] | def purge(self, codes: Set[int]):
"""Remove error messages defined by integer error `codes`.
This is useful to remove errors which are not important for a specific
file usage.
"""
self.errors = [err for err in self.errors if err.code not in codes] | [
"def",
"purge",
"(",
"self",
",",
"codes",
":",
"Set",
"[",
"int",
"]",
")",
":",
"self",
".",
"errors",
"=",
"[",
"err",
"for",
"err",
"in",
"self",
".",
"errors",
"if",
"err",
".",
"code",
"in",
"codes",
"]"
] | https://github.com/mozman/ezdxf/blob/59d0fc2ea63f5cf82293428f5931da7e9f9718e9/src/ezdxf/audit.py#L219-L226 |
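A sketch for the `Auditor.purge` record above. `doc.audit()` returns an `Auditor`; the integer codes below are placeholders for this sketch, not real `ezdxf.audit` constants.

```python
import ezdxf

doc = ezdxf.new()
auditor = doc.audit()       # findings collected in auditor.errors
TOLERATED = {100, 101}      # hypothetical error codes, for illustration only
auditor.purge(TOLERATED)    # drop findings whose code is in the set
print(len(auditor.errors))  # remaining findings
```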
||
ZENGXH/DMM_Net | a6308688cbcf411db9072aa68efbe485dde02a9b | dmm/modules/match_model.py | python | MatchModel.forward | (self, proposed_feature, proposed_mask, template_feature, mask_last_occurence, proposal_score, targets=None) | return full_outmask, match_score, det_score, full_outmask, match_loss | matching layer compute the cost matrix and perform matching
Arguments"
proposed_feature: p,256; PHW
proposed_mask: p,h.w: PHW
template_feature: list of template_feature [o,256]:
mask_last_occurence: PHW
proposal_score
targets: OHW
return: | matching layer compute the cost matrix and perform matching
Arguments"
proposed_feature: p,256; PHW
proposed_mask: p,h.w: PHW
template_feature: list of template_feature [o,256]:
mask_last_occurence: PHW
proposal_score
targets: OHW
return: | [
"matching",
"layer",
"compute",
"the",
"cost",
"matrix",
"and",
"perform",
"matching",
"Arguments",
"proposed_feature",
":",
"p",
"256",
";",
"PHW",
"proposed_mask",
":",
"p",
"h",
".",
"w",
":",
"PHW",
"template_feature",
":",
"list",
"of",
"template_feature",
"[",
"o",
"256",
"]",
":",
"mask_last_occurence",
":",
"PHW",
"proposal_score",
"targets",
":",
"OHW",
"return",
":"
] | def forward(self, proposed_feature, proposed_mask, template_feature, mask_last_occurence, proposal_score, targets=None):
""" matching layer compute the cost matrix and perform matching
Arguments"
proposed_feature: p,256; PHW
proposed_mask: p,h.w: PHW
template_feature: list of template_feature [o,256]:
mask_last_occurence: PHW
proposal_score
targets: OHW
return:
"""
features = {'proposed': proposed_feature, # PXX
"template":template_feature} # OXX
mask = {'proposed': proposed_mask, # PHW
'template':mask_last_occurence}# OHW
scores = {'proposal_score':proposal_score}
# loss = {}
# compute Cosine Distance
target_sim_matrix, num_K, num_Q, match_loss = self.compute_cost_matrix(features, mask, scores, targets)
n_prop = proposed_mask.shape[0] # P,H,W
n_tplt = template_feature[0].shape[0] # O,D
full_outmask, match_score, det_score, logic_mask, assign_matrix = self.match_with_first_frame(
target_sim_matrix, n_prop, n_tplt, proposed_mask.float(), proposal_score, mask_last_occurence)
return full_outmask, match_score, det_score, full_outmask, match_loss | [
"def",
"forward",
"(",
"self",
",",
"proposed_feature",
",",
"proposed_mask",
",",
"template_feature",
",",
"mask_last_occurence",
",",
"proposal_score",
",",
"targets",
"=",
"None",
")",
":",
"features",
"=",
"{",
"'proposed'",
":",
"proposed_feature",
",",
"# PXX",
"\"template\"",
":",
"template_feature",
"}",
"# OXX ",
"mask",
"=",
"{",
"'proposed'",
":",
"proposed_mask",
",",
"# PHW ",
"'template'",
":",
"mask_last_occurence",
"}",
"# OHW",
"scores",
"=",
"{",
"'proposal_score'",
":",
"proposal_score",
"}",
"# loss = {}",
"# compute Cosine Distance ",
"target_sim_matrix",
",",
"num_K",
",",
"num_Q",
",",
"match_loss",
"=",
"self",
".",
"compute_cost_matrix",
"(",
"features",
",",
"mask",
",",
"scores",
",",
"targets",
")",
"n_prop",
"=",
"proposed_mask",
".",
"shape",
"[",
"0",
"]",
"# P,H,W",
"n_tplt",
"=",
"template_feature",
"[",
"0",
"]",
".",
"shape",
"[",
"0",
"]",
"# O,D",
"full_outmask",
",",
"match_score",
",",
"det_score",
",",
"logic_mask",
",",
"assign_matrix",
"=",
"self",
".",
"match_with_first_frame",
"(",
"target_sim_matrix",
",",
"n_prop",
",",
"n_tplt",
",",
"proposed_mask",
".",
"float",
"(",
")",
",",
"proposal_score",
",",
"mask_last_occurence",
")",
"return",
"full_outmask",
",",
"match_score",
",",
"det_score",
",",
"full_outmask",
",",
"match_loss"
] | https://github.com/ZENGXH/DMM_Net/blob/a6308688cbcf411db9072aa68efbe485dde02a9b/dmm/modules/match_model.py#L24-L47 |
|
Pymol-Scripts/Pymol-script-repo | bcd7bb7812dc6db1595953dfa4471fa15fb68c77 | modules/pdb2pqr/src/quatfit.py | python | qchichange | (initcoords, refcoords, angle) | return newcoords | Change the chiangle of the reference coordinate using the
initcoords and the given angle
Parameters
initcoords: Coordinates based on the point and basis atoms
(one dimensional list)
angle : The angle to use (float)
refcoords : The atoms to analyze (list of many coordinates)
Returns
newcoords : The new coordinates of the atoms (list of many coords) | Change the chiangle of the reference coordinate using the
initcoords and the given angle | [
"Change",
"the",
"chiangle",
"of",
"the",
"reference",
"coordinate",
"using",
"the",
"initcoords",
"and",
"the",
"given",
"angle"
] | def qchichange(initcoords, refcoords, angle):
"""
Change the chiangle of the reference coordinate using the
initcoords and the given angle
Parameters
initcoords: Coordinates based on the point and basis atoms
(one dimensional list)
angle : The angle to use (float)
refcoords : The atoms to analyze (list of many coordinates)
Returns
newcoords : The new coordinates of the atoms (list of many coords)
"""
# Initialize
L,R = [],[]
for i in range(3):
L.append(0.0)
R.append([0.0,0.0,0.0])
# Convert to radians and normalize
radangle = math.pi * angle/180.0
normalized = normalize(initcoords)
L[0] = normalized[0]
L[1] = normalized[1]
L[2] = normalized[2]
# Construct the rotation matrix
R[0][0] = math.cos(radangle) + L[0]*L[0] * (1.0 - math.cos(radangle))
R[1][1] = math.cos(radangle) + L[1]*L[1] * (1.0 - math.cos(radangle))
R[2][2] = math.cos(radangle) + L[2]*L[2] * (1.0 - math.cos(radangle))
R[1][0] = L[0]*L[1]*(1.0 - math.cos(radangle)) - L[2] * math.sin(radangle)
R[2][0] = L[0]*L[2]*(1.0 - math.cos(radangle)) + L[1] * math.sin(radangle)
R[0][1] = L[1]*L[0]*(1.0 - math.cos(radangle)) + L[2] * math.sin(radangle)
R[2][1] = L[1]*L[2]*(1.0 - math.cos(radangle)) - L[0] * math.sin(radangle)
R[0][2] = L[2]*L[0]*(1.0 - math.cos(radangle)) - L[1] * math.sin(radangle)
R[1][2] = L[2]*L[1]*(1.0 - math.cos(radangle)) + L[0] * math.sin(radangle)
numpoints = len(refcoords)
newcoords = rotmol(numpoints, refcoords, R)
return newcoords | [
"def",
"qchichange",
"(",
"initcoords",
",",
"refcoords",
",",
"angle",
")",
":",
"# Initialize",
"L",
",",
"R",
"=",
"[",
"]",
",",
"[",
"]",
"for",
"i",
"in",
"range",
"(",
"3",
")",
":",
"L",
".",
"append",
"(",
"0.0",
")",
"R",
".",
"append",
"(",
"[",
"0.0",
",",
"0.0",
",",
"0.0",
"]",
")",
"# Convert to radians and normalize",
"radangle",
"=",
"math",
".",
"pi",
"*",
"angle",
"/",
"180.0",
"normalized",
"=",
"normalize",
"(",
"initcoords",
")",
"L",
"[",
"0",
"]",
"=",
"normalized",
"[",
"0",
"]",
"L",
"[",
"1",
"]",
"=",
"normalized",
"[",
"1",
"]",
"L",
"[",
"2",
"]",
"=",
"normalized",
"[",
"2",
"]",
"# Construct the rotation matrix",
"R",
"[",
"0",
"]",
"[",
"0",
"]",
"=",
"math",
".",
"cos",
"(",
"radangle",
")",
"+",
"L",
"[",
"0",
"]",
"*",
"L",
"[",
"0",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"R",
"[",
"1",
"]",
"[",
"1",
"]",
"=",
"math",
".",
"cos",
"(",
"radangle",
")",
"+",
"L",
"[",
"1",
"]",
"*",
"L",
"[",
"1",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"R",
"[",
"2",
"]",
"[",
"2",
"]",
"=",
"math",
".",
"cos",
"(",
"radangle",
")",
"+",
"L",
"[",
"2",
"]",
"*",
"L",
"[",
"2",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"R",
"[",
"1",
"]",
"[",
"0",
"]",
"=",
"L",
"[",
"0",
"]",
"*",
"L",
"[",
"1",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"-",
"L",
"[",
"2",
"]",
"*",
"math",
".",
"sin",
"(",
"radangle",
")",
"R",
"[",
"2",
"]",
"[",
"0",
"]",
"=",
"L",
"[",
"0",
"]",
"*",
"L",
"[",
"2",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"+",
"L",
"[",
"1",
"]",
"*",
"math",
".",
"sin",
"(",
"radangle",
")",
"R",
"[",
"0",
"]",
"[",
"1",
"]",
"=",
"L",
"[",
"1",
"]",
"*",
"L",
"[",
"0",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"+",
"L",
"[",
"2",
"]",
"*",
"math",
".",
"sin",
"(",
"radangle",
")",
"R",
"[",
"2",
"]",
"[",
"1",
"]",
"=",
"L",
"[",
"1",
"]",
"*",
"L",
"[",
"2",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"-",
"L",
"[",
"0",
"]",
"*",
"math",
".",
"sin",
"(",
"radangle",
")",
"R",
"[",
"0",
"]",
"[",
"2",
"]",
"=",
"L",
"[",
"2",
"]",
"*",
"L",
"[",
"0",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"-",
"L",
"[",
"1",
"]",
"*",
"math",
".",
"sin",
"(",
"radangle",
")",
"R",
"[",
"1",
"]",
"[",
"2",
"]",
"=",
"L",
"[",
"2",
"]",
"*",
"L",
"[",
"1",
"]",
"*",
"(",
"1.0",
"-",
"math",
".",
"cos",
"(",
"radangle",
")",
")",
"+",
"L",
"[",
"0",
"]",
"*",
"math",
".",
"sin",
"(",
"radangle",
")",
"numpoints",
"=",
"len",
"(",
"refcoords",
")",
"newcoords",
"=",
"rotmol",
"(",
"numpoints",
",",
"refcoords",
",",
"R",
")",
"return",
"newcoords"
] | https://github.com/Pymol-Scripts/Pymol-script-repo/blob/bcd7bb7812dc6db1595953dfa4471fa15fb68c77/modules/pdb2pqr/src/quatfit.py#L137-L182 |
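An illustrative call for the `qchichange` record above. It depends on `normalize` and `rotmol` from the same module, so the import follows the record's path; axis, point, and angle are arbitrary, and the printed result is not asserted here.

```python
from quatfit import qchichange  # module path taken from the record

axis = [0.0, 0.0, 1.0]                    # initcoords: rotation axis (normalized internally)
points = [[1.0, 0.0, 0.0]]                # refcoords: coordinates to rotate
rotated = qchichange(axis, points, 90.0)  # rotate 90 degrees about the axis
print(rotated[0])                         # one rotated (x, y, z) triple
```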
|
CPqD/RouteFlow | 3f406b9c1a0796f40a86eb1194990cdd2c955f4d | pox/pox/openflow/topology.py | python | OFSyncFlowTable._handle_FlowRemoved | (self, event) | return EventContinue | process a flow removed event -- remove the matching flow from the table. | process a flow removed event -- remove the matching flow from the table. | [
"process",
"a",
"flow",
"removed",
"event",
"--",
"remove",
"the",
"matching",
"flow",
"from",
"the",
"table",
"."
] | def _handle_FlowRemoved (self, event):
"""
process a flow removed event -- remove the matching flow from the table.
"""
flow_removed = event.ofp
for entry in self.flow_table.entries:
if (flow_removed.match == entry.match
and flow_removed.priority == entry.priority):
self.flow_table.remove_entry(entry)
self.raiseEvent(FlowTableModification(removed=[entry]))
return EventHalt
return EventContinue | [
"def",
"_handle_FlowRemoved",
"(",
"self",
",",
"event",
")",
":",
"flow_removed",
"=",
"event",
".",
"ofp",
"for",
"entry",
"in",
"self",
".",
"flow_table",
".",
"entries",
":",
"if",
"(",
"flow_removed",
".",
"match",
"==",
"entry",
".",
"match",
"and",
"flow_removed",
".",
"priority",
"==",
"entry",
".",
"priority",
")",
":",
"self",
".",
"flow_table",
".",
"remove_entry",
"(",
"entry",
")",
"self",
".",
"raiseEvent",
"(",
"FlowTableModification",
"(",
"removed",
"=",
"[",
"entry",
"]",
")",
")",
"return",
"EventHalt",
"return",
"EventContinue"
] | https://github.com/CPqD/RouteFlow/blob/3f406b9c1a0796f40a86eb1194990cdd2c955f4d/pox/pox/openflow/topology.py#L449-L460 |
|
krintoxi/NoobSec-Toolkit | 38738541cbc03cedb9a3b3ed13b629f781ad64f6 | NoobSecToolkit /scripts/sshbackdoors/backdoors/shell/pupy/pupy/modules/search.py | python | SearchModule.init_argparse | (self) | [] | def init_argparse(self):
self.arg_parser = PupyArgumentParser(prog="search", description=self.__doc__)
self.arg_parser.add_argument('path', help='path')
self.arg_parser.add_argument('-e','--extensions',metavar='ext1,ext2,...', help='limit to some extensions')
self.arg_parser.add_argument('strings', nargs='+', metavar='string', help='strings to search')
self.arg_parser.add_argument('-m','--max-size', type=int, default=None, help='max file size') | [
"def",
"init_argparse",
"(",
"self",
")",
":",
"self",
".",
"arg_parser",
"=",
"PupyArgumentParser",
"(",
"prog",
"=",
"\"search\"",
",",
"description",
"=",
"self",
".",
"__doc__",
")",
"self",
".",
"arg_parser",
".",
"add_argument",
"(",
"'path'",
",",
"help",
"=",
"'path'",
")",
"self",
".",
"arg_parser",
".",
"add_argument",
"(",
"'-e'",
",",
"'--extensions'",
",",
"metavar",
"=",
"'ext1,ext2,...'",
",",
"help",
"=",
"'limit to some extensions'",
")",
"self",
".",
"arg_parser",
".",
"add_argument",
"(",
"'strings'",
",",
"nargs",
"=",
"'+'",
",",
"metavar",
"=",
"'string'",
",",
"help",
"=",
"'strings to search'",
")",
"self",
".",
"arg_parser",
".",
"add_argument",
"(",
"'-m'",
",",
"'--max-size'",
",",
"type",
"=",
"int",
",",
"default",
"=",
"None",
",",
"help",
"=",
"'max file size'",
")"
] | https://github.com/krintoxi/NoobSec-Toolkit/blob/38738541cbc03cedb9a3b3ed13b629f781ad64f6/NoobSecToolkit /scripts/sshbackdoors/backdoors/shell/pupy/pupy/modules/search.py#L9-L14 |
||||
openshift/openshift-tools | 1188778e728a6e4781acf728123e5b356380fe6f | openshift/installer/vendored/openshift-ansible-3.9.40/roles/lib_openshift/library/oc_user.py | python | OCUser.group_update | (self) | return rval | update group membership | update group membership | [
"update",
"group",
"membership"
] | def group_update(self):
''' update group membership '''
rval = {'returncode': 0}
cmd = ['get', 'groups', '-o', 'json']
all_groups = self.openshift_cmd(cmd, output=True)
# pylint misidentifying all_groups['results']['items'] type
# pylint: disable=invalid-sequence-index
for group in all_groups['results']['items']:
# If we're supposed to be in this group
if group['metadata']['name'] in self.groups \
and (group['users'] is None or self.config.username not in group['users']):
cmd = ['groups', 'add-users', group['metadata']['name'],
self.config.username]
rval = self.openshift_cmd(cmd, oadm=True)
if rval['returncode'] != 0:
return rval
# else if we're in the group, but aren't supposed to be
elif group['users'] != None and self.config.username in group['users'] \
and group['metadata']['name'] not in self.groups:
cmd = ['groups', 'remove-users', group['metadata']['name'],
self.config.username]
rval = self.openshift_cmd(cmd, oadm=True)
if rval['returncode'] != 0:
return rval
return rval | [
"def",
"group_update",
"(",
"self",
")",
":",
"rval",
"=",
"{",
"'returncode'",
":",
"0",
"}",
"cmd",
"=",
"[",
"'get'",
",",
"'groups'",
",",
"'-o'",
",",
"'json'",
"]",
"all_groups",
"=",
"self",
".",
"openshift_cmd",
"(",
"cmd",
",",
"output",
"=",
"True",
")",
"# pylint misindentifying all_groups['results']['items'] type",
"# pylint: disable=invalid-sequence-index",
"for",
"group",
"in",
"all_groups",
"[",
"'results'",
"]",
"[",
"'items'",
"]",
":",
"# If we're supposed to be in this group",
"if",
"group",
"[",
"'metadata'",
"]",
"[",
"'name'",
"]",
"in",
"self",
".",
"groups",
"and",
"(",
"group",
"[",
"'users'",
"]",
"is",
"None",
"or",
"self",
".",
"config",
".",
"username",
"not",
"in",
"group",
"[",
"'users'",
"]",
")",
":",
"cmd",
"=",
"[",
"'groups'",
",",
"'add-users'",
",",
"group",
"[",
"'metadata'",
"]",
"[",
"'name'",
"]",
",",
"self",
".",
"config",
".",
"username",
"]",
"rval",
"=",
"self",
".",
"openshift_cmd",
"(",
"cmd",
",",
"oadm",
"=",
"True",
")",
"if",
"rval",
"[",
"'returncode'",
"]",
"!=",
"0",
":",
"return",
"rval",
"# else if we're in the group, but aren't supposed to be",
"elif",
"group",
"[",
"'users'",
"]",
"!=",
"None",
"and",
"self",
".",
"config",
".",
"username",
"in",
"group",
"[",
"'users'",
"]",
"and",
"group",
"[",
"'metadata'",
"]",
"[",
"'name'",
"]",
"not",
"in",
"self",
".",
"groups",
":",
"cmd",
"=",
"[",
"'groups'",
",",
"'remove-users'",
",",
"group",
"[",
"'metadata'",
"]",
"[",
"'name'",
"]",
",",
"self",
".",
"config",
".",
"username",
"]",
"rval",
"=",
"self",
".",
"openshift_cmd",
"(",
"cmd",
",",
"oadm",
"=",
"True",
")",
"if",
"rval",
"[",
"'returncode'",
"]",
"!=",
"0",
":",
"return",
"rval",
"return",
"rval"
] | https://github.com/openshift/openshift-tools/blob/1188778e728a6e4781acf728123e5b356380fe6f/openshift/installer/vendored/openshift-ansible-3.9.40/roles/lib_openshift/library/oc_user.py#L1629-L1655 |
|
mrkipling/maraschino | c6be9286937783ae01df2d6d8cebfc8b2734a7d7 | lib/werkzeug/debug/__init__.py | python | DebuggedApplication.get_resource | (self, request, filename) | return Response('Not Found', status=404) | Return a static resource from the shared folder. | Return a static resource from the shared folder. | [
"Return",
"a",
"static",
"resource",
"from",
"the",
"shared",
"folder",
"."
] | def get_resource(self, request, filename):
"""Return a static resource from the shared folder."""
filename = join(dirname(__file__), 'shared', basename(filename))
if isfile(filename):
mimetype = mimetypes.guess_type(filename)[0] \
or 'application/octet-stream'
f = file(filename, 'rb')
try:
return Response(f.read(), mimetype=mimetype)
finally:
f.close()
return Response('Not Found', status=404) | [
"def",
"get_resource",
"(",
"self",
",",
"request",
",",
"filename",
")",
":",
"filename",
"=",
"join",
"(",
"dirname",
"(",
"__file__",
")",
",",
"'shared'",
",",
"basename",
"(",
"filename",
")",
")",
"if",
"isfile",
"(",
"filename",
")",
":",
"mimetype",
"=",
"mimetypes",
".",
"guess_type",
"(",
"filename",
")",
"[",
"0",
"]",
"or",
"'application/octet-stream'",
"f",
"=",
"file",
"(",
"filename",
",",
"'rb'",
")",
"try",
":",
"return",
"Response",
"(",
"f",
".",
"read",
"(",
")",
",",
"mimetype",
"=",
"mimetype",
")",
"finally",
":",
"f",
".",
"close",
"(",
")",
"return",
"Response",
"(",
"'Not Found'",
",",
"status",
"=",
"404",
")"
] | https://github.com/mrkipling/maraschino/blob/c6be9286937783ae01df2d6d8cebfc8b2734a7d7/lib/werkzeug/debug/__init__.py#L146-L157 |
|
ntoll/drogulus | d74b78d0bf0220b91f075dbd3f9a06c2663b474e | drogulus/dht/validators.py | python | validate_timestamp | (val) | return (isinstance(val, float) and val >= 0.0) | Returns a boolean indication that a field is a valid timestamp - a
floating point number representing the time in seconds since the Epoch (so
called POSIX time, see https://en.wikipedia.org/wiki/Unix_time). | Returns a boolean indication that a field is a valid timestamp - a
floating point number representing the time in seconds since the Epoch (so
called POSIX time, see https://en.wikipedia.org/wiki/Unix_time). | [
"Returns",
"a",
"boolean",
"indication",
"that",
"a",
"field",
"is",
"a",
"valid",
"timestamp",
"-",
"a",
"floating",
"point",
"number",
"representing",
"the",
"time",
"in",
"seconds",
"since",
"the",
"Epoch",
"(",
"so",
"called",
"POSIX",
"time",
"see",
"https",
":",
"//",
"en",
".",
"wikipedia",
".",
"org",
"/",
"wiki",
"/",
"Unix_time",
")",
"."
] | def validate_timestamp(val):
"""
Returns a boolean indication that a field is a valid timestamp - a
floating point number representing the time in seconds since the Epoch (so
called POSIX time, see https://en.wikipedia.org/wiki/Unix_time).
"""
return (isinstance(val, float) and val >= 0.0) | [
"def",
"validate_timestamp",
"(",
"val",
")",
":",
"return",
"(",
"isinstance",
"(",
"val",
",",
"float",
")",
"and",
"val",
">=",
"0.0",
")"
] | https://github.com/ntoll/drogulus/blob/d74b78d0bf0220b91f075dbd3f9a06c2663b474e/drogulus/dht/validators.py#L8-L14 |
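Quick checks for the `validate_timestamp` record above; the strict `isinstance(val, float)` test means integer seconds are rejected.

```python
import time
from drogulus.dht.validators import validate_timestamp  # path taken from the record

print(validate_timestamp(time.time()))  # True: non-negative float
print(validate_timestamp(-1.0))         # False: negative
print(validate_timestamp(12345))        # False: int, not float
```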
|
scikit-learn/scikit-learn | 1d1aadd0711b87d2a11c80aad15df6f8cf156712 | sklearn/tree/_classes.py | python | DecisionTreeRegressor.fit | (self, X, y, sample_weight=None, check_input=True) | return self | Build a decision tree regressor from the training set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csc_matrix``.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (real numbers). Use ``dtype=np.float64`` and
``order='C'`` for maximum efficiency.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits
that would create child nodes with net zero or negative weight are
ignored while searching for a split in each node.
check_input : bool, default=True
Allow to bypass several input checking.
Don't use this parameter unless you know what you do.
Returns
-------
self : DecisionTreeRegressor
Fitted estimator. | Build a decision tree regressor from the training set (X, y). | [
"Build",
"a",
"decision",
"tree",
"regressor",
"from",
"the",
"training",
"set",
"(",
"X",
"y",
")",
"."
] | def fit(self, X, y, sample_weight=None, check_input=True):
"""Build a decision tree regressor from the training set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csc_matrix``.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (real numbers). Use ``dtype=np.float64`` and
``order='C'`` for maximum efficiency.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits
that would create child nodes with net zero or negative weight are
ignored while searching for a split in each node.
check_input : bool, default=True
Allow to bypass several input checking.
Don't use this parameter unless you know what you do.
Returns
-------
self : DecisionTreeRegressor
Fitted estimator.
"""
super().fit(
X,
y,
sample_weight=sample_weight,
check_input=check_input,
)
return self | [
"def",
"fit",
"(",
"self",
",",
"X",
",",
"y",
",",
"sample_weight",
"=",
"None",
",",
"check_input",
"=",
"True",
")",
":",
"super",
"(",
")",
".",
"fit",
"(",
"X",
",",
"y",
",",
"sample_weight",
"=",
"sample_weight",
",",
"check_input",
"=",
"check_input",
",",
")",
"return",
"self"
] | https://github.com/scikit-learn/scikit-learn/blob/1d1aadd0711b87d2a11c80aad15df6f8cf156712/sklearn/tree/_classes.py#L1257-L1292 |
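A minimal fit/predict sketch for the `DecisionTreeRegressor.fit` record above, using a dataset bundled with scikit-learn:

```python
from sklearn.datasets import load_diabetes
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
reg = DecisionTreeRegressor(max_depth=3, random_state=0)
reg.fit(X, y)                # returns the fitted estimator itself
print(reg.predict(X[:3]))    # predictions for the first three samples
```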
|
TencentCloud/tencentcloud-sdk-python | 3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2 | tencentcloud/cwp/v20180228/models.py | python | SecurityTrend.__init__ | (self) | r"""
:param Date: Event time.
:type Date: str
:param EventNum: Number of events.
:type EventNum: int | r"""
:param Date: Event time.
:type Date: str
:param EventNum: Number of events.
:type EventNum: int | [
"r",
":",
"param",
"Date",
":",
"事件时间。",
":",
"type",
"Date",
":",
"str",
":",
"param",
"EventNum",
":",
"事件数量。",
":",
"type",
"EventNum",
":",
"int"
] | def __init__(self):
r"""
:param Date: Event time.
:type Date: str
:param EventNum: Number of events.
:type EventNum: int
"""
self.Date = None
self.EventNum = None | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"Date",
"=",
"None",
"self",
".",
"EventNum",
"=",
"None"
] | https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/cwp/v20180228/models.py#L18499-L18507 |
||
OCA/l10n-brazil | 6faefc04c7b0de3de3810a7ab137493d933fb579 | l10n_br_fiscal/models/tax.py | python | Tax.compute_taxes | (self, **kwargs) | return result_amounts | arguments:
company,
partner,
product,
price_unit,
quantity,
uom_id,
fiscal_price,
fiscal_quantity,
uot_id,
discount_value,
insurance_value,
other_value,
freight_value,
ncm,
nbs,
nbm,
cest,
operation_line,
icmssn_range,
icms_origin,
return
{
'amount_included': float
'amount_not_included': float
'amount_withholding': float
'taxes': dict
} | arguments:
company,
partner,
product,
price_unit,
quantity,
uom_id,
fiscal_price,
fiscal_quantity,
uot_id,
discount_value,
insurance_value,
other_value,
freight_value,
ncm,
nbs,
nbm,
cest,
operation_line,
icmssn_range,
icms_origin,
return
{
'amount_included': float
'amount_not_included': float
'amount_withholding': float
'taxes': dict
} | [
"arguments",
":",
"company",
"partner",
"product",
"price_unit",
"quantity",
"uom_id",
"fiscal_price",
"fiscal_quantity",
"uot_id",
"discount_value",
"insurance_value",
"other_value",
"freight_value",
"ncm",
"nbs",
"nbm",
"cest",
"operation_line",
"icmssn_range",
"icms_origin",
"return",
"{",
"amount_included",
":",
"float",
"amount_not_included",
":",
"float",
"amount_withholding",
":",
"float",
"taxes",
":",
"dict",
"}"
] | def compute_taxes(self, **kwargs):
"""
arguments:
company,
partner,
product,
price_unit,
quantity,
uom_id,
fiscal_price,
fiscal_quantity,
uot_id,
discount_value,
insurance_value,
other_value,
freight_value,
ncm,
nbs,
nbm,
cest,
operation_line,
icmssn_range,
icms_origin,
return
{
'amount_included': float
'amount_not_included': float
'amount_withholding': float
'taxes': dict
}
"""
result_amounts = {
"amount_included": 0.00,
"amount_not_included": 0.00,
"amount_withholding": 0.00,
"estimate_tax": 0.00,
"taxes": {},
}
taxes = {}
for tax in self.sorted(key=lambda t: t.compute_sequence):
taxes[tax.tax_domain] = dict(TAX_DICT_VALUES)
try:
# Define CST FROM TAX
operation_line = kwargs.get("operation_line")
fiscal_operation_type = (
operation_line.fiscal_operation_type or FISCAL_OUT
)
kwargs.update({"cst": tax.cst_from_tax(fiscal_operation_type)})
compute_method = getattr(self, "_compute_%s" % tax.tax_domain)
taxes[tax.tax_domain].update(compute_method(tax, taxes, **kwargs))
if taxes[tax.tax_domain]["tax_include"]:
result_amounts["amount_included"] += taxes[tax.tax_domain].get(
"tax_value", 0.00
)
else:
result_amounts["amount_not_included"] += taxes[tax.tax_domain].get(
"tax_value", 0.00
)
if taxes[tax.tax_domain]["tax_withholding"]:
result_amounts["amount_withholding"] += taxes[tax.tax_domain].get(
"tax_value", 0.00
)
except AttributeError:
taxes[tax.tax_domain].update(tax._compute_generic(tax, taxes, **kwargs))
# In case the tax-specific fields do not exist
# in the fiscal document, they are computed.
continue
# Estimate taxes
result_amounts["estimate_tax"] = self._compute_estimate_taxes(**kwargs)
result_amounts["taxes"] = taxes
return result_amounts | [
"def",
"compute_taxes",
"(",
"self",
",",
"*",
"*",
"kwargs",
")",
":",
"result_amounts",
"=",
"{",
"\"amount_included\"",
":",
"0.00",
",",
"\"amount_not_included\"",
":",
"0.00",
",",
"\"amount_withholding\"",
":",
"0.00",
",",
"\"estimate_tax\"",
":",
"0.00",
",",
"\"taxes\"",
":",
"{",
"}",
",",
"}",
"taxes",
"=",
"{",
"}",
"for",
"tax",
"in",
"self",
".",
"sorted",
"(",
"key",
"=",
"lambda",
"t",
":",
"t",
".",
"compute_sequence",
")",
":",
"taxes",
"[",
"tax",
".",
"tax_domain",
"]",
"=",
"dict",
"(",
"TAX_DICT_VALUES",
")",
"try",
":",
"# Define CST FROM TAX",
"operation_line",
"=",
"kwargs",
".",
"get",
"(",
"\"operation_line\"",
")",
"fiscal_operation_type",
"=",
"(",
"operation_line",
".",
"fiscal_operation_type",
"or",
"FISCAL_OUT",
")",
"kwargs",
".",
"update",
"(",
"{",
"\"cst\"",
":",
"tax",
".",
"cst_from_tax",
"(",
"fiscal_operation_type",
")",
"}",
")",
"compute_method",
"=",
"getattr",
"(",
"self",
",",
"\"_compute_%s\"",
"%",
"tax",
".",
"tax_domain",
")",
"taxes",
"[",
"tax",
".",
"tax_domain",
"]",
".",
"update",
"(",
"compute_method",
"(",
"tax",
",",
"taxes",
",",
"*",
"*",
"kwargs",
")",
")",
"if",
"taxes",
"[",
"tax",
".",
"tax_domain",
"]",
"[",
"\"tax_include\"",
"]",
":",
"result_amounts",
"[",
"\"amount_included\"",
"]",
"+=",
"taxes",
"[",
"tax",
".",
"tax_domain",
"]",
".",
"get",
"(",
"\"tax_value\"",
",",
"0.00",
")",
"else",
":",
"result_amounts",
"[",
"\"amount_not_included\"",
"]",
"+=",
"taxes",
"[",
"tax",
".",
"tax_domain",
"]",
".",
"get",
"(",
"\"tax_value\"",
",",
"0.00",
")",
"if",
"taxes",
"[",
"tax",
".",
"tax_domain",
"]",
"[",
"\"tax_withholding\"",
"]",
":",
"result_amounts",
"[",
"\"amount_withholding\"",
"]",
"+=",
"taxes",
"[",
"tax",
".",
"tax_domain",
"]",
".",
"get",
"(",
"\"tax_value\"",
",",
"0.00",
")",
"except",
"AttributeError",
":",
"taxes",
"[",
"tax",
".",
"tax_domain",
"]",
".",
"update",
"(",
"tax",
".",
"_compute_generic",
"(",
"tax",
",",
"taxes",
",",
"*",
"*",
"kwargs",
")",
")",
"# Caso não exista campos especificos dos impostos",
"# no documento fiscal, os mesmos são calculados.",
"continue",
"# Estimate taxes",
"result_amounts",
"[",
"\"estimate_tax\"",
"]",
"=",
"self",
".",
"_compute_estimate_taxes",
"(",
"*",
"*",
"kwargs",
")",
"result_amounts",
"[",
"\"taxes\"",
"]",
"=",
"taxes",
"return",
"result_amounts"
] | https://github.com/OCA/l10n-brazil/blob/6faefc04c7b0de3de3810a7ab137493d933fb579/l10n_br_fiscal/models/tax.py#L658-L733 |
|
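The per-domain dispatch inside compute_taxes (look up a _compute_<tax_domain> method by name, fall back to the generic computation on AttributeError) can be sketched outside Odoo; the class, rates, and domain names below are hypothetical:

    class TaxEngine:
        def _compute_generic(self, base):
            return base * 0.10  # hypothetical fallback rate

        def _compute_icms(self, base):
            return base * 0.18  # hypothetical domain-specific rule

        def compute(self, domain, base):
            try:
                compute_method = getattr(self, "_compute_%s" % domain)
            except AttributeError:
                compute_method = self._compute_generic
            return compute_method(base)

    engine = TaxEngine()
    print(engine.compute("icms", 100.0))  # 18.0 via _compute_icms
    print(engine.compute("ipi", 100.0))   # 10.0 via the generic fallback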
YutingZhang/lmdis-rep | e1b210477fec4010e4e47112a48e6239f2dc73c1 | net_modules/spatial_transformer.py | python | transformer | (U, theta, out_size, name='SpatialTransformer', **kwargs) | Spatial Transformer Layer
Implements a spatial transformer layer as described in [1]_.
Based on [2]_ and edited by David Dao for Tensorflow.
Parameters
----------
U : float
The output of a convolutional net should have the
shape [num_batch, height, width, num_channels].
theta: float
The output of the
localisation network should be [num_batch, 6].
out_size: tuple of two ints
The size of the output of the network (height, width)
References
----------
.. [1] Spatial Transformer Networks
Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu
Submitted on 5 Jun 2015
.. [2] https://github.com/skaae/transformer_network/blob/master/transformerlayer.py
Notes
-----
To initialize the network to the identity transform init
``theta`` to :
identity = np.array([[1., 0., 0.],
[0., 1., 0.]])
identity = identity.flatten()
theta = tf.Variable(initial_value=identity) | Spatial Transformer Layer | [
"Spatial",
"Transformer",
"Layer"
] | def transformer(U, theta, out_size, name='SpatialTransformer', **kwargs):
"""Spatial Transformer Layer
Implements a spatial transformer layer as described in [1]_.
Based on [2]_ and edited by David Dao for Tensorflow.
Parameters
----------
U : float
The output of a convolutional net should have the
shape [num_batch, height, width, num_channels].
theta: float
The output of the
localisation network should be [num_batch, 6].
out_size: tuple of two ints
The size of the output of the network (height, width)
References
----------
.. [1] Spatial Transformer Networks
Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu
Submitted on 5 Jun 2015
.. [2] https://github.com/skaae/transformer_network/blob/master/transformerlayer.py
Notes
-----
To initialize the network to the identity transform init
``theta`` to :
identity = np.array([[1., 0., 0.],
[0., 1., 0.]])
identity = identity.flatten()
theta = tf.Variable(initial_value=identity)
"""
with tf.variable_scope(name):
output = _transform(theta, U, out_size)
return output | [
"def",
"transformer",
"(",
"U",
",",
"theta",
",",
"out_size",
",",
"name",
"=",
"'SpatialTransformer'",
",",
"*",
"*",
"kwargs",
")",
":",
"with",
"tf",
".",
"variable_scope",
"(",
"name",
")",
":",
"output",
"=",
"_transform",
"(",
"theta",
",",
"U",
",",
"out_size",
")",
"return",
"output"
] | https://github.com/YutingZhang/lmdis-rep/blob/e1b210477fec4010e4e47112a48e6239f2dc73c1/net_modules/spatial_transformer.py#L149-L186 |
||
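Following the identity-transform note in the docstring, a hedged usage sketch (TensorFlow 1.x graph style, matching the tf.variable_scope call in the module; shapes are illustrative):

    import numpy as np
    import tensorflow as tf  # assumes TensorFlow 1.x

    # 'transformer' is the function defined above.
    U = tf.placeholder(tf.float32, [None, 40, 40, 3])  # input feature map
    identity = np.array([[1., 0., 0.],
                         [0., 1., 0.]], dtype=np.float32).flatten()
    # One identity transform per batch element.
    theta = tf.tile(tf.constant(identity)[None, :], [tf.shape(U)[0], 1])
    out = transformer(U, theta, out_size=(40, 40))  # resamples U unchanged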
ansible-collections/community.general | 3faffe8f47968a2400ba3c896c8901c03001a194 | plugins/modules/files/filesize.py | python | size_spec | (args) | return args['size_spec'] | Return a dictionary with size specifications, especially the size in
bytes (after rounding it to an integer number of blocks). | Return a dictionary with size specifications, especially the size in
bytes (after rounding it to an integer number of blocks). | [
"Return",
"a",
"dictionary",
"with",
"size",
"specifications",
"especially",
"the",
"size",
"in",
"bytes",
"(",
"after",
"rounding",
"it",
"to",
"an",
"integer",
"number",
"of",
"blocks",
")",
"."
] | def size_spec(args):
"""Return a dictionary with size specifications, especially the size in
bytes (after rounding it to an integer number of blocks).
"""
blocksize_in_bytes = split_size_unit(args['blocksize'], True)[2]
if blocksize_in_bytes == 0:
raise AssertionError("block size cannot be equal to zero")
size_value, size_unit, size_result = split_size_unit(args['size'])
if not size_unit:
blocks = int(math.ceil(size_value))
else:
blocksize_in_bytes = smart_blocksize(size_value, size_unit, size_result, blocksize_in_bytes)
blocks = int(math.ceil(size_result / blocksize_in_bytes))
args['size_diff'] = round_bytes = int(blocks * blocksize_in_bytes)
args['size_spec'] = dict(blocks=blocks, blocksize=blocksize_in_bytes, bytes=round_bytes,
iec=bytes_to_human(round_bytes, True),
si=bytes_to_human(round_bytes))
return args['size_spec'] | [
"def",
"size_spec",
"(",
"args",
")",
":",
"blocksize_in_bytes",
"=",
"split_size_unit",
"(",
"args",
"[",
"'blocksize'",
"]",
",",
"True",
")",
"[",
"2",
"]",
"if",
"blocksize_in_bytes",
"==",
"0",
":",
"raise",
"AssertionError",
"(",
"\"block size cannot be equal to zero\"",
")",
"size_value",
",",
"size_unit",
",",
"size_result",
"=",
"split_size_unit",
"(",
"args",
"[",
"'size'",
"]",
")",
"if",
"not",
"size_unit",
":",
"blocks",
"=",
"int",
"(",
"math",
".",
"ceil",
"(",
"size_value",
")",
")",
"else",
":",
"blocksize_in_bytes",
"=",
"smart_blocksize",
"(",
"size_value",
",",
"size_unit",
",",
"size_result",
",",
"blocksize_in_bytes",
")",
"blocks",
"=",
"int",
"(",
"math",
".",
"ceil",
"(",
"size_result",
"/",
"blocksize_in_bytes",
")",
")",
"args",
"[",
"'size_diff'",
"]",
"=",
"round_bytes",
"=",
"int",
"(",
"blocks",
"*",
"blocksize_in_bytes",
")",
"args",
"[",
"'size_spec'",
"]",
"=",
"dict",
"(",
"blocks",
"=",
"blocks",
",",
"blocksize",
"=",
"blocksize_in_bytes",
",",
"bytes",
"=",
"round_bytes",
",",
"iec",
"=",
"bytes_to_human",
"(",
"round_bytes",
",",
"True",
")",
",",
"si",
"=",
"bytes_to_human",
"(",
"round_bytes",
")",
")",
"return",
"args",
"[",
"'size_spec'",
"]"
] | https://github.com/ansible-collections/community.general/blob/3faffe8f47968a2400ba3c896c8901c03001a194/plugins/modules/files/filesize.py#L339-L358 |
|
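A hedged call sketch for size_spec; the dictionary keys match the ones the function reads, while the values and their string formats are assumptions about what split_size_unit accepts:

    args = {'size': '1M', 'blocksize': '512'}  # hypothetical module input
    spec = size_spec(args)
    # spec now holds the rounded size: spec['blocks'], spec['blocksize'],
    # spec['bytes'], plus human-readable spec['iec'] and spec['si'] strings.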
tp4a/teleport | 1fafd34f1f775d2cf80ea4af6e44468d8e0b24ad | server/www/packages/packages-darwin/x64/psutil/_pswindows.py | python | sensors_battery | () | return _common.sbattery(percent, secsleft, power_plugged) | Return battery information. | Return battery information. | [
"Return",
"battery",
"information",
"."
] | def sensors_battery():
"""Return battery information."""
# For constants meaning see:
# https://msdn.microsoft.com/en-us/library/windows/desktop/
# aa373232(v=vs.85).aspx
acline_status, flags, percent, secsleft = cext.sensors_battery()
power_plugged = acline_status == 1
no_battery = bool(flags & 128)
charging = bool(flags & 8)
if no_battery:
return None
if power_plugged or charging:
secsleft = _common.POWER_TIME_UNLIMITED
elif secsleft == -1:
secsleft = _common.POWER_TIME_UNKNOWN
return _common.sbattery(percent, secsleft, power_plugged) | [
"def",
"sensors_battery",
"(",
")",
":",
"# For constants meaning see:",
"# https://msdn.microsoft.com/en-us/library/windows/desktop/",
"# aa373232(v=vs.85).aspx",
"acline_status",
",",
"flags",
",",
"percent",
",",
"secsleft",
"=",
"cext",
".",
"sensors_battery",
"(",
")",
"power_plugged",
"=",
"acline_status",
"==",
"1",
"no_battery",
"=",
"bool",
"(",
"flags",
"&",
"128",
")",
"charging",
"=",
"bool",
"(",
"flags",
"&",
"8",
")",
"if",
"no_battery",
":",
"return",
"None",
"if",
"power_plugged",
"or",
"charging",
":",
"secsleft",
"=",
"_common",
".",
"POWER_TIME_UNLIMITED",
"elif",
"secsleft",
"==",
"-",
"1",
":",
"secsleft",
"=",
"_common",
".",
"POWER_TIME_UNKNOWN",
"return",
"_common",
".",
"sbattery",
"(",
"percent",
",",
"secsleft",
",",
"power_plugged",
")"
] | https://github.com/tp4a/teleport/blob/1fafd34f1f775d2cf80ea4af6e44468d8e0b24ad/server/www/packages/packages-darwin/x64/psutil/_pswindows.py#L444-L461 |
|
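On the user side this logic is reached through the public wrapper; a small usage sketch:

    import psutil

    batt = psutil.sensors_battery()
    if batt is None:
        print("no battery detected")  # the 'flags & 128' case above
    elif batt.secsleft == psutil.POWER_TIME_UNLIMITED:
        print("plugged in or charging")
    else:
        print("%d%% left" % batt.percent)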
xonsh/xonsh | b76d6f994f22a4078f602f8b386f4ec280c8461f | xonsh/parsers/base.py | python | BaseParser.p_expr | (self, p) | expr : xor_expr
| xor_expr pipe_xor_expr_list | expr : xor_expr
| xor_expr pipe_xor_expr_list | [
"expr",
":",
"xor_expr",
"|",
"xor_expr",
"pipe_xor_expr_list"
] | def p_expr(self, p):
"""
expr : xor_expr
| xor_expr pipe_xor_expr_list
"""
p[0] = self._binop_combine(p[1], p[2] if len(p) > 2 else None) | [
"def",
"p_expr",
"(",
"self",
",",
"p",
")",
":",
"p",
"[",
"0",
"]",
"=",
"self",
".",
"_binop_combine",
"(",
"p",
"[",
"1",
"]",
",",
"p",
"[",
"2",
"]",
"if",
"len",
"(",
"p",
")",
">",
"2",
"else",
"None",
")"
] | https://github.com/xonsh/xonsh/blob/b76d6f994f22a4078f602f8b386f4ec280c8461f/xonsh/parsers/base.py#L2008-L2013 |
||
optuna/optuna | 2c44c1a405ba059efd53f4b9c8e849d20fb95c0a | optuna/storages/_rdb/storage.py | python | RDBStorage._get_prepared_new_trial | (
self, study_id: int, template_trial: Optional[FrozenTrial], session: orm.Session
) | return trial | [] | def _get_prepared_new_trial(
self, study_id: int, template_trial: Optional[FrozenTrial], session: orm.Session
) -> models.TrialModel:
if template_trial is None:
trial = models.TrialModel(
study_id=study_id,
number=None,
state=TrialState.RUNNING,
datetime_start=datetime.now(),
)
else:
# Because only `RUNNING` trials can be updated,
# we temporarily set the state of the new trial to `RUNNING`.
# After all fields of the trial have been updated,
# the state is set to `template_trial.state`.
temp_state = TrialState.RUNNING
trial = models.TrialModel(
study_id=study_id,
number=None,
state=temp_state,
datetime_start=template_trial.datetime_start,
datetime_complete=template_trial.datetime_complete,
)
session.add(trial)
# Flush the session cache to reflect the above addition operation to
# the current RDB transaction.
#
# Without flushing, the following operations (e.g., `_set_trial_param_without_commit`)
# will fail because the target trial doesn't exist in the storage yet.
session.flush()
if template_trial is not None:
if template_trial.values is not None and len(template_trial.values) > 1:
for objective, value in enumerate(template_trial.values):
self._set_trial_value_without_commit(session, trial.trial_id, objective, value)
elif template_trial.value is not None:
self._set_trial_value_without_commit(
session, trial.trial_id, 0, template_trial.value
)
for param_name, param_value in template_trial.params.items():
distribution = template_trial.distributions[param_name]
param_value_in_internal_repr = distribution.to_internal_repr(param_value)
self._set_trial_param_without_commit(
session, trial.trial_id, param_name, param_value_in_internal_repr, distribution
)
for key, value in template_trial.user_attrs.items():
self._set_trial_user_attr_without_commit(session, trial.trial_id, key, value)
for key, value in template_trial.system_attrs.items():
self._set_trial_system_attr_without_commit(session, trial.trial_id, key, value)
for step, intermediate_value in template_trial.intermediate_values.items():
self._set_trial_intermediate_value_without_commit(
session, trial.trial_id, step, intermediate_value
)
trial.state = template_trial.state
trial.number = trial.count_past_trials(session)
session.add(trial)
return trial | [
"def",
"_get_prepared_new_trial",
"(",
"self",
",",
"study_id",
":",
"int",
",",
"template_trial",
":",
"Optional",
"[",
"FrozenTrial",
"]",
",",
"session",
":",
"orm",
".",
"Session",
")",
"->",
"models",
".",
"TrialModel",
":",
"if",
"template_trial",
"is",
"None",
":",
"trial",
"=",
"models",
".",
"TrialModel",
"(",
"study_id",
"=",
"study_id",
",",
"number",
"=",
"None",
",",
"state",
"=",
"TrialState",
".",
"RUNNING",
",",
"datetime_start",
"=",
"datetime",
".",
"now",
"(",
")",
",",
")",
"else",
":",
"# Because only `RUNNING` trials can be updated,",
"# we temporarily set the state of the new trial to `RUNNING`.",
"# After all fields of the trial have been updated,",
"# the state is set to `template_trial.state`.",
"temp_state",
"=",
"TrialState",
".",
"RUNNING",
"trial",
"=",
"models",
".",
"TrialModel",
"(",
"study_id",
"=",
"study_id",
",",
"number",
"=",
"None",
",",
"state",
"=",
"temp_state",
",",
"datetime_start",
"=",
"template_trial",
".",
"datetime_start",
",",
"datetime_complete",
"=",
"template_trial",
".",
"datetime_complete",
",",
")",
"session",
".",
"add",
"(",
"trial",
")",
"# Flush the session cache to reflect the above addition operation to",
"# the current RDB transaction.",
"#",
"# Without flushing, the following operations (e.g, `_set_trial_param_without_commit`)",
"# will fail because the target trial doesn't exist in the storage yet.",
"session",
".",
"flush",
"(",
")",
"if",
"template_trial",
"is",
"not",
"None",
":",
"if",
"template_trial",
".",
"values",
"is",
"not",
"None",
"and",
"len",
"(",
"template_trial",
".",
"values",
")",
">",
"1",
":",
"for",
"objective",
",",
"value",
"in",
"enumerate",
"(",
"template_trial",
".",
"values",
")",
":",
"self",
".",
"_set_trial_value_without_commit",
"(",
"session",
",",
"trial",
".",
"trial_id",
",",
"objective",
",",
"value",
")",
"elif",
"template_trial",
".",
"value",
"is",
"not",
"None",
":",
"self",
".",
"_set_trial_value_without_commit",
"(",
"session",
",",
"trial",
".",
"trial_id",
",",
"0",
",",
"template_trial",
".",
"value",
")",
"for",
"param_name",
",",
"param_value",
"in",
"template_trial",
".",
"params",
".",
"items",
"(",
")",
":",
"distribution",
"=",
"template_trial",
".",
"distributions",
"[",
"param_name",
"]",
"param_value_in_internal_repr",
"=",
"distribution",
".",
"to_internal_repr",
"(",
"param_value",
")",
"self",
".",
"_set_trial_param_without_commit",
"(",
"session",
",",
"trial",
".",
"trial_id",
",",
"param_name",
",",
"param_value_in_internal_repr",
",",
"distribution",
")",
"for",
"key",
",",
"value",
"in",
"template_trial",
".",
"user_attrs",
".",
"items",
"(",
")",
":",
"self",
".",
"_set_trial_user_attr_without_commit",
"(",
"session",
",",
"trial",
".",
"trial_id",
",",
"key",
",",
"value",
")",
"for",
"key",
",",
"value",
"in",
"template_trial",
".",
"system_attrs",
".",
"items",
"(",
")",
":",
"self",
".",
"_set_trial_system_attr_without_commit",
"(",
"session",
",",
"trial",
".",
"trial_id",
",",
"key",
",",
"value",
")",
"for",
"step",
",",
"intermediate_value",
"in",
"template_trial",
".",
"intermediate_values",
".",
"items",
"(",
")",
":",
"self",
".",
"_set_trial_intermediate_value_without_commit",
"(",
"session",
",",
"trial",
".",
"trial_id",
",",
"step",
",",
"intermediate_value",
")",
"trial",
".",
"state",
"=",
"template_trial",
".",
"state",
"trial",
".",
"number",
"=",
"trial",
".",
"count_past_trials",
"(",
"session",
")",
"session",
".",
"add",
"(",
"trial",
")",
"return",
"trial"
] | https://github.com/optuna/optuna/blob/2c44c1a405ba059efd53f4b9c8e849d20fb95c0a/optuna/storages/_rdb/storage.py#L565-L631 |
|||
mu-editor/mu | 5a5d7723405db588f67718a63a0ec0ecabebae33 | mu/modes/base.py | python | REPLConnection._on_serial_read | (self) | | Called when data is ready to be sent from the device | Called when data is ready to be sent from the device | [
"Called",
"when",
"data",
"is",
"ready",
"to",
"be",
"send",
"from",
"the",
"device"
] | def _on_serial_read(self):
"""
Called when data is ready to be sent from the device
"""
data = bytes(self.serial.readAll())
self.data_received.emit(data) | [
"def",
"_on_serial_read",
"(",
"self",
")",
":",
"data",
"=",
"bytes",
"(",
"self",
".",
"serial",
".",
"readAll",
"(",
")",
")",
"self",
".",
"data_received",
".",
"emit",
"(",
"data",
")"
] | https://github.com/mu-editor/mu/blob/5a5d7723405db588f67718a63a0ec0ecabebae33/mu/modes/base.py#L141-L146 |
||
610265158/face_landmark | cae5e3a4434c2d76974bc4dec28e7ead74feae76 | lib/core/base_trainer/net_work.py | python | Train.train_step | (self, inputs) | return loss | One train step.
Args:
inputs: one batch input.
Returns:
loss: Scaled loss. | One train step.
Args:
inputs: one batch input.
Returns:
loss: Scaled loss. | [
"One",
"train",
"step",
".",
"Args",
":",
"inputs",
":",
"one",
"batch",
"input",
".",
"Returns",
":",
"loss",
":",
"Scaled",
"loss",
"."
] | def train_step(self, inputs):
"""One train step.
Args:
inputs: one batch input.
Returns:
loss: Scaled loss.
"""
image, label = inputs
with tf.GradientTape() as tape:
predictions = self.model(image, training=True)
loss = self.compute_loss(predictions,label,training=True)
gradients = tape.gradient(loss, self.model.trainable_variables)
gradients = [(tf.clip_by_value(grad, -5.0, 5.0))
for grad in gradients]
self.optimizer.apply_gradients(zip(gradients,
self.model.trainable_variables))
return loss | [
"def",
"train_step",
"(",
"self",
",",
"inputs",
")",
":",
"image",
",",
"label",
"=",
"inputs",
"with",
"tf",
".",
"GradientTape",
"(",
")",
"as",
"tape",
":",
"predictions",
"=",
"self",
".",
"model",
"(",
"image",
",",
"training",
"=",
"True",
")",
"loss",
"=",
"self",
".",
"compute_loss",
"(",
"predictions",
",",
"label",
",",
"training",
"=",
"True",
")",
"gradients",
"=",
"tape",
".",
"gradient",
"(",
"loss",
",",
"self",
".",
"model",
".",
"trainable_variables",
")",
"gradients",
"=",
"[",
"(",
"tf",
".",
"clip_by_value",
"(",
"grad",
",",
"-",
"5.0",
",",
"5.0",
")",
")",
"for",
"grad",
"in",
"gradients",
"]",
"self",
".",
"optimizer",
".",
"apply_gradients",
"(",
"zip",
"(",
"gradients",
",",
"self",
".",
"model",
".",
"trainable_variables",
")",
")",
"return",
"loss"
] | https://github.com/610265158/face_landmark/blob/cae5e3a4434c2d76974bc4dec28e7ead74feae76/lib/core/base_trainer/net_work.py#L96-L116 |
|
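The clip-then-apply pattern used by train_step, as a self-contained TensorFlow 2 sketch; the model, loss, and data below are placeholders for illustration:

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.Adam(1e-3)
    loss_fn = tf.keras.losses.MeanSquaredError()
    x, y = tf.random.normal([8, 4]), tf.random.normal([8, 1])

    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    grads = [tf.clip_by_value(g, -5.0, 5.0) for g in grads]  # same clipping as above
    optimizer.apply_gradients(zip(grads, model.trainable_variables))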
chribsen/simple-machine-learning-examples | dc94e52a4cebdc8bb959ff88b81ff8cfeca25022 | venv/lib/python2.7/site-packages/sklearn/model_selection/_split.py | python | check_cv | (cv=3, y=None, classifier=False) | return cv | Input checker utility for building a cross-validator
Parameters
----------
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if classifier is True and ``y`` is either
binary or multiclass, :class:`StratifiedKFold` is used. In all other
cases, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
y : array-like, optional
The target variable for supervised learning problems.
classifier : boolean, optional, default False
Whether the task is a classification task, in which case
stratified KFold will be used.
Returns
-------
checked_cv : a cross-validator instance.
The return value is a cross-validator which generates the train/test
splits via the ``split`` method. | Input checker utility for building a cross-validator | [
"Input",
"checker",
"utility",
"for",
"building",
"a",
"cross",
"-",
"validator"
] | def check_cv(cv=3, y=None, classifier=False):
"""Input checker utility for building a cross-validator
Parameters
----------
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if classifier is True and ``y`` is either
binary or multiclass, :class:`StratifiedKFold` is used. In all other
cases, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
y : array-like, optional
The target variable for supervised learning problems.
classifier : boolean, optional, default False
Whether the task is a classification task, in which case
stratified KFold will be used.
Returns
-------
checked_cv : a cross-validator instance.
The return value is a cross-validator which generates the train/test
splits via the ``split`` method.
"""
if cv is None:
cv = 3
if isinstance(cv, numbers.Integral):
if (classifier and (y is not None) and
(type_of_target(y) in ('binary', 'multiclass'))):
return StratifiedKFold(cv)
else:
return KFold(cv)
if not hasattr(cv, 'split') or isinstance(cv, str):
if not isinstance(cv, Iterable) or isinstance(cv, str):
raise ValueError("Expected cv as an integer, cross-validation "
"object (from sklearn.model_selection) "
"or an iterable. Got %s." % cv)
return _CVIterableWrapper(cv)
return cv | [
"def",
"check_cv",
"(",
"cv",
"=",
"3",
",",
"y",
"=",
"None",
",",
"classifier",
"=",
"False",
")",
":",
"if",
"cv",
"is",
"None",
":",
"cv",
"=",
"3",
"if",
"isinstance",
"(",
"cv",
",",
"numbers",
".",
"Integral",
")",
":",
"if",
"(",
"classifier",
"and",
"(",
"y",
"is",
"not",
"None",
")",
"and",
"(",
"type_of_target",
"(",
"y",
")",
"in",
"(",
"'binary'",
",",
"'multiclass'",
")",
")",
")",
":",
"return",
"StratifiedKFold",
"(",
"cv",
")",
"else",
":",
"return",
"KFold",
"(",
"cv",
")",
"if",
"not",
"hasattr",
"(",
"cv",
",",
"'split'",
")",
"or",
"isinstance",
"(",
"cv",
",",
"str",
")",
":",
"if",
"not",
"isinstance",
"(",
"cv",
",",
"Iterable",
")",
"or",
"isinstance",
"(",
"cv",
",",
"str",
")",
":",
"raise",
"ValueError",
"(",
"\"Expected cv as an integer, cross-validation \"",
"\"object (from sklearn.model_selection) \"",
"\"or an iterable. Got %s.\"",
"%",
"cv",
")",
"return",
"_CVIterableWrapper",
"(",
"cv",
")",
"return",
"cv"
] | https://github.com/chribsen/simple-machine-learning-examples/blob/dc94e52a4cebdc8bb959ff88b81ff8cfeca25022/venv/lib/python2.7/site-packages/sklearn/model_selection/_split.py#L1546-L1596 |
|
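A usage sketch showing the integer-to-splitter promotion described in the docstring:

    import numpy as np
    from sklearn.model_selection import check_cv

    y = np.array([0, 1, 0, 1, 0, 1])
    cv = check_cv(3, y, classifier=True)  # binary y -> StratifiedKFold(3)
    for train_idx, test_idx in cv.split(np.zeros((6, 1)), y):
        print(train_idx, test_idx)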
blawar/nut | 2cf351400418399a70164987e28670309f6c9cb5 | Fs/File.py | python | BaseFile.write | (self, value, size=None) | return self.f.write(value) | [] | def write(self, value, size=None):
if size is not None:
value = value + '\x00' * (size - len(value))
#Print.info('writing to ' + hex(self.f.tell()) + ' ' + self.f.__class__.__name__)
# Hex.dump(value)
return self.f.write(value) | [
"def",
"write",
"(",
"self",
",",
"value",
",",
"size",
"=",
"None",
")",
":",
"if",
"size",
"is",
"not",
"None",
":",
"value",
"=",
"value",
"+",
"'\\0x00'",
"*",
"(",
"size",
"-",
"len",
"(",
"value",
")",
")",
"#Print.info('writing to ' + hex(self.f.tell()) + ' ' + self.f.__class__.__name__)",
"# Hex.dump(value)",
"return",
"self",
".",
"f",
".",
"write",
"(",
"value",
")"
] | https://github.com/blawar/nut/blob/2cf351400418399a70164987e28670309f6c9cb5/Fs/File.py#L116-L121 |
|||
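A quick check of the NUL padding performed by write (note that '\x00' is a single NUL byte, while the similar-looking '\0x00' literal is four characters and would overshoot the target size):

    value = 'abc'
    padded = value + '\x00' * (8 - len(value))
    assert len(padded) == 8 and padded.endswith('\x00' * 5)
    assert len('\0x00') == 4  # the typo form is chr(0) + 'x' + '0' + '0'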
caiiiac/Machine-Learning-with-Python | 1a26c4467da41ca4ebc3d5bd789ea942ef79422f | MachineLearning/venv/lib/python3.5/site-packages/pip/_vendor/distlib/_backport/sysconfig.py | python | get_scheme_names | () | return tuple(sorted(_SCHEMES.sections())) | Return a tuple containing the schemes names. | Return a tuple containing the schemes names. | [
"Return",
"a",
"tuple",
"containing",
"the",
"schemes",
"names",
"."
] | def get_scheme_names():
"""Return a tuple containing the schemes names."""
return tuple(sorted(_SCHEMES.sections())) | [
"def",
"get_scheme_names",
"(",
")",
":",
"return",
"tuple",
"(",
"sorted",
"(",
"_SCHEMES",
".",
"sections",
"(",
")",
")",
")"
] | https://github.com/caiiiac/Machine-Learning-with-Python/blob/1a26c4467da41ca4ebc3d5bd789ea942ef79422f/MachineLearning/venv/lib/python3.5/site-packages/pip/_vendor/distlib/_backport/sysconfig.py#L431-L433 |
|
krintoxi/NoobSec-Toolkit | 38738541cbc03cedb9a3b3ed13b629f781ad64f6 | NoobSecToolkit /tools/inject/thirdparty/beautifulsoup/beautifulsoup.py | python | Tag.setString | (self, string) | Replace the contents of the tag with a string | Replace the contents of the tag with a string | [
"Replace",
"the",
"contents",
"of",
"the",
"tag",
"with",
"a",
"string"
] | def setString(self, string):
"""Replace the contents of the tag with a string"""
self.clear()
self.append(string) | [
"def",
"setString",
"(",
"self",
",",
"string",
")",
":",
"self",
".",
"clear",
"(",
")",
"self",
".",
"append",
"(",
"string",
")"
] | https://github.com/krintoxi/NoobSec-Toolkit/blob/38738541cbc03cedb9a3b3ed13b629f781ad64f6/NoobSecToolkit /tools/inject/thirdparty/beautifulsoup/beautifulsoup.py#L557-L560 |
||
andresriancho/w3af | cd22e5252243a87aaa6d0ddea47cf58dacfe00a9 | w3af/core/data/url/handlers/cache_backend/cached_response.py | python | CachedResponse.init | () | Takes all the actions needed for the CachedResponse class to work,
in most cases this means creating a file, directory or database. | Takes all the actions needed for the CachedResponse class to work,
in most cases this means creating a file, directory or database. | [
"Takes",
"all",
"the",
"actions",
"needed",
"for",
"the",
"CachedResponse",
"class",
"to",
"work",
"in",
"most",
"cases",
"this",
"means",
"creating",
"a",
"file",
"directory",
"or",
"database",
"."
] | def init():
"""
Takes all the actions needed for the CachedResponse class to work,
in most cases this means creating a file, directory or database.
"""
raise NotImplementedError | [
"def",
"init",
"(",
")",
":",
"raise",
"NotImplementedError"
] | https://github.com/andresriancho/w3af/blob/cd22e5252243a87aaa6d0ddea47cf58dacfe00a9/w3af/core/data/url/handlers/cache_backend/cached_response.py#L149-L154 |
||
getkeops/keops | fbe73a5de07dabc7c20df9cbb5a7e5e5ad360524 | pykeops/torch/lazytensor/LazyTensor.py | python | Vj | (x_or_ind, dim=None) | return Var(x_or_ind, dim, 1) | r"""
Simple wrapper that returns an instantiation of :class:`LazyTensor` of type 1. | r"""
Simple wrapper that returns an instantiation of :class:`LazyTensor` of type 1. | [
"r",
"Simple",
"wrapper",
"that",
"returns",
"an",
"instantiation",
"of",
":",
"class",
":",
"LazyTensor",
"of",
"type",
"1",
"."
] | def Vj(x_or_ind, dim=None):
r"""
Simple wrapper that returns an instantiation of :class:`LazyTensor` of type 1.
"""
return Var(x_or_ind, dim, 1) | [
"def",
"Vj",
"(",
"x_or_ind",
",",
"dim",
"=",
"None",
")",
":",
"return",
"Var",
"(",
"x_or_ind",
",",
"dim",
",",
"1",
")"
] | https://github.com/getkeops/keops/blob/fbe73a5de07dabc7c20df9cbb5a7e5e5ad360524/pykeops/torch/lazytensor/LazyTensor.py#L26-L30 |
|
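A typical use of Vj together with its Vi counterpart, sketching a Gaussian-kernel reduction (requires a working KeOps install; array sizes are illustrative):

    import torch
    from pykeops.torch import Vi, Vj

    x = Vi(torch.randn(1000, 3))  # i-indexed variable (type 0)
    y = Vj(torch.randn(2000, 3))  # j-indexed variable (type 1)
    D2 = ((x - y) ** 2).sum(-1)   # symbolic (1000, 2000) squared distances
    s = (-D2).exp().sum(dim=1)    # reduce over j -> dense (1000, 1) tensor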
jymcheong/AutoTTP | 617128fe71537de4579176d7170a3e8f1680b6a6 | EmpireAPIWrapper/wrapper.py | python | modules.module_search | (self, srch_str) | return utilties._postURL(self, full_url, data) | Search modules for passed term
\n:param srch_str: Search term
\n:type srch_str: str
\n:rtype: dict | Search modules for passed term
\n:param srch_str: Search term
\n:type srch_str: str
\n:rtype: dict | [
"Search",
"modules",
"for",
"passed",
"term",
"\\",
"n",
":",
"param",
"srch_str",
":",
"Search",
"term",
"\\",
"n",
":",
"type",
"srch_str",
":",
"str",
"\\",
"n",
":",
"rtype",
":",
"dict"
] | def module_search(self, srch_str):
"""
Search modules for passed term
\n:param srch_str: Search term
\n:type srch_str: str
\n:rtype: dict
"""
full_url = '/api/modules/search'
data = {'term': srch_str}
return utilties._postURL(self, full_url, data) | [
"def",
"module_search",
"(",
"self",
",",
"srch_str",
")",
":",
"full_url",
"=",
"'/api/modules/search'",
"data",
"=",
"{",
"'term'",
":",
"srch_str",
"}",
"return",
"utilties",
".",
"_postURL",
"(",
"self",
",",
"full_url",
",",
"data",
")"
] | https://github.com/jymcheong/AutoTTP/blob/617128fe71537de4579176d7170a3e8f1680b6a6/EmpireAPIWrapper/wrapper.py#L247-L256 |
|
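A hedged usage sketch; 'empire' is assumed to be an authenticated instance of the wrapper class, and the result field names are assumptions based on the Empire REST API:

    results = empire.module_search('keylogger')
    for mod in results.get('modules', []):  # field names are assumptions
        print(mod.get('Name'))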
angr/angr | 4b04d56ace135018083d36d9083805be8146688b | angr/engines/vex/claripy/ccall.py | python | armg_calculate_flag_n | (state, cc_op, cc_dep1, cc_dep2, cc_dep3) | [] | def armg_calculate_flag_n(state, cc_op, cc_dep1, cc_dep2, cc_dep3):
concrete_op = op_concretize(cc_op)
flag = None
if concrete_op == ARMG_CC_OP_COPY:
flag = claripy.LShR(cc_dep1, ARMG_CC_SHIFT_N) & 1
elif concrete_op == ARMG_CC_OP_ADD:
res = cc_dep1 + cc_dep2
flag = claripy.LShR(res, 31)
elif concrete_op == ARMG_CC_OP_SUB:
res = cc_dep1 - cc_dep2
flag = claripy.LShR(res, 31)
elif concrete_op == ARMG_CC_OP_ADC:
res = cc_dep1 + cc_dep2 + cc_dep3
flag = claripy.LShR(res, 31)
elif concrete_op == ARMG_CC_OP_SBB:
res = cc_dep1 - cc_dep2 - (cc_dep3^1)
flag = claripy.LShR(res, 31)
elif concrete_op == ARMG_CC_OP_LOGIC:
flag = claripy.LShR(cc_dep1, 31)
elif concrete_op == ARMG_CC_OP_MUL:
flag = claripy.LShR(cc_dep1, 31)
elif concrete_op == ARMG_CC_OP_MULL:
flag = claripy.LShR(cc_dep2, 31)
if flag is not None:
return flag
l.error("Unknown cc_op %s (armg_calculate_flag_n)", cc_op)
raise SimCCallError("Unknown cc_op %s" % cc_op) | [
"def",
"armg_calculate_flag_n",
"(",
"state",
",",
"cc_op",
",",
"cc_dep1",
",",
"cc_dep2",
",",
"cc_dep3",
")",
":",
"concrete_op",
"=",
"op_concretize",
"(",
"cc_op",
")",
"flag",
"=",
"None",
"if",
"concrete_op",
"==",
"ARMG_CC_OP_COPY",
":",
"flag",
"=",
"claripy",
".",
"LShR",
"(",
"cc_dep1",
",",
"ARMG_CC_SHIFT_N",
")",
"&",
"1",
"elif",
"concrete_op",
"==",
"ARMG_CC_OP_ADD",
":",
"res",
"=",
"cc_dep1",
"+",
"cc_dep2",
"flag",
"=",
"claripy",
".",
"LShR",
"(",
"res",
",",
"31",
")",
"elif",
"concrete_op",
"==",
"ARMG_CC_OP_SUB",
":",
"res",
"=",
"cc_dep1",
"-",
"cc_dep2",
"flag",
"=",
"claripy",
".",
"LShR",
"(",
"res",
",",
"31",
")",
"elif",
"concrete_op",
"==",
"ARMG_CC_OP_ADC",
":",
"res",
"=",
"cc_dep1",
"+",
"cc_dep2",
"+",
"cc_dep3",
"flag",
"=",
"claripy",
".",
"LShR",
"(",
"res",
",",
"31",
")",
"elif",
"concrete_op",
"==",
"ARMG_CC_OP_SBB",
":",
"res",
"=",
"cc_dep1",
"-",
"cc_dep2",
"-",
"(",
"cc_dep3",
"^",
"1",
")",
"flag",
"=",
"claripy",
".",
"LShR",
"(",
"res",
",",
"31",
")",
"elif",
"concrete_op",
"==",
"ARMG_CC_OP_LOGIC",
":",
"flag",
"=",
"claripy",
".",
"LShR",
"(",
"cc_dep1",
",",
"31",
")",
"elif",
"concrete_op",
"==",
"ARMG_CC_OP_MUL",
":",
"flag",
"=",
"claripy",
".",
"LShR",
"(",
"cc_dep1",
",",
"31",
")",
"elif",
"concrete_op",
"==",
"ARMG_CC_OP_MULL",
":",
"flag",
"=",
"claripy",
".",
"LShR",
"(",
"cc_dep2",
",",
"31",
")",
"if",
"flag",
"is",
"not",
"None",
":",
"return",
"flag",
"l",
".",
"error",
"(",
"\"Unknown cc_op %s (armg_calculate_flag_n)\"",
",",
"cc_op",
")",
"raise",
"SimCCallError",
"(",
"\"Unknown cc_op %s\"",
"%",
"cc_op",
")"
] | https://github.com/angr/angr/blob/4b04d56ace135018083d36d9083805be8146688b/angr/engines/vex/claripy/ccall.py#L1419-L1447 |
||||
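The N flag is simply bit 31 of the 32-bit result; a concrete-arithmetic sketch of the ARMG_CC_OP_ADD case above:

    def flag_n_add(cc_dep1, cc_dep2):
        res = (cc_dep1 + cc_dep2) & 0xFFFFFFFF  # 32-bit wrap-around
        return res >> 31                        # sign bit == N flag

    assert flag_n_add(0x7FFFFFFF, 1) == 1  # overflow into the sign bit
    assert flag_n_add(1, 2) == 0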
linxid/Machine_Learning_Study_Path | 558e82d13237114bbb8152483977806fc0c222af | Machine Learning In Action/Chapter5-LogisticRegression/venv/Lib/site-packages/pip/_vendor/pyparsing.py | python | And.__str__ | ( self ) | return self.strRepr | [] | def __str__( self ):
if hasattr(self,"name"):
return self.name
if self.strRepr is None:
self.strRepr = "{" + " ".join(_ustr(e) for e in self.exprs) + "}"
return self.strRepr | [
"def",
"__str__",
"(",
"self",
")",
":",
"if",
"hasattr",
"(",
"self",
",",
"\"name\"",
")",
":",
"return",
"self",
".",
"name",
"if",
"self",
".",
"strRepr",
"is",
"None",
":",
"self",
".",
"strRepr",
"=",
"\"{\"",
"+",
"\" \"",
".",
"join",
"(",
"_ustr",
"(",
"e",
")",
"for",
"e",
"in",
"self",
".",
"exprs",
")",
"+",
"\"}\"",
"return",
"self",
".",
"strRepr"
] | https://github.com/linxid/Machine_Learning_Study_Path/blob/558e82d13237114bbb8152483977806fc0c222af/Machine Learning In Action/Chapter5-LogisticRegression/venv/Lib/site-packages/pip/_vendor/pyparsing.py#L3393-L3400 |
|||
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_flaskbb/lib/python2.7/site-packages/pip/_vendor/distlib/resources.py | python | ZipResourceFinder.__init__ | (self, module) | [] | def __init__(self, module):
super(ZipResourceFinder, self).__init__(module)
archive = self.loader.archive
self.prefix_len = 1 + len(archive)
# PyPy doesn't have a _files attr on zipimporter, and you can't set one
if hasattr(self.loader, '_files'):
self._files = self.loader._files
else:
self._files = zipimport._zip_directory_cache[archive]
self.index = sorted(self._files) | [
"def",
"__init__",
"(",
"self",
",",
"module",
")",
":",
"super",
"(",
"ZipResourceFinder",
",",
"self",
")",
".",
"__init__",
"(",
"module",
")",
"archive",
"=",
"self",
".",
"loader",
".",
"archive",
"self",
".",
"prefix_len",
"=",
"1",
"+",
"len",
"(",
"archive",
")",
"# PyPy doesn't have a _files attr on zipimporter, and you can't set one",
"if",
"hasattr",
"(",
"self",
".",
"loader",
",",
"'_files'",
")",
":",
"self",
".",
"_files",
"=",
"self",
".",
"loader",
".",
"_files",
"else",
":",
"self",
".",
"_files",
"=",
"zipimport",
".",
"_zip_directory_cache",
"[",
"archive",
"]",
"self",
".",
"index",
"=",
"sorted",
"(",
"self",
".",
"_files",
")"
] | https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/lib/python2.7/site-packages/pip/_vendor/distlib/resources.py#L213-L222 |
||||
oracle/graalpython | 577e02da9755d916056184ec441c26e00b70145c | graalpython/lib-python/3/idlelib/redirector.py | python | OriginalCommand.__repr__ | (self) | return "%s(%r, %r)" % (self.__class__.__name__,
self.redir, self.operation) | [] | def __repr__(self):
return "%s(%r, %r)" % (self.__class__.__name__,
self.redir, self.operation) | [
"def",
"__repr__",
"(",
"self",
")",
":",
"return",
"\"%s(%r, %r)\"",
"%",
"(",
"self",
".",
"__class__",
".",
"__name__",
",",
"self",
".",
"redir",
",",
"self",
".",
"operation",
")"
] | https://github.com/oracle/graalpython/blob/577e02da9755d916056184ec441c26e00b70145c/graalpython/lib-python/3/idlelib/redirector.py#L145-L147 |
|||
securesystemslab/zippy | ff0e84ac99442c2c55fe1d285332cfd4e185e089 | zippy/benchmarks/src/benchmarks/whoosh/src/whoosh/codec/whoosh3.py | python | _vecfield | (fieldname) | return "_%s_vec" % fieldname | [] | def _vecfield(fieldname):
return "_%s_vec" % fieldname | [
"def",
"_vecfield",
"(",
"fieldname",
")",
":",
"return",
"\"_%s_vec\"",
"%",
"fieldname"
] | https://github.com/securesystemslab/zippy/blob/ff0e84ac99442c2c55fe1d285332cfd4e185e089/zippy/benchmarks/src/benchmarks/whoosh/src/whoosh/codec/whoosh3.py#L144-L145 |
|||
microsoft/nni | 31f11f51249660930824e888af0d4e022823285c | nni/algorithms/hpo/networkmorphism_tuner/graph.py | python | Graph.produce_onnx_model | (self) | return ONNXModel(self) | Build a new ONNX model based on the current graph. | Build a new ONNX model based on the current graph. | [
"Build",
"a",
"new",
"ONNX",
"model",
"based",
"on",
"the",
"current",
"graph",
"."
] | def produce_onnx_model(self):
"""Build a new ONNX model based on the current graph."""
return ONNXModel(self) | [
"def",
"produce_onnx_model",
"(",
"self",
")",
":",
"return",
"ONNXModel",
"(",
"self",
")"
] | https://github.com/microsoft/nni/blob/31f11f51249660930824e888af0d4e022823285c/nni/algorithms/hpo/networkmorphism_tuner/graph.py#L645-L647 |
|
triaquae/triaquae | bbabf736b3ba56a0c6498e7f04e16c13b8b8f2b9 | TriAquae/hosts/views.py | python | runCmd | (request) | return HttpResponse('{"TrackMark":%s, "TotalNum":%s}' %(track_mark, task_num)) | [] | def runCmd(request):
track_mark = MultiRunCounter.AddNumber()
user_input = request.POST['command']
user_account = request.POST['UserName']
iplists = request.POST['IPLists'].split(',')
task_num = len(set(iplists))
print "user input command is: %s and username is:%s and iplists are: %s" %(user_input,user_account,' '.join(iplists))
cmd = "python %s/TriAquae/backend/multiprocessing_runCMD2.py %s '%s' '%s' %s &" % (tri_config.Working_dir,track_mark,' '.join(iplists),user_input,user_account)
os.system(cmd)
return HttpResponse('{"TrackMark":%s, "TotalNum":%s}' %(track_mark, task_num)) | [
"def",
"runCmd",
"(",
"request",
")",
":",
"track_mark",
"=",
"MultiRunCounter",
".",
"AddNumber",
"(",
")",
"user_input",
"=",
"request",
".",
"POST",
"[",
"'command'",
"]",
"user_account",
"=",
"request",
".",
"POST",
"[",
"'UserName'",
"]",
"iplists",
"=",
"request",
".",
"POST",
"[",
"'IPLists'",
"]",
".",
"split",
"(",
"','",
")",
"task_num",
"=",
"len",
"(",
"set",
"(",
"iplists",
")",
")",
"print",
"\"user input command is: %s and username is:%s and iplists are: %s\"",
"%",
"(",
"user_input",
",",
"user_account",
",",
"' '",
".",
"join",
"(",
"iplists",
")",
")",
"cmd",
"=",
"\"python %s/TriAquae/backend/multiprocessing_runCMD2.py %s '%s' '%s' %s &\"",
"%",
"(",
"tri_config",
".",
"Working_dir",
",",
"track_mark",
",",
"' '",
".",
"join",
"(",
"iplists",
")",
",",
"user_input",
",",
"user_account",
")",
"os",
".",
"system",
"(",
"cmd",
")",
"return",
"HttpResponse",
"(",
"'{\"TrackMark\":%s, \"TotalNum\":%s}'",
"%",
"(",
"track_mark",
",",
"task_num",
")",
")"
] | https://github.com/triaquae/triaquae/blob/bbabf736b3ba56a0c6498e7f04e16c13b8b8f2b9/TriAquae/hosts/views.py#L252-L262 |
|||
inkandswitch/livebook | 93c8d467734787366ad084fc3566bf5cbe249c51 | public/pypyjs/modules/numpy/random/mtrand.py | python | RandomState.logistic | (self, loc=0.0, scale=1.0, size=None) | return cont2_array(self.internal_state, _mtrand.rk_logistic, size, oloc, oscale) | logistic(loc=0.0, scale=1.0, size=None)
Draw samples from a Logistic distribution.
Samples are drawn from a Logistic distribution with specified
parameters, loc (location or mean, also median), and scale (>0).
Parameters
----------
loc : float
scale : float > 0.
size : {tuple, int}
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn.
Returns
-------
samples : {ndarray, scalar}
Drawn samples from the parameterized logistic distribution.
See Also
--------
scipy.stats.distributions.logistic : probability density function,
distribution or cumulative density function, etc.
Notes
-----
The probability density for the Logistic distribution is
.. math:: P(x) = \\frac{e^{-(x-\\mu)/s}}{s(1+e^{-(x-\\mu)/s})^2},
where :math:`\\mu` = location and :math:`s` = scale.
The Logistic distribution is used in Extreme Value problems where it
can act as a mixture of Gumbel distributions, in Epidemiology, and by
the World Chess Federation (FIDE) where it is used in the Elo ranking
system, assuming the performance of each player is a logistically
distributed random variable.
References
----------
.. [1] Reiss, R.-D. and Thomas M. (2001), Statistical Analysis of Extreme
Values, from Insurance, Finance, Hydrology and Other Fields,
Birkhauser Verlag, Basel, pp 132-133.
.. [2] Weisstein, Eric W. "Logistic Distribution." From
MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/LogisticDistribution.html
.. [3] Wikipedia, "Logistic-distribution",
http://en.wikipedia.org/wiki/Logistic-distribution
Examples
--------
Draw samples from the distribution:
>>> loc, scale = 10, 1
>>> s = np.random.logistic(loc, scale, 10000)
>>> count, bins, ignored = plt.hist(s, bins=50)
# plot against distribution
>>> def logist(x, loc, scale):
... return exp((loc-x)/scale)/(scale*(1+exp((loc-x)/scale))**2)
>>> plt.plot(bins, logist(bins, loc, scale)*count.max()/\\
... logist(bins, loc, scale).max())
>>> plt.show() | logistic(loc=0.0, scale=1.0, size=None) | [
"logistic",
"(",
"loc",
"=",
"0",
".",
"0",
"scale",
"=",
"1",
".",
"0",
"size",
"=",
"None",
")"
] | def logistic(self, loc=0.0, scale=1.0, size=None):
"""
logistic(loc=0.0, scale=1.0, size=None)
Draw samples from a Logistic distribution.
Samples are drawn from a Logistic distribution with specified
parameters, loc (location or mean, also median), and scale (>0).
Parameters
----------
loc : float
scale : float > 0.
size : {tuple, int}
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn.
Returns
-------
samples : {ndarray, scalar}
Drawn samples from the parameterized logistic distribution.
See Also
--------
scipy.stats.distributions.logistic : probability density function,
distribution or cumulative density function, etc.
Notes
-----
The probability density for the Logistic distribution is
.. math:: P(x) = \\frac{e^{-(x-\\mu)/s}}{s(1+e^{-(x-\\mu)/s})^2},
where :math:`\\mu` = location and :math:`s` = scale.
The Logistic distribution is used in Extreme Value problems where it
can act as a mixture of Gumbel distributions, in Epidemiology, and by
the World Chess Federation (FIDE) where it is used in the Elo ranking
system, assuming the performance of each player is a logistically
distributed random variable.
References
----------
.. [1] Reiss, R.-D. and Thomas M. (2001), Statistical Analysis of Extreme
Values, from Insurance, Finance, Hydrology and Other Fields,
Birkhauser Verlag, Basel, pp 132-133.
.. [2] Weisstein, Eric W. "Logistic Distribution." From
MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/LogisticDistribution.html
.. [3] Wikipedia, "Logistic-distribution",
http://en.wikipedia.org/wiki/Logistic-distribution
Examples
--------
Draw samples from the distribution:
>>> loc, scale = 10, 1
>>> s = np.random.logistic(loc, scale, 10000)
>>> count, bins, ignored = plt.hist(s, bins=50)
# plot against distribution
>>> def logist(x, loc, scale):
... return exp((loc-x)/scale)/(scale*(1+exp((loc-x)/scale))**2)
>>> plt.plot(bins, logist(bins, loc, scale)*count.max()/\\
... logist(bins, loc, scale).max())
>>> plt.show()
"""
try:
floc = float(loc)
fscale = float(scale)
except:
pass
else:
if fscale <= 0:
raise ValueError("scale <= 0")
return cont2_array_sc(self.internal_state, _mtrand.rk_logistic, size, floc, fscale)
oloc = np.array(loc, np.float64) # aligned?
oscale = np.array(scale, np.float64) # aligned?
if np.any(np.less_equal(oscale, 0.0)):
raise ValueError("scale <= 0")
return cont2_array(self.internal_state, _mtrand.rk_logistic, size, oloc, oscale) | [
"def",
"logistic",
"(",
"self",
",",
"loc",
"=",
"0.0",
",",
"scale",
"=",
"1.0",
",",
"size",
"=",
"None",
")",
":",
"try",
":",
"floc",
"=",
"float",
"(",
"loc",
")",
"fscale",
"=",
"float",
"(",
"scale",
")",
"except",
":",
"pass",
"else",
":",
"if",
"fscale",
"<=",
"0",
":",
"raise",
"ValueError",
"(",
"\"scale <= 0\"",
")",
"return",
"cont2_array_sc",
"(",
"self",
".",
"internal_state",
",",
"_mtrand",
".",
"rk_logistic",
",",
"size",
",",
"floc",
",",
"fscale",
")",
"oloc",
"=",
"np",
".",
"array",
"(",
"loc",
",",
"np",
".",
"float64",
")",
"# aligned?",
"oscale",
"=",
"np",
".",
"array",
"(",
"scale",
",",
"np",
".",
"float64",
")",
"# aligned?",
"if",
"np",
".",
"any",
"(",
"np",
".",
"less_equal",
"(",
"oscale",
",",
"0.0",
")",
")",
":",
"raise",
"ValueError",
"(",
"\"scale <= 0\"",
")",
"return",
"cont2_array",
"(",
"self",
".",
"internal_state",
",",
"_mtrand",
".",
"rk_logistic",
",",
"size",
",",
"oloc",
",",
"oscale",
")"
] | https://github.com/inkandswitch/livebook/blob/93c8d467734787366ad084fc3566bf5cbe249c51/public/pypyjs/modules/numpy/random/mtrand.py#L2496-L2580 |
|
pculture/miro | d8e4594441939514dd2ac29812bf37087bb3aea5 | tv/lib/libdaap/pybonjour.py | python | DNSServiceRef.close | (self) | Close the connection to the mDNS daemon and terminate any
associated browse, resolve, etc. operations. | [] | def close(self):
"""
Close the connection to the mDNS daemon and terminate any
associated browse, resolve, etc. operations.
"""
if self._valid():
for ref in self._record_refs:
ref._invalidate()
del self._record_refs
_global_lock.acquire()
try:
_DNSServiceRefDeallocate(self)
finally:
_global_lock.release()
self._invalidate()
del self._callbacks | [
"def",
"close",
"(",
"self",
")",
":",
"if",
"self",
".",
"_valid",
"(",
")",
":",
"for",
"ref",
"in",
"self",
".",
"_record_refs",
":",
"ref",
".",
"_invalidate",
"(",
")",
"del",
"self",
".",
"_record_refs",
"_global_lock",
".",
"acquire",
"(",
")",
"try",
":",
"_DNSServiceRefDeallocate",
"(",
"self",
")",
"finally",
":",
"_global_lock",
".",
"release",
"(",
")",
"self",
".",
"_invalidate",
"(",
")",
"del",
"self",
".",
"_callbacks"
] | https://github.com/pculture/miro/blob/d8e4594441939514dd2ac29812bf37087bb3aea5/tv/lib/libdaap/pybonjour.py#L452-L472 |
|||
naftaliharris/tauthon | 5587ceec329b75f7caf6d65a036db61ac1bae214 | Lib/pydoc.py | python | TextDoc.docclass | (self, object, name=None, mod=None, *ignored) | return title + '\n' + self.indent(rstrip(contents), ' | ') + '\n' | Produce text documentation for a given class object. | Produce text documentation for a given class object. | [
"Produce",
"text",
"documentation",
"for",
"a",
"given",
"class",
"object",
"."
] | def docclass(self, object, name=None, mod=None, *ignored):
"""Produce text documentation for a given class object."""
realname = object.__name__
name = name or realname
bases = object.__bases__
def makename(c, m=object.__module__):
return classname(c, m)
if name == realname:
title = 'class ' + self.bold(realname)
else:
title = self.bold(name) + ' = class ' + realname
if bases:
parents = map(makename, bases)
title = title + '(%s)' % join(parents, ', ')
doc = getdoc(object)
contents = doc and [doc + '\n'] or []
push = contents.append
# List the mro, if non-trivial.
mro = deque(inspect.getmro(object))
if len(mro) > 2:
push("Method resolution order:")
for base in mro:
push(' ' + makename(base))
push('')
# Cute little class to pump out a horizontal rule between sections.
class HorizontalRule:
def __init__(self):
self.needone = 0
def maybe(self):
if self.needone:
push('-' * 70)
self.needone = 1
hr = HorizontalRule()
def spill(msg, attrs, predicate):
ok, attrs = _split_list(attrs, predicate)
if ok:
hr.maybe()
push(msg)
for name, kind, homecls, value in ok:
try:
value = getattr(object, name)
except Exception:
# Some descriptors may meet a failure in their __get__.
# (bug #1785)
push(self._docdescriptor(name, value, mod))
else:
push(self.document(value,
name, mod, object))
return attrs
def spilldescriptors(msg, attrs, predicate):
ok, attrs = _split_list(attrs, predicate)
if ok:
hr.maybe()
push(msg)
for name, kind, homecls, value in ok:
push(self._docdescriptor(name, value, mod))
return attrs
def spilldata(msg, attrs, predicate):
ok, attrs = _split_list(attrs, predicate)
if ok:
hr.maybe()
push(msg)
for name, kind, homecls, value in ok:
if (hasattr(value, '__call__') or
inspect.isdatadescriptor(value)):
doc = getdoc(value)
else:
doc = None
push(self.docother(getattr(object, name),
name, mod, maxlen=70, doc=doc) + '\n')
return attrs
attrs = filter(lambda data: visiblename(data[0], obj=object),
classify_class_attrs(object))
while attrs:
if mro:
thisclass = mro.popleft()
else:
thisclass = attrs[0][2]
attrs, inherited = _split_list(attrs, lambda t: t[2] is thisclass)
if thisclass is __builtin__.object:
attrs = inherited
continue
elif thisclass is object:
tag = "defined here"
else:
tag = "inherited from %s" % classname(thisclass,
object.__module__)
# Sort attrs by name.
attrs.sort()
# Pump out the attrs, segregated by kind.
attrs = spill("Methods %s:\n" % tag, attrs,
lambda t: t[1] == 'method')
attrs = spill("Class methods %s:\n" % tag, attrs,
lambda t: t[1] == 'class method')
attrs = spill("Static methods %s:\n" % tag, attrs,
lambda t: t[1] == 'static method')
attrs = spilldescriptors("Data descriptors %s:\n" % tag, attrs,
lambda t: t[1] == 'data descriptor')
attrs = spilldata("Data and other attributes %s:\n" % tag, attrs,
lambda t: t[1] == 'data')
assert attrs == []
attrs = inherited
contents = '\n'.join(contents)
if not contents:
return title + '\n'
return title + '\n' + self.indent(rstrip(contents), ' | ') + '\n' | [
"def",
"docclass",
"(",
"self",
",",
"object",
",",
"name",
"=",
"None",
",",
"mod",
"=",
"None",
",",
"*",
"ignored",
")",
":",
"realname",
"=",
"object",
".",
"__name__",
"name",
"=",
"name",
"or",
"realname",
"bases",
"=",
"object",
".",
"__bases__",
"def",
"makename",
"(",
"c",
",",
"m",
"=",
"object",
".",
"__module__",
")",
":",
"return",
"classname",
"(",
"c",
",",
"m",
")",
"if",
"name",
"==",
"realname",
":",
"title",
"=",
"'class '",
"+",
"self",
".",
"bold",
"(",
"realname",
")",
"else",
":",
"title",
"=",
"self",
".",
"bold",
"(",
"name",
")",
"+",
"' = class '",
"+",
"realname",
"if",
"bases",
":",
"parents",
"=",
"map",
"(",
"makename",
",",
"bases",
")",
"title",
"=",
"title",
"+",
"'(%s)'",
"%",
"join",
"(",
"parents",
",",
"', '",
")",
"doc",
"=",
"getdoc",
"(",
"object",
")",
"contents",
"=",
"doc",
"and",
"[",
"doc",
"+",
"'\\n'",
"]",
"or",
"[",
"]",
"push",
"=",
"contents",
".",
"append",
"# List the mro, if non-trivial.",
"mro",
"=",
"deque",
"(",
"inspect",
".",
"getmro",
"(",
"object",
")",
")",
"if",
"len",
"(",
"mro",
")",
">",
"2",
":",
"push",
"(",
"\"Method resolution order:\"",
")",
"for",
"base",
"in",
"mro",
":",
"push",
"(",
"' '",
"+",
"makename",
"(",
"base",
")",
")",
"push",
"(",
"''",
")",
"# Cute little class to pump out a horizontal rule between sections.",
"class",
"HorizontalRule",
":",
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"needone",
"=",
"0",
"def",
"maybe",
"(",
"self",
")",
":",
"if",
"self",
".",
"needone",
":",
"push",
"(",
"'-'",
"*",
"70",
")",
"self",
".",
"needone",
"=",
"1",
"hr",
"=",
"HorizontalRule",
"(",
")",
"def",
"spill",
"(",
"msg",
",",
"attrs",
",",
"predicate",
")",
":",
"ok",
",",
"attrs",
"=",
"_split_list",
"(",
"attrs",
",",
"predicate",
")",
"if",
"ok",
":",
"hr",
".",
"maybe",
"(",
")",
"push",
"(",
"msg",
")",
"for",
"name",
",",
"kind",
",",
"homecls",
",",
"value",
"in",
"ok",
":",
"try",
":",
"value",
"=",
"getattr",
"(",
"object",
",",
"name",
")",
"except",
"Exception",
":",
"# Some descriptors may meet a failure in their __get__.",
"# (bug #1785)",
"push",
"(",
"self",
".",
"_docdescriptor",
"(",
"name",
",",
"value",
",",
"mod",
")",
")",
"else",
":",
"push",
"(",
"self",
".",
"document",
"(",
"value",
",",
"name",
",",
"mod",
",",
"object",
")",
")",
"return",
"attrs",
"def",
"spilldescriptors",
"(",
"msg",
",",
"attrs",
",",
"predicate",
")",
":",
"ok",
",",
"attrs",
"=",
"_split_list",
"(",
"attrs",
",",
"predicate",
")",
"if",
"ok",
":",
"hr",
".",
"maybe",
"(",
")",
"push",
"(",
"msg",
")",
"for",
"name",
",",
"kind",
",",
"homecls",
",",
"value",
"in",
"ok",
":",
"push",
"(",
"self",
".",
"_docdescriptor",
"(",
"name",
",",
"value",
",",
"mod",
")",
")",
"return",
"attrs",
"def",
"spilldata",
"(",
"msg",
",",
"attrs",
",",
"predicate",
")",
":",
"ok",
",",
"attrs",
"=",
"_split_list",
"(",
"attrs",
",",
"predicate",
")",
"if",
"ok",
":",
"hr",
".",
"maybe",
"(",
")",
"push",
"(",
"msg",
")",
"for",
"name",
",",
"kind",
",",
"homecls",
",",
"value",
"in",
"ok",
":",
"if",
"(",
"hasattr",
"(",
"value",
",",
"'__call__'",
")",
"or",
"inspect",
".",
"isdatadescriptor",
"(",
"value",
")",
")",
":",
"doc",
"=",
"getdoc",
"(",
"value",
")",
"else",
":",
"doc",
"=",
"None",
"push",
"(",
"self",
".",
"docother",
"(",
"getattr",
"(",
"object",
",",
"name",
")",
",",
"name",
",",
"mod",
",",
"maxlen",
"=",
"70",
",",
"doc",
"=",
"doc",
")",
"+",
"'\\n'",
")",
"return",
"attrs",
"attrs",
"=",
"filter",
"(",
"lambda",
"data",
":",
"visiblename",
"(",
"data",
"[",
"0",
"]",
",",
"obj",
"=",
"object",
")",
",",
"classify_class_attrs",
"(",
"object",
")",
")",
"while",
"attrs",
":",
"if",
"mro",
":",
"thisclass",
"=",
"mro",
".",
"popleft",
"(",
")",
"else",
":",
"thisclass",
"=",
"attrs",
"[",
"0",
"]",
"[",
"2",
"]",
"attrs",
",",
"inherited",
"=",
"_split_list",
"(",
"attrs",
",",
"lambda",
"t",
":",
"t",
"[",
"2",
"]",
"is",
"thisclass",
")",
"if",
"thisclass",
"is",
"__builtin__",
".",
"object",
":",
"attrs",
"=",
"inherited",
"continue",
"elif",
"thisclass",
"is",
"object",
":",
"tag",
"=",
"\"defined here\"",
"else",
":",
"tag",
"=",
"\"inherited from %s\"",
"%",
"classname",
"(",
"thisclass",
",",
"object",
".",
"__module__",
")",
"# Sort attrs by name.",
"attrs",
".",
"sort",
"(",
")",
"# Pump out the attrs, segregated by kind.",
"attrs",
"=",
"spill",
"(",
"\"Methods %s:\\n\"",
"%",
"tag",
",",
"attrs",
",",
"lambda",
"t",
":",
"t",
"[",
"1",
"]",
"==",
"'method'",
")",
"attrs",
"=",
"spill",
"(",
"\"Class methods %s:\\n\"",
"%",
"tag",
",",
"attrs",
",",
"lambda",
"t",
":",
"t",
"[",
"1",
"]",
"==",
"'class method'",
")",
"attrs",
"=",
"spill",
"(",
"\"Static methods %s:\\n\"",
"%",
"tag",
",",
"attrs",
",",
"lambda",
"t",
":",
"t",
"[",
"1",
"]",
"==",
"'static method'",
")",
"attrs",
"=",
"spilldescriptors",
"(",
"\"Data descriptors %s:\\n\"",
"%",
"tag",
",",
"attrs",
",",
"lambda",
"t",
":",
"t",
"[",
"1",
"]",
"==",
"'data descriptor'",
")",
"attrs",
"=",
"spilldata",
"(",
"\"Data and other attributes %s:\\n\"",
"%",
"tag",
",",
"attrs",
",",
"lambda",
"t",
":",
"t",
"[",
"1",
"]",
"==",
"'data'",
")",
"assert",
"attrs",
"==",
"[",
"]",
"attrs",
"=",
"inherited",
"contents",
"=",
"'\\n'",
".",
"join",
"(",
"contents",
")",
"if",
"not",
"contents",
":",
"return",
"title",
"+",
"'\\n'",
"return",
"title",
"+",
"'\\n'",
"+",
"self",
".",
"indent",
"(",
"rstrip",
"(",
"contents",
")",
",",
"' | '",
")",
"+",
"'\\n'"
] | https://github.com/naftaliharris/tauthon/blob/5587ceec329b75f7caf6d65a036db61ac1bae214/Lib/pydoc.py#L1176-L1294 |
|
pika/pika | 12dcdf15d0932c388790e0fa990810bfd21b1a32 | pika/channel.py | python | Channel.add_on_close_callback | (self, callback) | Pass a callback function that will be called when the channel is
closed. The callback function will receive the channel and an exception
describing why the channel was closed.
If the channel is closed by the broker via Channel.Close, the callback
will receive `ChannelClosedByBroker` as the reason.
If graceful user-initiated channel closing completes successfully
(either directly or indirectly, by closing a connection containing the
channel) without a Channel.Close from the broker and without loss of
connection, the callback will receive a `ChannelClosedByClient`
exception as the reason.
If the channel was closed due to loss of connection, the callback will
receive another exception type describing the failure.
:param callable callback: The callback, having the signature:
callback(Channel, Exception reason) | Pass a callback function that will be called when the channel is
closed. The callback function will receive the channel and an exception
describing why the channel was closed. | [
"Pass",
"a",
"callback",
"function",
"that",
"will",
"be",
"called",
"when",
"the",
"channel",
"is",
"closed",
".",
"The",
"callback",
"function",
"will",
"receive",
"the",
"channel",
"and",
"an",
"exception",
"describing",
"why",
"the",
"channel",
"was",
"closed",
"."
] | def add_on_close_callback(self, callback):
"""Pass a callback function that will be called when the channel is
closed. The callback function will receive the channel and an exception
describing why the channel was closed.
If the channel is closed by the broker via Channel.Close, the callback
will receive `ChannelClosedByBroker` as the reason.
If graceful user-initiated channel closing completes successfully
(either directly or indirectly, by closing a connection containing the
channel) without a Channel.Close from the broker and without loss of
connection, the callback will receive a `ChannelClosedByClient`
exception as the reason.
If the channel was closed due to loss of connection, the callback will
receive another exception type describing the failure.
:param callable callback: The callback, having the signature:
callback(Channel, Exception reason)
"""
self.callbacks.add(self.channel_number, '_on_channel_close', callback,
False, self) | [
"def",
"add_on_close_callback",
"(",
"self",
",",
"callback",
")",
":",
"self",
".",
"callbacks",
".",
"add",
"(",
"self",
".",
"channel_number",
",",
"'_on_channel_close'",
",",
"callback",
",",
"False",
",",
"self",
")"
] | https://github.com/pika/pika/blob/12dcdf15d0932c388790e0fa990810bfd21b1a32/pika/channel.py#L134-L156 |
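A short usage sketch for add_on_close_callback, assuming pika 1.x's asynchronous API ('localhost' and the callback names are illustrative):

    import pika

    def on_channel_closed(channel, reason):
        # `reason` is an exception instance, e.g. ChannelClosedByBroker when
        # the broker sent Channel.Close, or ChannelClosedByClient after a
        # clean user-initiated close, as the docstring describes.
        print('Channel %s closed: %r' % (channel.channel_number, reason))

    def on_channel_open(channel):
        channel.add_on_close_callback(on_channel_closed)

    def on_connection_open(connection):
        connection.channel(on_open_callback=on_channel_open)

    connection = pika.SelectConnection(
        pika.ConnectionParameters('localhost'),
        on_open_callback=on_connection_open)
    connection.ioloop.start()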
slinderman/pyhawkes | 0df433a40c5e6d8c1dcdb98ffc88fe3a403ac223 | pyhawkes/internals/impulses.py | python | ContinuousTimeImpulseResponses.rvs | (self, size=[]) | Sample random variables from the Dirichlet impulse response distribution.
:param size:
:return: | Sample random variables from the Dirichlet impulse response distribution.
:param size:
:return: | [
"Sample",
"random",
"variables",
"from",
"the",
"Dirichlet",
"impulse",
"response",
"distribution",
".",
":",
"param",
"size",
":",
":",
"return",
":"
] | def rvs(self, size=[]):
"""
Sample random variables from the Dirichlet impulse response distribution.
:param size:
:return:
"""
pass | [
"def",
"rvs",
"(",
"self",
",",
"size",
"=",
"[",
"]",
")",
":",
"pass"
] | https://github.com/slinderman/pyhawkes/blob/0df433a40c5e6d8c1dcdb98ffc88fe3a403ac223/pyhawkes/internals/impulses.py#L374-L380 |
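rvs() above is an unimplemented stub. Purely as a hypothetical sketch — not pyhawkes's implementation, and the K/B/gamma parameterization is an assumption — sampling Dirichlet-distributed impulse-response weights in NumPy could look like:

    import numpy as np

    def sample_impulse_weights(K, B, gamma=1.0, size=1):
        # Each of the K x K impulse responses is a point on the (B-1)-simplex:
        # nonnegative weights over B basis functions that sum to one.
        alpha = gamma * np.ones(B)
        return np.random.dirichlet(alpha, size=(size, K, K))

    g = sample_impulse_weights(K=3, B=5)
    assert g.shape == (1, 3, 3, 5)
    assert np.allclose(g.sum(axis=-1), 1.0)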
sagemath/sage | f9b2db94f675ff16963ccdefba4f1a3393b3fe0d | src/sage/categories/coxeter_groups.py | python | CoxeterGroups.additional_structure | (self) | return None | r"""
Return ``None``.
Indeed, all the structure Coxeter groups have in addition to
groups (simple reflections, ...) is already defined in the
super category.
.. SEEALSO:: :meth:`Category.additional_structure`
EXAMPLES::
sage: CoxeterGroups().additional_structure() | r"""
Return ``None``. | [
"r",
"Return",
"None",
"."
] | def additional_structure(self):
r"""
Return ``None``.
Indeed, all the structure Coxeter groups have in addition to
groups (simple reflections, ...) is already defined in the
super category.
.. SEEALSO:: :meth:`Category.additional_structure`
EXAMPLES::
sage: CoxeterGroups().additional_structure()
"""
return None | [
"def",
"additional_structure",
"(",
"self",
")",
":",
"return",
"None"
] | https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/categories/coxeter_groups.py#L112-L126 |
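For context: Category.additional_structure returns the category itself by default in Sage; overriding it to return None, as here, declares that CoxeterGroups defines no structure of its own, so morphisms need not preserve anything beyond what the super categories already impose. A small check in the document's doctest style (this is an illustration, not part of the source's own EXAMPLES block):

    sage: CoxeterGroups().additional_structure() is None
    True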
|
Azure/azure-cli | 6c1b085a0910c6c2139006fcbd8ade44006eb6dd | src/azure-cli/azure/cli/command_modules/acs/custom.py | python | dcos_install_cli | (cmd, install_location=None, client_version='1.8') | Downloads the dcos command line from Mesosphere | Downloads the dcos command line from Mesosphere | [
"Downloads",
"the",
"dcos",
"command",
"line",
"from",
"Mesosphere"
] | def dcos_install_cli(cmd, install_location=None, client_version='1.8'):
"""
Downloads the dcos command line from Mesosphere
"""
system = platform.system()
if not install_location:
raise CLIError(
"No install location specified and it could not be determined from the current platform '{}'".format(
system))
base_url = 'https://downloads.dcos.io/binaries/cli/{}/x86-64/dcos-{}/{}'
if system == 'Windows':
file_url = base_url.format('windows', client_version, 'dcos.exe')
elif system == 'Linux':
# TODO Support ARM CPU here
file_url = base_url.format('linux', client_version, 'dcos')
elif system == 'Darwin':
file_url = base_url.format('darwin', client_version, 'dcos')
else:
raise CLIError(
'Unsupported platform: {}.'.format(system))
logger.warning('Downloading client to %s', install_location)
try:
_urlretrieve(file_url, install_location)
os.chmod(install_location,
os.stat(install_location).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
except IOError as err:
raise CLIError(
'Connection error while attempting to download client ({})'.format(err)) | [
"def",
"dcos_install_cli",
"(",
"cmd",
",",
"install_location",
"=",
"None",
",",
"client_version",
"=",
"'1.8'",
")",
":",
"system",
"=",
"platform",
".",
"system",
"(",
")",
"if",
"not",
"install_location",
":",
"raise",
"CLIError",
"(",
"\"No install location specified and it could not be determined from the current platform '{}'\"",
".",
"format",
"(",
"system",
")",
")",
"base_url",
"=",
"'https://downloads.dcos.io/binaries/cli/{}/x86-64/dcos-{}/{}'",
"if",
"system",
"==",
"'Windows'",
":",
"file_url",
"=",
"base_url",
".",
"format",
"(",
"'windows'",
",",
"client_version",
",",
"'dcos.exe'",
")",
"elif",
"system",
"==",
"'Linux'",
":",
"# TODO Support ARM CPU here",
"file_url",
"=",
"base_url",
".",
"format",
"(",
"'linux'",
",",
"client_version",
",",
"'dcos'",
")",
"elif",
"system",
"==",
"'Darwin'",
":",
"file_url",
"=",
"base_url",
".",
"format",
"(",
"'darwin'",
",",
"client_version",
",",
"'dcos'",
")",
"else",
":",
"raise",
"CLIError",
"(",
"'Proxy server ({}) does not exist on the cluster.'",
".",
"format",
"(",
"system",
")",
")",
"logger",
".",
"warning",
"(",
"'Downloading client to %s'",
",",
"install_location",
")",
"try",
":",
"_urlretrieve",
"(",
"file_url",
",",
"install_location",
")",
"os",
".",
"chmod",
"(",
"install_location",
",",
"os",
".",
"stat",
"(",
"install_location",
")",
".",
"st_mode",
"|",
"stat",
".",
"S_IXUSR",
"|",
"stat",
".",
"S_IXGRP",
"|",
"stat",
".",
"S_IXOTH",
")",
"except",
"IOError",
"as",
"err",
":",
"raise",
"CLIError",
"(",
"'Connection error while attempting to download client ({})'",
".",
"format",
"(",
"err",
")",
")"
] | https://github.com/Azure/azure-cli/blob/6c1b085a0910c6c2139006fcbd8ade44006eb6dd/src/azure-cli/azure/cli/command_modules/acs/custom.py#L324-L353 |
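A Python 3 standard-library re-sketch of the same download-and-chmod pattern (the URL layout is copied from the function above; this is not azure-cli's implementation, which goes through its internal _urlretrieve helper and CLIError). A dict dispatch replaces the if/elif chain:

    import os
    import platform
    import stat
    import urllib.request

    def download_dcos_cli(install_location, client_version='1.8'):
        base_url = 'https://downloads.dcos.io/binaries/cli/{}/x86-64/dcos-{}/{}'
        names = {'Windows': ('windows', 'dcos.exe'),
                 'Linux': ('linux', 'dcos'),
                 'Darwin': ('darwin', 'dcos')}
        system = platform.system()
        if system not in names:
            raise RuntimeError('Unsupported platform: {}.'.format(system))
        osname, binary = names[system]
        file_url = base_url.format(osname, client_version, binary)
        urllib.request.urlretrieve(file_url, install_location)
        # Add execute bits for user, group, and others, as the original does.
        mode = os.stat(install_location).st_mode
        os.chmod(install_location, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)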
AppScale/gts | 46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9 | AppServer/lib/django-1.4/django/views/defaults.py | python | page_not_found | (request, template_name='404.html') | return http.HttpResponseNotFound(t.render(RequestContext(request, {'request_path': request.path}))) | Default 404 handler.
Templates: `404.html`
Context:
request_path
The path of the requested URL (e.g., '/app/pages/bad_page/') | Default 404 handler. | [
"Default",
"404",
"handler",
"."
] | def page_not_found(request, template_name='404.html'):
"""
Default 404 handler.
Templates: `404.html`
Context:
request_path
The path of the requested URL (e.g., '/app/pages/bad_page/')
"""
t = loader.get_template(template_name) # You need to create a 404.html template.
return http.HttpResponseNotFound(t.render(RequestContext(request, {'request_path': request.path}))) | [
"def",
"page_not_found",
"(",
"request",
",",
"template_name",
"=",
"'404.html'",
")",
":",
"t",
"=",
"loader",
".",
"get_template",
"(",
"template_name",
")",
"# You need to create a 404.html template.",
"return",
"http",
".",
"HttpResponseNotFound",
"(",
"t",
".",
"render",
"(",
"RequestContext",
"(",
"request",
",",
"{",
"'request_path'",
":",
"request",
".",
"path",
"}",
")",
")",
")"
] | https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/lib/django-1.4/django/views/defaults.py#L11-L21 |
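Wiring a handler like this into a Django 1.4-era project takes one assignment in the root URLconf; the module path and template name below are illustrative, and the handler is only consulted when DEBUG is False:

    # urls.py
    handler404 = 'myproject.views.custom_page_not_found'

    # myproject/views.py
    from django.views.defaults import page_not_found

    def custom_page_not_found(request):
        # Delegate to the stock handler but render a project-specific template.
        return page_not_found(request, template_name='errors/404.html')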