Dataset columns: code (string, lengths 70 to 11.9k), docstring (string, lengths 4 to 7.08k), text (string, lengths 128 to 15k)
def predict_proba(self, X):
    y_probas = []
    bce_logits_loss = isinstance(
        self.criterion_, torch.nn.BCEWithLogitsLoss)
    for yp in self.forward_iter(X, training=False):
        yp = yp[0] if isinstance(yp, tuple) else yp
        if bce_logits_loss:
            yp = torch.sigmoid(yp)
        y_probas.append(to_numpy(yp))
    y_proba = np.concatenate(y_probas, 0)
    return y_proba
Where applicable, return probability estimates for samples. If the module's forward method returns multiple outputs as a tuple, it is assumed that the first output contains the relevant information and the other values are ignored. If all values are relevant, consider using :func:`~skorch.NeuralNet.forward` instead. Parameters ---------- X : input data, compatible with skorch.dataset.Dataset By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series * scipy sparse CSR matrices * a dictionary of the former three * a list/tuple of the former three * a Dataset If this doesn't work with your data, you have to pass a ``Dataset`` that can deal with the data. Returns ------- y_proba : numpy ndarray
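The loop above depends on torch and skorch internals; its per-batch logic can be sketched in plain Python (`sigmoid` stands in for `torch.sigmoid`, and the nested lists stand in for `forward_iter` batches -- both names here are illustrative, not part of skorch):

```python
import math

def sigmoid(x):
    # logistic function, standing in for torch.sigmoid on scalars
    return 1.0 / (1.0 + math.exp(-x))

def predict_proba_from_batches(batches, apply_sigmoid):
    # flatten per-batch outputs into one list, applying sigmoid only when
    # the criterion operates on raw logits (as BCEWithLogitsLoss does)
    probas = []
    for batch in batches:
        for logit in batch:
            probas.append(sigmoid(logit) if apply_sigmoid else logit)
    return probas
```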
def common_parent(coords, parent_zoom):
    parent = None
    for coord in coords:
        assert parent_zoom <= coord.zoom
        coord_parent = coord.zoomTo(parent_zoom).container()
        if parent is None:
            parent = coord_parent
        else:
            assert parent == coord_parent
    assert parent is not None
    return parent
Return the common parent for coords Also check that all coords do indeed share the same parent coordinate.
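The coordinate arithmetic depends on the tile library's Coordinate type, but the agreement check at the heart of `common_parent` reduces to a generic pattern, sketched here on plain values (`common_value` is a hypothetical name, not part of the original module):

```python
def common_value(values):
    # the same "all items must agree" pattern used by common_parent,
    # applied to plain values instead of tile coordinates
    common = None
    for v in values:
        if common is None:
            common = v
        else:
            assert common == v, "items do not share a common value"
    assert common is not None, "empty input"
    return common
```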
def _set_auto_config_backup(self, v, load=False): if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=auto_config_backup.auto_config_backup, is_container=, presence=False, yang_name="auto-config-backup", rest_name="auto-config-backup", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u: {u: u, u: u, u: u}}, namespace=, defining_module=, yang_type=, is_config=True) except (TypeError, ValueError): raise ValueError({ : , : "container", : , }) self.__auto_config_backup = t if hasattr(self, ): self._set()
Setter method for auto_config_backup, mapped from YANG variable /vcs/auto_config_backup (container) If this variable is read-only (config: false) in the source YANG file, then _set_auto_config_backup is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_auto_config_backup() directly. YANG Description: Vcs Auto Configuration Backup
def patch_context(self, context): context.__class__ = PatchedContext
Patches the context to add utility functions Sets up the base_url, and the get_url() utility function.
def LDA_discriminants(x, labels):
    try:
        x = np.array(x)
    except Exception:
        raise ValueError()
    eigen_values, eigen_vectors = LDA_base(x, labels)
    # eigenvalues sorted in descending order
    return eigen_values[(-eigen_values).argsort()]
Linear Discriminant Analysis helper for determining how many columns of the data should be reduced. **Args:** * `x` : input matrix (2d array), every row represents a new sample * `labels` : list of labels (iterable), every item should be the label for \ the sample with the corresponding index **Returns:** * `discriminants` : array of eigenvalues sorted in descending order
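The `eigen_values[(-eigen_values).argsort()]` idiom sorts a NumPy array in descending order; on plain Python sequences the same result comes from `sorted(..., reverse=True)`, as this small sketch shows:

```python
def sort_descending(values):
    # plain-Python equivalent of eigen_values[(-eigen_values).argsort()]:
    # negating flips the order argsort would produce, yielding largest-first
    return sorted(values, reverse=True)
```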
def main():
    args = command.parse_args()
    with btrfs.FileSystem(args.dir) as mount:
        fInfo = mount.FS_INFO()
        pprint.pprint(fInfo)
        vols = mount.subvolumes
        for vol in vols:
            print(vol)
    return 0
Main program.
async def handle_frame(self, frame):
    if not isinstance(frame, FrameSetNodeNameConfirmation):
        return False
    self.success = frame.status == SetNodeNameConfirmationStatus.OK
    return True
Handle incoming API frame, return True if this was the expected frame.
def log(message: str, *args: str, category: str=, logger_name: str=):
    global _DEBUG_ENABLED
    if _DEBUG_ENABLED:
        level = logging.INFO
    else:
        level = logging.CRITICAL + 1
    with _create_logger(logger_name, level) as logger:
        log_fn = getattr(logger, category, None)
        if log_fn is None:
            raise ValueError(.format(category))
        log_fn(message, *args)
Log a message to the given logger. If debug has not been enabled, this method will not log a message. Parameters ---------- message: str Message, with or without formatters, to print. args: Any Arguments to use with the message. args must either be a series of arguments that match up with anonymous formatters (i.e. "%<FORMAT-CHARACTER>") in the format string, or a dictionary with key-value pairs that match up with named formatters (i.e. "%(key)s") in the format string. category: str Name of the logger method used to emit the message (e.g. ``info``). logger_name: str Name of logger to which the message should be logged.
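The level choice is the interesting detail: `logging.CRITICAL + 1` sits above every standard level, so a logger set to it emits nothing. A minimal sketch of that gating (`effective_level` is a hypothetical helper, not part of the original module):

```python
import logging

def effective_level(debug_enabled):
    # CRITICAL + 1 is above every standard level, so a logger set to it
    # drops all records; INFO lets informational messages through
    return logging.INFO if debug_enabled else logging.CRITICAL + 1
```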
def add_homogeneous_model(self, magnitude, phase=0): if self.assignments[] is not None: print() magnitude_model = np.ones(self.grid.nr_of_elements) * magnitude phase_model = np.ones(self.grid.nr_of_elements) * phase pid_mag = self.parman.add_data(magnitude_model) pid_pha = self.parman.add_data(phase_model) self.assignments[] = [pid_mag, pid_pha] return pid_mag, pid_pha
Add a homogeneous resistivity model to the tomodir. This is useful for synthetic measurements. Parameters ---------- magnitude : float magnitude [Ohm m] value of the homogeneous model phase : float, optional phase [mrad] value of the homogeneous model Returns ------- pid_mag : int ID value of the parameter set of the magnitude model pid_pha : int ID value of the parameter set of the phase model Note that the parameter sets are automatically registered as the forward models for magnitude and phase values.
def word_probability(self, word, total_words=None):
    if total_words is None:
        total_words = self._word_frequency.total_words
    return self._word_frequency.dictionary[word] / total_words
Calculate the probability of the `word` being the desired, correct word Args: word (str): The word for which the word probability is \ calculated total_words (int): The total number of words to use in the \ calculation; use the default for using the whole word \ frequency Returns: float: The probability that the word is the correct word
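A minimal, self-contained sketch of the frequency table this method divides against (`WordFrequency` here is a stand-in built on `collections.Counter`, not the spell checker's real class):

```python
from collections import Counter

class WordFrequency:
    # stand-in for the spell checker's frequency table
    def __init__(self, corpus):
        self.dictionary = Counter(corpus)
        self.total_words = sum(self.dictionary.values())

def word_probability(freq, word, total_words=None):
    # same arithmetic as the method above, on the stand-in table
    if total_words is None:
        total_words = freq.total_words
    return freq.dictionary[word] / total_words
```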
def save(self, filename, garbage=0, clean=0, deflate=0, incremental=0,
         ascii=0, expand=0, linear=0, pretty=0, decrypt=1):
    if self.isClosed or self.isEncrypted:
        raise ValueError("operation illegal for closed / encrypted doc")
    if type(filename) == str:
        pass
    elif type(filename) == unicode:
        filename = filename.encode()
    else:
        raise TypeError("filename must be a string")
    if filename == self.name and not incremental:
        raise ValueError("save to original must be incremental")
    if self.pageCount < 1:
        raise ValueError("cannot save with zero pages")
    if incremental:
        if self.name != filename or self.stream:
            raise ValueError("incremental needs original file")
    return _fitz.Document_save(self, filename, garbage, clean, deflate,
                               incremental, ascii, expand, linear, pretty,
                               decrypt)
save(self, filename, garbage=0, clean=0, deflate=0, incremental=0, ascii=0, expand=0, linear=0, pretty=0, decrypt=1) -> PyObject *
def export_users(self, format=): pl = self.__basepl(content=, format=format) return self._call_api(pl, )[0]
Export the users of the Project Notes ----- Each user will have the following keys: * ``'firstname'`` : User's first name * ``'lastname'`` : User's last name * ``'email'`` : Email address * ``'username'`` : User's username * ``'expiration'`` : Project access expiration date * ``'data_access_group'`` : data access group ID * ``'data_export'`` : (0=no access, 2=De-Identified, 1=Full Data Set) * ``'forms'`` : a list of dicts with a single key as the form name and value is an integer describing that user's form rights, where: 0=no access, 1=view records/responses and edit records (survey responses are read-only), 2=read only, and 3=edit survey responses, Parameters ---------- format : (``'json'``), ``'csv'``, ``'xml'`` response return format Returns ------- users: list, str list of users dicts when ``'format'='json'``, otherwise a string
def init_celery(app, celery):
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery
Initialise Celery and set up logging :param app: Flask app :param celery: Celery instance
def get_versioning_status(self, headers=None):
    response = self.connection.make_request('GET', self.name,
                                            query_args='versioning',
                                            headers=headers)
    body = response.read()
    boto.log.debug(body)
    if response.status == 200:
        d = {}
        ver = re.search(self.VersionRE, body)
        if ver:
            d['Versioning'] = ver.group(1)
        mfa = re.search(self.MFADeleteRE, body)
        if mfa:
            d['MFADelete'] = mfa.group(1)
        return d
    else:
        raise self.connection.provider.storage_response_error(
            response.status, response.reason, body)
Returns the current status of versioning on the bucket. :rtype: dict :returns: A dictionary containing a key named 'Versioning' that can have a value of either Enabled, Disabled, or Suspended. Also, if MFADelete has ever been enabled on the bucket, the dictionary will contain a key named 'MFADelete' which will have a value of either Enabled or Suspended.
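The body parsing can be exercised without a live S3 connection; this sketch uses a hypothetical response body, and regex patterns analogous to (but not copied from) the `VersionRE` / `MFADeleteRE` class attributes referenced above:

```python
import re

# hypothetical response body in the shape the S3 versioning API returns
BODY = (
    '<VersioningConfiguration>'
    '<Status>Enabled</Status>'
    '<MfaDelete>Disabled</MfaDelete>'
    '</VersioningConfiguration>'
)

# illustrative patterns playing the role of VersionRE / MFADeleteRE
VERSION_RE = r'<Status>([A-Za-z]+)</Status>'
MFA_RE = r'<MfaDelete>([A-Za-z]+)</MfaDelete>'

def parse_versioning(body):
    # same extraction logic as the 200-status branch above
    d = {}
    ver = re.search(VERSION_RE, body)
    if ver:
        d['Versioning'] = ver.group(1)
    mfa = re.search(MFA_RE, body)
    if mfa:
        d['MFADelete'] = mfa.group(1)
    return d
```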
def startLogin():
    flask.session["state"] = oic.oauth2.rndstr(SECRET_KEY_LENGTH)
    flask.session["nonce"] = oic.oauth2.rndstr(SECRET_KEY_LENGTH)
    args = {
        "client_id": app.oidcClient.client_id,
        "response_type": "code",
        "scope": ["openid", "profile"],
        "nonce": flask.session["nonce"],
        "redirect_uri": app.oidcClient.redirect_uris[0],
        "state": flask.session["state"]
    }
    result = app.oidcClient.do_authorization_request(
        request_args=args, state=flask.session["state"])
    return flask.redirect(result.url)
If we are not logged in, this generates the redirect URL to the OIDC provider and returns the redirect response :return: A redirect response to the OIDC provider
def encrypt(self, key, iv="", cek="", **kwargs):
    _msg = as_bytes(self.msg)
    _args = self._dict
    try:
        _args["kid"] = kwargs["kid"]
    except KeyError:
        pass
    jwe = JWEnc(**_args)
    iv = self._generate_iv(self["enc"], iv)
    cek = self._generate_key(self["enc"], cek)
    if isinstance(key, SYMKey):
        try:
            kek = key.key.encode()
        except AttributeError:
            kek = key.key
    elif isinstance(key, bytes):
        kek = key
    else:
        kek = intarr2str(key)
    jek = aes_key_wrap(kek, cek, default_backend())
    _enc = self["enc"]
    _auth_data = jwe.b64_encode_header()
    ctxt, tag, cek = self.enc_setup(_enc, _msg, auth_data=_auth_data,
                                    key=cek, iv=iv)
    return jwe.pack(parts=[jek, iv, ctxt, tag])
Produces a JWE as defined in RFC7516 using symmetric keys :param key: Shared symmetric key :param iv: Initialization vector :param cek: Content master key :param kwargs: Extra keyword arguments, just ignore for now. :return:
def save_file(data_file, data, dry_run=None):
    if dry_run:
        return
    with open(data_file, 'w', encoding='utf-8') as f:
        if sys.version_info > (3, 0):
            f.write(json.dumps(data))
        else:
            f.write(json.dumps(data).decode())
Writes JSON data to data file.
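A Python-3-only sketch of the same helper, round-tripped through a temporary directory (the `'w'` mode and `utf-8` encoding are assumptions, since the original string literals were lost in extraction):

```python
import json
import os
import tempfile

def save_file(data_file, data, dry_run=False):
    # skip the write entirely on dry runs
    if dry_run:
        return
    with open(data_file, 'w', encoding='utf-8') as f:
        f.write(json.dumps(data))

# round-trip through a temporary directory
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'data.json')
    save_file(path, {'a': 1})
    with open(path, encoding='utf-8') as f:
        assert json.load(f) == {'a': 1}
    save_file(os.path.join(tmp, 'skipped.json'), {}, dry_run=True)
    assert not os.path.exists(os.path.join(tmp, 'skipped.json'))
```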
def form_valid(self, form): form.cleaned_data.pop(, None) self.request.session[EMAIL_VALIDATION_STR] = {: form.cleaned_data} return HttpResponseRedirect(reverse())
Pass form data to the confirmation view
def files_with_ext(extension, directory=, recursive=False): if recursive: log.info(.format(directory, extension)) for dirname, subdirnames, filenames in os.walk(directory): for filename in filenames: filepath = os.path.join(dirname, filename) _root, ext = os.path.splitext(filepath) if extension.lower() == ext.lower(): yield filepath else: log.info(.format(directory, extension)) for name in os.listdir(directory): filepath = os.path.join(directory, name) if not os.path.isfile(filepath): continue _root, ext = os.path.splitext(filepath) if extension.lower() == ext.lower(): yield filepath
Generator function that will iterate over all files in the specified directory and return a path to the files which possess a matching extension. You should include the period in your extension, and matching is not case sensitive: '.xml' will also match '.XML' and vice versa. An empty string passed to extension will match extensionless files.
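The non-recursive branch is easy to exercise end to end; this sketch reimplements just that branch (with an explicit `directory` argument, since the original default was elided) and checks the case-insensitive matching:

```python
import os
import tempfile

def files_with_ext(extension, directory, recursive=False):
    # non-recursive branch only: yield files whose extension matches,
    # case-insensitively; '' matches extensionless files
    for name in os.listdir(directory):
        filepath = os.path.join(directory, name)
        if not os.path.isfile(filepath):
            continue
        _root, ext = os.path.splitext(filepath)
        if extension.lower() == ext.lower():
            yield filepath

with tempfile.TemporaryDirectory() as tmp:
    for name in ('a.xml', 'b.XML', 'c.txt'):
        open(os.path.join(tmp, name), 'w').close()
    matches = sorted(os.path.basename(p) for p in files_with_ext('.xml', tmp))
    assert matches == ['a.xml', 'b.XML']
```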
def html(content, **kwargs): if hasattr(content, ): return content elif hasattr(content, ): return content.render().encode() return str(content).encode()
HTML (Hypertext Markup Language)
def PrintExtractionStatusHeader(self, processing_status): self._output_writer.Write( .format(self._source_path)) self._output_writer.Write( .format(self._source_type)) if self._artifact_filters: artifacts_string = .join(self._artifact_filters) self._output_writer.Write(.format( artifacts_string)) if self._filter_file: self._output_writer.Write(.format( self._filter_file)) self._PrintProcessingTime(processing_status) self._PrintTasksStatus(processing_status) self._output_writer.Write()
Prints the extraction status header. Args: processing_status (ProcessingStatus): processing status.
def makeExecutable(fp):
    mode = ((os.stat(fp).st_mode) | 0o555) & 0o7777
    setup_log.info("Adding executable bit to %s (mode is now %o)", fp, mode)
    os.chmod(fp, mode)
Adds the executable bit to the file at filepath `fp`
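The mode arithmetic is worth seeing on a concrete value: OR-ing in `0o555` adds read and execute for user, group, and other, and masking with `0o7777` keeps only the permission bits (`add_executable_bits` is an illustrative name for that arithmetic, isolated from the filesystem calls):

```python
def add_executable_bits(mode):
    # OR in r-x for user/group/other, then mask down to permission bits,
    # the same arithmetic makeExecutable applies to st_mode
    return (mode | 0o555) & 0o7777
```

For example, a file with mode `0o644` ends up with `0o755`.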
def recursive_processing(self, base_dir, target_dir, it): try: file_dir, dirs, files = next(it) except StopIteration: return , [] readme_files = {, , } if readme_files.intersection(files): foutdir = file_dir.replace(base_dir, target_dir) create_dirs(foutdir) this_nbps = [ NotebookProcessor( infile=f, outfile=os.path.join(foutdir, os.path.basename(f)), disable_warnings=self.disable_warnings, preprocess=( (self.preprocess is True or f in self.preprocess) and not (self.dont_preprocess is True or f in self.dont_preprocess)), clear=((self.clear is True or f in self.clear) and not (self.dont_clear is True or f in self.dont_clear)), code_example=self.code_examples.get(f), supplementary_files=self.supplementary_files.get(f), other_supplementary_files=self.osf.get(f), thumbnail_figure=self.thumbnail_figures.get(f), url=self.get_url(f.replace(base_dir, )), **self._nbp_kws) for f in map(lambda f: os.path.join(file_dir, f), filter(self.pattern.match, files))] readme_file = next(iter(readme_files.intersection(files))) else: return , [] labels = OrderedDict() this_label = + foutdir.replace(os.path.sep, ) if this_label.endswith(): this_label = this_label[:-1] for d in dirs: label, nbps = self.recursive_processing( base_dir, target_dir, it) if label: labels[label] = nbps s = ".. _%s:\n\n" % this_label with open(os.path.join(file_dir, readme_file)) as f: s += f.read().rstrip() + s += "\n\n.. toctree::\n\n" s += .join( % os.path.splitext(os.path.basename( nbp.get_out_file()))[0] for nbp in this_nbps) for d in dirs: findex = os.path.join(d, ) if os.path.exists(os.path.join(foutdir, findex)): s += % os.path.splitext(findex)[0] s += for nbp in this_nbps: code_div = nbp.code_div if code_div is not None: s += code_div + else: s += nbp.thumbnail_div + s += "\n.. raw:: html\n\n <div style=></div>\n" for label, nbps in labels.items(): s += % ( label) for nbp in nbps: code_div = nbp.code_div if code_div is not None: s += code_div + else: s += nbp.thumbnail_div + s += "\n.. raw:: html\n\n <div style=></div>\n" s += with open(os.path.join(foutdir, ), ) as f: f.write(s) return this_label, list(chain(this_nbps, *labels.values()))
Method to recursivly process the notebooks in the `base_dir` Parameters ---------- base_dir: str Path to the base example directory (see the `examples_dir` parameter for the :class:`Gallery` class) target_dir: str Path to the output directory for the rst files (see the `gallery_dirs` parameter for the :class:`Gallery` class) it: iterable The iterator over the subdirectories and files in `base_dir` generated by the :func:`os.walk` function
def export(self, nidm_version, export_dir): if self.expl_mean_sq_file is None: fstat_img = nib.load(self.stat_file) fstat = fstat_img.get_data() sigma_sq_img = nib.load(self.sigma_sq_file) sigma_sq = sigma_sq_img.get_data() expl_mean_sq = nib.Nifti1Image( fstat*sigma_sq, fstat_img.get_qform()) self.filename = ("ContrastExplainedMeanSquareMap" + self.num + ".nii.gz") self.expl_mean_sq_file = os.path.join( export_dir, self.filename) nib.save(expl_mean_sq, self.expl_mean_sq_file) self.file = NIDMFile(self.id, self.expl_mean_sq_file, filename=self.filename, sha=self.sha, fmt=self.fmt) path, filename = os.path.split(self.expl_mean_sq_file) self.add_attributes(( (PROV[], self.type), (NIDM_IN_COORDINATE_SPACE, self.coord_space.id), (PROV[], self.label)))
Create prov graph.
### Input: Create prov graph. ### Response: def export(self, nidm_version, export_dir): if self.expl_mean_sq_file is None: fstat_img = nib.load(self.stat_file) fstat = fstat_img.get_data() sigma_sq_img = nib.load(self.sigma_sq_file) sigma_sq = sigma_sq_img.get_data() expl_mean_sq = nib.Nifti1Image( fstat*sigma_sq, fstat_img.get_qform()) self.filename = ("ContrastExplainedMeanSquareMap" + self.num + ".nii.gz") self.expl_mean_sq_file = os.path.join( export_dir, self.filename) nib.save(expl_mean_sq, self.expl_mean_sq_file) self.file = NIDMFile(self.id, self.expl_mean_sq_file, filename=self.filename, sha=self.sha, fmt=self.fmt) path, filename = os.path.split(self.expl_mean_sq_file) self.add_attributes(( (PROV[], self.type), (NIDM_IN_COORDINATE_SPACE, self.coord_space.id), (PROV[], self.label)))
def step(self, key, chain): if chain == "sending": self.__previous_sending_chain_length = self.sending_chain_length self.__sending_chain = self.__SendingChain(key) if chain == "receiving": self.__receiving_chain = self.__ReceivingChain(key)
Perform a ratchet step, replacing one of the internally managed chains with a new one. :param key: A bytes-like object encoding the key to initialize the replacement chain with. :param chain: The chain to replace. This parameter must be one of the two strings "sending" and "receiving".
### Input: Perform a ratchet step, replacing one of the internally managed chains with a new one. :param key: A bytes-like object encoding the key to initialize the replacement chain with. :param chain: The chain to replace. This parameter must be one of the two strings "sending" and "receiving". ### Response: def step(self, key, chain): if chain == "sending": self.__previous_sending_chain_length = self.sending_chain_length self.__sending_chain = self.__SendingChain(key) if chain == "receiving": self.__receiving_chain = self.__ReceivingChain(key)
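The private chain classes used above (`__SendingChain`, `__ReceivingChain`) are not shown in this record. A minimal sketch of the same step pattern, assuming a hash-based KDF and simplified chain objects (all names here are illustrative, not the library's API):

```python
import hashlib

class Chain:
    """Minimal symmetric-key chain: each step derives a new chain key and a
    message key from the current chain key (a simplified stand-in for the
    library's sending/receiving chains)."""
    def __init__(self, key):
        self._key = key
        self.length = 0

    def next(self):
        # Derive a message key and the next chain key via a hash-based KDF.
        message_key = hashlib.sha256(self._key + b"\x01").digest()
        self._key = hashlib.sha256(self._key + b"\x02").digest()
        self.length += 1
        return message_key

class Ratchet:
    def __init__(self, sending_key, receiving_key):
        self._sending = Chain(sending_key)
        self._receiving = Chain(receiving_key)
        self.previous_sending_chain_length = 0

    def step(self, key, chain):
        # Mirror of the method above: replace one chain wholesale,
        # remembering how long the old sending chain was.
        if chain == "sending":
            self.previous_sending_chain_length = self._sending.length
            self._sending = Chain(key)
        elif chain == "receiving":
            self._receiving = Chain(key)
        else:
            raise ValueError("chain must be 'sending' or 'receiving'")
```

Remembering the previous sending-chain length is what lets the receiving side detect skipped messages after a ratchet step.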
def _precesion(date): t = date.change_scale().julian_century zeta = (2306.2181 * t + 0.30188 * t ** 2 + 0.017998 * t ** 3) / 3600. theta = (2004.3109 * t - 0.42665 * t ** 2 - 0.041833 * t ** 3) / 3600. z = (2306.2181 * t + 1.09468 * t ** 2 + 0.018203 * t ** 3) / 3600. return zeta, theta, z
Precession in degrees
### Input: Precession in degrees ### Response: def _precesion(date): t = date.change_scale().julian_century zeta = (2306.2181 * t + 0.30188 * t ** 2 + 0.017998 * t ** 3) / 3600. theta = (2004.3109 * t - 0.42665 * t ** 2 - 0.041833 * t ** 3) / 3600. z = (2306.2181 * t + 1.09468 * t ** 2 + 0.018203 * t ** 3) / 3600. return zeta, theta, z
def compact_name(self, hashsize=6): s = self.compact_name_core(hashsize, t_max=True) s += "_ID%d-%d" % (self.ID, self.EID) return s
Compact representation of all simulation parameters
### Input: Compact representation of all simulation parameters ### Response: def compact_name(self, hashsize=6): s = self.compact_name_core(hashsize, t_max=True) s += "_ID%d-%d" % (self.ID, self.EID) return s
def move_leadership(self, partition, new_leader): new_state = copy(self) source = new_state.replicas[partition][0] new_leader_index = self.replicas[partition].index(new_leader) new_state.replicas = tuple_alter( self.replicas, (partition, lambda replicas: tuple_replace( replicas, (0, replicas[new_leader_index]), (new_leader_index, replicas[0]), )), ) new_state.pending_partitions = self.pending_partitions + (partition, ) new_state.broker_leader_counts = tuple_alter( self.broker_leader_counts, (source, lambda leader_count: leader_count - 1), (new_leader, lambda leader_count: leader_count + 1), ) partition_weight = self.partition_weights[partition] new_state.broker_leader_weights = tuple_alter( self.broker_leader_weights, (source, lambda leader_weight: leader_weight - partition_weight), (new_leader, lambda leader_weight: leader_weight + partition_weight), ) new_state.leader_movement_count += 1 return new_state
Return a new state that is the result of changing the leadership of a single partition. :param partition: The partition index of the partition to change the leadership of. :param new_leader: The broker index of the new leader replica.
### Input: Return a new state that is the result of changing the leadership of a single partition. :param partition: The partition index of the partition to change the leadership of. :param new_leader: The broker index of the new leader replica. ### Response: def move_leadership(self, partition, new_leader): new_state = copy(self) source = new_state.replicas[partition][0] new_leader_index = self.replicas[partition].index(new_leader) new_state.replicas = tuple_alter( self.replicas, (partition, lambda replicas: tuple_replace( replicas, (0, replicas[new_leader_index]), (new_leader_index, replicas[0]), )), ) new_state.pending_partitions = self.pending_partitions + (partition, ) new_state.broker_leader_counts = tuple_alter( self.broker_leader_counts, (source, lambda leader_count: leader_count - 1), (new_leader, lambda leader_count: leader_count + 1), ) partition_weight = self.partition_weights[partition] new_state.broker_leader_weights = tuple_alter( self.broker_leader_weights, (source, lambda leader_weight: leader_weight - partition_weight), (new_leader, lambda leader_weight: leader_weight + partition_weight), ) new_state.leader_movement_count += 1 return new_state
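`tuple_alter` and `tuple_replace` are helpers defined elsewhere in the codebase; a plausible reimplementation (a sketch, not the project's actual code) shows how the immutable-tuple state update works:

```python
def tuple_replace(tup, *pairs):
    """Return a copy of ``tup`` with each (index, value) pair applied."""
    items = list(tup)
    for index, value in pairs:
        items[index] = value
    return tuple(items)

def tuple_alter(tup, *pairs):
    """Return a copy of ``tup`` with each (index, fn) pair applied, where
    fn maps the old element at that index to the new one."""
    items = list(tup)
    for index, fn in pairs:
        items[index] = fn(items[index])
    return tuple(items)
```

In `move_leadership`, `tuple_replace` swaps positions 0 and `new_leader_index` in a partition's replica tuple (the leader is by convention the replica at index 0), while `tuple_alter` decrements and increments the per-broker leader counts and weights without mutating the old state.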
def extract_cookies(self, response, request): _debug("extract_cookies: %s", response.info()) self._cookies_lock.acquire() try: self._policy._now = self._now = int(time.time()) for cookie in self.make_cookies(response, request): if self._policy.set_ok(cookie, request): _debug(" setting cookie: %s", cookie) self.set_cookie(cookie) finally: self._cookies_lock.release()
Extract cookies from response, where allowable given the request.
### Input: Extract cookies from response, where allowable given the request. ### Response: def extract_cookies(self, response, request): _debug("extract_cookies: %s", response.info()) self._cookies_lock.acquire() try: self._policy._now = self._now = int(time.time()) for cookie in self.make_cookies(response, request): if self._policy.set_ok(cookie, request): _debug(" setting cookie: %s", cookie) self.set_cookie(cookie) finally: self._cookies_lock.release()
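This method mirrors the standard library's `http.cookiejar.CookieJar.extract_cookies`. A stdlib sketch of the same flow against a stubbed response (the `FakeResponse` class is an illustration, providing just enough of the response protocol the cookie jar expects):

```python
import http.cookiejar
import urllib.request
from email.message import Message

class FakeResponse:
    """Just enough of a urllib response for CookieJar.extract_cookies:
    info() returning the headers, plus the originating URL."""
    def __init__(self, url, set_cookie):
        self._url = url
        self._headers = Message()
        self._headers["Set-Cookie"] = set_cookie

    def info(self):
        return self._headers

    def geturl(self):
        return self._url

jar = http.cookiejar.CookieJar()
request = urllib.request.Request("http://example.com/")
response = FakeResponse("http://example.com/", "session=abc123; Path=/")
# The policy decides per-cookie whether storing it is allowed,
# exactly as in the method above.
jar.extract_cookies(response, request)
```

`urllib.request.Request` already satisfies the request-side protocol (`get_full_url`, `unverifiable`, `origin_req_host`, ...), so only the response needs stubbing.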
def current_branch(cwd, user=None, password=None, ignore_retcode=False, output_encoding=None): cwd = _expand_path(cwd, user) command = [, , , ] return _git_run(command, cwd=cwd, user=user, password=password, ignore_retcode=ignore_retcode, output_encoding=output_encoding)[]
Returns the current branch name of a local checkout. If HEAD is detached, return the SHA1 of the revision which is currently checked out. cwd The path to the git checkout user User under which to run the git command. By default, the command is run by the user under which the minion is running. password Windows only. Required when specifying ``user``. This parameter will be ignored on non-Windows platforms. .. versionadded:: 2016.3.4 ignore_retcode : False If ``True``, do not log an error to the minion log if the git command returns a nonzero exit status. .. versionadded:: 2015.8.0 output_encoding Use this option to specify which encoding to use to decode the output from any git commands which are run. This should not be needed in most cases. .. note:: This should only be needed if the files in the repository were created with filenames using an encoding other than UTF-8 to handle Unicode characters. .. versionadded:: 2018.3.1 CLI Example: .. code-block:: bash salt myminion git.current_branch /path/to/repo
### Input: Returns the current branch name of a local checkout. If HEAD is detached, return the SHA1 of the revision which is currently checked out. cwd The path to the git checkout user User under which to run the git command. By default, the command is run by the user under which the minion is running. password Windows only. Required when specifying ``user``. This parameter will be ignored on non-Windows platforms. .. versionadded:: 2016.3.4 ignore_retcode : False If ``True``, do not log an error to the minion log if the git command returns a nonzero exit status. .. versionadded:: 2015.8.0 output_encoding Use this option to specify which encoding to use to decode the output from any git commands which are run. This should not be needed in most cases. .. note:: This should only be needed if the files in the repository were created with filenames using an encoding other than UTF-8 to handle Unicode characters. .. versionadded:: 2018.3.1 CLI Example: .. code-block:: bash salt myminion git.current_branch /path/to/repo ### Response: def current_branch(cwd, user=None, password=None, ignore_retcode=False, output_encoding=None): cwd = _expand_path(cwd, user) command = [, , , ] return _git_run(command, cwd=cwd, user=user, password=password, ignore_retcode=ignore_retcode, output_encoding=output_encoding)[]
def _setContent(self): kwstring = tCheck = % self.inputName bindArgs = if self.encodingStyle is not None: bindArgs = %self.encodingStyle if self.useWSA: wsactionIn = % self.inputAction wsactionOut = % self.outputAction bindArgs += responseArgs = else: wsactionIn = wsactionOut = responseArgs = bindArgs += if self.do_extended: inputName = self.getOperation().getInputMessage().name wrap_str = "" partsList = self.getOperation().getInputMessage().parts.values() try: subNames = GetPartsSubNames(partsList, self._wsdl) except TypeError, ex: raise Wsdl2PythonError,\ "Extended generation failure: only supports doc/lit, "\ +"and all element attributes (<message><part element="\ +"\"my:GED\"></message>) must refer to single global "\ +"element declaration with complexType content. "\ +"\n\n**** TRY WITHOUT EXTENDED ****\n" args = [] for pa in subNames: args += pa for arg in args: wrap_str += "%srequest.%s = %s\n" % (ID2, self.getAttributeName(arg), self.mangle(arg)) argsStr = ",".join(args) if len(argsStr) > 1: argsStr = ", " + argsStr method = [ % (ID1, self.getOperation().getInputMessage()), % (ID1, self.name, argsStr), % (ID2, self.inputName), % (wrap_str), % (ID2, kwstring), % (ID2, wsactionIn), \ %(ID2, self.soapaction, bindArgs), ] elif self.soap_input_headers: method = [ % (ID1, self.name), % (ID1, self.name), % (ID2, tCheck), %(ID3, ), % (ID2, wsactionIn), % (ID2), \ %(ID2, self.soapaction, bindArgs), ] else: method = [ % (ID1, self.name), % (ID1, self.name), % (ID2, tCheck), %(ID3, ), % (ID2, wsactionIn), \ %(ID2, self.soapaction, bindArgs), ] if not self.outputName: method.append( %(ID2,)) method.append( % (ID2,)) self.writeArray(method) return response = [ % (ID2, wsactionOut),] if self.isRPC() and not self.isLiteral(): response.append(\ %( ID2, self.outputName, self.outputName) ) response.append(\ %( ID2, responseArgs) ) else: response.append(\ %( ID2, self.outputName, responseArgs) ) if self.soap_output_headers: sh = for shb in self.soap_output_headers: 
shb.message shb.part try: msg = self._wsdl.messages[shb.message] part = msg.parts[shb.part] if part.element is not None: sh += %str(part.element) else: warnings.warn( %str(msg)) except: raise WSDLFormatError( + %( shb.message, shb.part) ) sh += if len(sh) > 2: response.append(\ %(ID2, sh) ) if self.outputSimpleType: response.append( %(ID2, self.outputName)) else: if self.do_extended: partsList = self.getOperation().getOutputMessage().parts.values() subNames = GetPartsSubNames(partsList, self._wsdl) args = [] for pa in subNames: args += pa for arg in args: response.append( % (ID2, self.mangle(arg), self.getAttributeName(arg)) ) margs = ",".join(args) response.append("%sreturn %s" % (ID2, margs) ) else: response.append( %ID2) method += response self.writeArray(method)
create string representation of operation.
### Input: create string representation of operation. ### Response: def _setContent(self): kwstring = tCheck = % self.inputName bindArgs = if self.encodingStyle is not None: bindArgs = %self.encodingStyle if self.useWSA: wsactionIn = % self.inputAction wsactionOut = % self.outputAction bindArgs += responseArgs = else: wsactionIn = wsactionOut = responseArgs = bindArgs += if self.do_extended: inputName = self.getOperation().getInputMessage().name wrap_str = "" partsList = self.getOperation().getInputMessage().parts.values() try: subNames = GetPartsSubNames(partsList, self._wsdl) except TypeError, ex: raise Wsdl2PythonError,\ "Extended generation failure: only supports doc/lit, "\ +"and all element attributes (<message><part element="\ +"\"my:GED\"></message>) must refer to single global "\ +"element declaration with complexType content. "\ +"\n\n**** TRY WITHOUT EXTENDED ****\n" args = [] for pa in subNames: args += pa for arg in args: wrap_str += "%srequest.%s = %s\n" % (ID2, self.getAttributeName(arg), self.mangle(arg)) argsStr = ",".join(args) if len(argsStr) > 1: argsStr = ", " + argsStr method = [ % (ID1, self.getOperation().getInputMessage()), % (ID1, self.name, argsStr), % (ID2, self.inputName), % (wrap_str), % (ID2, kwstring), % (ID2, wsactionIn), \ %(ID2, self.soapaction, bindArgs), ] elif self.soap_input_headers: method = [ % (ID1, self.name), % (ID1, self.name), % (ID2, tCheck), %(ID3, ), % (ID2, wsactionIn), % (ID2), \ %(ID2, self.soapaction, bindArgs), ] else: method = [ % (ID1, self.name), % (ID1, self.name), % (ID2, tCheck), %(ID3, ), % (ID2, wsactionIn), \ %(ID2, self.soapaction, bindArgs), ] if not self.outputName: method.append( %(ID2,)) method.append( % (ID2,)) self.writeArray(method) return response = [ % (ID2, wsactionOut),] if self.isRPC() and not self.isLiteral(): response.append(\ %( ID2, self.outputName, self.outputName) ) response.append(\ %( ID2, responseArgs) ) else: response.append(\ %( ID2, self.outputName, responseArgs) ) if 
self.soap_output_headers: sh = for shb in self.soap_output_headers: shb.message shb.part try: msg = self._wsdl.messages[shb.message] part = msg.parts[shb.part] if part.element is not None: sh += %str(part.element) else: warnings.warn( %str(msg)) except: raise WSDLFormatError( + %( shb.message, shb.part) ) sh += if len(sh) > 2: response.append(\ %(ID2, sh) ) if self.outputSimpleType: response.append( %(ID2, self.outputName)) else: if self.do_extended: partsList = self.getOperation().getOutputMessage().parts.values() subNames = GetPartsSubNames(partsList, self._wsdl) args = [] for pa in subNames: args += pa for arg in args: response.append( % (ID2, self.mangle(arg), self.getAttributeName(arg)) ) margs = ",".join(args) response.append("%sreturn %s" % (ID2, margs) ) else: response.append( %ID2) method += response self.writeArray(method)
def _interval_to_double_bound_points(xarray, yarray): xarray1 = np.array([x.left for x in xarray]) xarray2 = np.array([x.right for x in xarray]) xarray = list(itertools.chain.from_iterable(zip(xarray1, xarray2))) yarray = list(itertools.chain.from_iterable(zip(yarray, yarray))) return xarray, yarray
Helper function to deal with an xarray consisting of pd.Intervals. Each interval is replaced with both of its boundaries, i.e. the length of xarray doubles. yarray is modified so it matches the new shape of xarray.
### Input: Helper function to deal with a xarray consisting of pd.Intervals. Each interval is replaced with both boundaries. I.e. the length of xarray doubles. yarray is modified so it matches the new shape of xarray. ### Response: def _interval_to_double_bound_points(xarray, yarray): xarray1 = np.array([x.left for x in xarray]) xarray2 = np.array([x.right for x in xarray]) xarray = list(itertools.chain.from_iterable(zip(xarray1, xarray2))) yarray = list(itertools.chain.from_iterable(zip(yarray, yarray))) return xarray, yarray
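The same doubling can be shown without pandas; a namedtuple with `left`/`right` attributes stands in for `pd.Interval` here:

```python
import itertools
from collections import namedtuple

# Stand-in for pd.Interval: anything with .left and .right works.
Interval = namedtuple("Interval", ["left", "right"])

def interval_to_double_bound_points(xarray, yarray):
    """Replace each interval with its two boundaries and duplicate
    each y value so the arrays stay aligned."""
    lefts = [x.left for x in xarray]
    rights = [x.right for x in xarray]
    xs = list(itertools.chain.from_iterable(zip(lefts, rights)))
    ys = list(itertools.chain.from_iterable(zip(yarray, yarray)))
    return xs, ys
```

For intervals (0, 1] and (1, 2] with y values 10 and 20, this yields x = [0, 1, 1, 2] and y = [10, 10, 20, 20] — the step-function shape the plotting code needs.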
def fetch_csv_dataframe( download_url, filename=None, subdir=None, **pandas_kwargs): path = fetch_file( download_url=download_url, filename=filename, decompress=True, subdir=subdir) return pd.read_csv(path, **pandas_kwargs)
Download a remote file from `download_url` and save it locally as `filename`. Load that local file as a CSV into Pandas using extra keyword arguments such as sep='\t'.
### Input: Download a remote file from `download_url` and save it locally as `filename`. Load that local file as a CSV into Pandas using extra keyword arguments such as sep='\t'. ### Response: def fetch_csv_dataframe( download_url, filename=None, subdir=None, **pandas_kwargs): path = fetch_file( download_url=download_url, filename=filename, decompress=True, subdir=subdir) return pd.read_csv(path, **pandas_kwargs)
def broadcast_change(): _, res = win32gui.SendMessageTimeout( win32con.HWND_BROADCAST, win32con.WM_SETTINGCHANGE, 0, 0, win32con.SMTO_ABORTIFHUNG, 5000) return not bool(res)
Refresh the Windows environment. .. note:: This will only affect new processes and windows. Services will not see the change until the system restarts. Returns: bool: True if successful, otherwise False Usage: .. code-block:: python import salt.utils.win_reg winreg.broadcast_change()
### Input: Refresh the Windows environment. .. note:: This will only affect new processes and windows. Services will not see the change until the system restarts. Returns: bool: True if successful, otherwise False Usage: .. code-block:: python import salt.utils.win_reg winreg.broadcast_change() ### Response: def broadcast_change(): _, res = win32gui.SendMessageTimeout( win32con.HWND_BROADCAST, win32con.WM_SETTINGCHANGE, 0, 0, win32con.SMTO_ABORTIFHUNG, 5000) return not bool(res)
def from_user_config(cls): global _USER_CONFIG_TASKMANAGER if _USER_CONFIG_TASKMANAGER is not None: return _USER_CONFIG_TASKMANAGER path = os.path.join(os.getcwd(), cls.YAML_FILE) if not os.path.exists(path): path = os.path.join(cls.USER_CONFIG_DIR, cls.YAML_FILE) if not os.path.exists(path): raise RuntimeError(colored( "\nCannot locate %s neither in current directory nor in %s\n" "!!! PLEASE READ THIS: !!!\n" "To use AbiPy to run jobs this file must be present\n" "It provides a description of the cluster/computer you are running on\n" "Examples are provided in abipy/data/managers." % (cls.YAML_FILE, path), color="red")) _USER_CONFIG_TASKMANAGER = cls.from_file(path) return _USER_CONFIG_TASKMANAGER
Initialize the :class:`TaskManager` from the YAML file 'manager.yaml'. Search first in the working directory and then in the AbiPy configuration directory. Raises: RuntimeError if file is not found.
### Input: Initialize the :class:`TaskManager` from the YAML file 'manager.yaml'. Search first in the working directory and then in the AbiPy configuration directory. Raises: RuntimeError if file is not found. ### Response: def from_user_config(cls): global _USER_CONFIG_TASKMANAGER if _USER_CONFIG_TASKMANAGER is not None: return _USER_CONFIG_TASKMANAGER path = os.path.join(os.getcwd(), cls.YAML_FILE) if not os.path.exists(path): path = os.path.join(cls.USER_CONFIG_DIR, cls.YAML_FILE) if not os.path.exists(path): raise RuntimeError(colored( "\nCannot locate %s neither in current directory nor in %s\n" "!!! PLEASE READ THIS: !!!\n" "To use AbiPy to run jobs this file must be present\n" "It provides a description of the cluster/computer you are running on\n" "Examples are provided in abipy/data/managers." % (cls.YAML_FILE, path), color="red")) _USER_CONFIG_TASKMANAGER = cls.from_file(path) return _USER_CONFIG_TASKMANAGER
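The lookup pattern here (module-level cache, current directory first, then a per-user config directory) can be sketched standalone; the directory path and function name below are illustrative, not AbiPy's API:

```python
import os

_CACHED_CONFIG = None

def load_user_config(yaml_file="manager.yaml",
                     user_config_dir=os.path.expanduser("~/.abinit/abipy")):
    """Find the config file, searching cwd before the user config dir.
    The user_config_dir default is an assumption for illustration."""
    global _CACHED_CONFIG
    if _CACHED_CONFIG is not None:
        return _CACHED_CONFIG
    for dirname in (os.getcwd(), user_config_dir):
        path = os.path.join(dirname, yaml_file)
        if os.path.exists(path):
            _CACHED_CONFIG = path  # a real implementation would parse the YAML here
            return _CACHED_CONFIG
    raise RuntimeError("Cannot locate %s in %s or %s"
                       % (yaml_file, os.getcwd(), user_config_dir))
```

Caching in a module-level global means every subsequent call returns the same manager, which is why the original raises only on the very first failed lookup.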
def add_neuroml_components(self, nml_doc): is_nest = False print_v("Adding NeuroML cells to: %s"%nml_doc.id) for c in self.pop_comp_info: info = self.pop_comp_info[c] model_template = info[] if in info else \ (info[][] if in info else info[]) print_v(" - Adding %s: %s"%(model_template, info)) if info[] == and model_template == : is_nest = True from neuroml import IF_curr_alpha pynn0 = IF_curr_alpha(id=c, cm=info[][]/1000.0, i_offset="0", tau_m=info[][], tau_refrac=info[][], tau_syn_E="1", tau_syn_I="1", v_init=, v_reset=info[][], v_rest=info[][], v_thresh=info[][]) nml_doc.IF_curr_alpha.append(pynn0) elif info[] == and model_template == : contents = %(c, info[][]*1000, info[][]*1000) cell_file_name = %c cell_file = open(cell_file_name,) cell_file.write(contents) cell_file.close() self.nml_includes.append(cell_file_name) self.nml_includes.append() else: from neuroml import IafRefCell IafRefCell0 = IafRefCell(id=DUMMY_CELL, C=".2 nF", thresh = "1mV", reset="0mV", refract="3ms", leak_conductance="1.2 nS", leak_reversal="0mV") print_v(" - Adding: %s"%IafRefCell0) nml_doc.iaf_ref_cells.append(IafRefCell0) print_v("Adding NeuroML synapses to: %s"%nml_doc.id) for s in self.syn_comp_info: dyn_params = self.syn_comp_info[s][] print_v(" - Syn: %s: %s"%(s, dyn_params)) if in dyn_params and dyn_params[] == : from neuroml import ExpTwoSynapse syn = ExpTwoSynapse(id=s, gbase="1nS", erev="%smV"%dyn_params[], tau_rise="%sms"%dyn_params[], tau_decay="%sms"%dyn_params[]) nml_doc.exp_two_synapses.append(syn) elif in dyn_params and dyn_params[] == : contents = %(s) syn_file_name = %s syn_file = open(syn_file_name,) syn_file.write(contents) syn_file.close() self.nml_includes.append(syn_file_name) else: from neuroml import AlphaCurrSynapse pynnSynn0 = AlphaCurrSynapse(id=s, tau_syn="2") nml_doc.alpha_curr_synapses.append(pynnSynn0) print_v("Adding NeuroML inputs to: %s"%nml_doc.id) for input in self.input_comp_info: for input_type in self.input_comp_info[input]: if input_type == : for 
comp_id in self.input_comp_info[input][input_type]: info = self.input_comp_info[input][input_type][comp_id] print_v("Adding input %s: %s"%(comp_id, info.keys())) nest_syn = _get_default_nest_syn(nml_doc) from neuroml import TimedSynapticInput, Spike tsi = TimedSynapticInput(id=comp_id, synapse=nest_syn.id, spike_target="./%s"%nest_syn.id) nml_doc.timed_synaptic_inputs.append(tsi) for ti in range(len(info[])): tsi.spikes.append(Spike(id=ti, time=%info[][ti])) elif input_type == : from neuroml import PulseGenerator for comp_id in self.input_comp_info[input][input_type]: info = self.input_comp_info[input][input_type][comp_id] amp_template = if is_nest else pg = PulseGenerator(id=comp_id,delay=%info[],duration=%info[],amplitude=amp_template%info[]) nml_doc.pulse_generators.append(pg)
Based on cell & synapse properties found, create the corresponding NeuroML components
### Input: Based on cell & synapse properties found, create the corresponding NeuroML components ### Response: def add_neuroml_components(self, nml_doc): is_nest = False print_v("Adding NeuroML cells to: %s"%nml_doc.id) for c in self.pop_comp_info: info = self.pop_comp_info[c] model_template = info[] if in info else \ (info[][] if in info else info[]) print_v(" - Adding %s: %s"%(model_template, info)) if info[] == and model_template == : is_nest = True from neuroml import IF_curr_alpha pynn0 = IF_curr_alpha(id=c, cm=info[][]/1000.0, i_offset="0", tau_m=info[][], tau_refrac=info[][], tau_syn_E="1", tau_syn_I="1", v_init=, v_reset=info[][], v_rest=info[][], v_thresh=info[][]) nml_doc.IF_curr_alpha.append(pynn0) elif info[] == and model_template == : contents = %(c, info[][]*1000, info[][]*1000) cell_file_name = %c cell_file = open(cell_file_name,) cell_file.write(contents) cell_file.close() self.nml_includes.append(cell_file_name) self.nml_includes.append() else: from neuroml import IafRefCell IafRefCell0 = IafRefCell(id=DUMMY_CELL, C=".2 nF", thresh = "1mV", reset="0mV", refract="3ms", leak_conductance="1.2 nS", leak_reversal="0mV") print_v(" - Adding: %s"%IafRefCell0) nml_doc.iaf_ref_cells.append(IafRefCell0) print_v("Adding NeuroML synapses to: %s"%nml_doc.id) for s in self.syn_comp_info: dyn_params = self.syn_comp_info[s][] print_v(" - Syn: %s: %s"%(s, dyn_params)) if in dyn_params and dyn_params[] == : from neuroml import ExpTwoSynapse syn = ExpTwoSynapse(id=s, gbase="1nS", erev="%smV"%dyn_params[], tau_rise="%sms"%dyn_params[], tau_decay="%sms"%dyn_params[]) nml_doc.exp_two_synapses.append(syn) elif in dyn_params and dyn_params[] == : contents = %(s) syn_file_name = %s syn_file = open(syn_file_name,) syn_file.write(contents) syn_file.close() self.nml_includes.append(syn_file_name) else: from neuroml import AlphaCurrSynapse pynnSynn0 = AlphaCurrSynapse(id=s, tau_syn="2") nml_doc.alpha_curr_synapses.append(pynnSynn0) print_v("Adding NeuroML inputs to: 
%s"%nml_doc.id) for input in self.input_comp_info: for input_type in self.input_comp_info[input]: if input_type == : for comp_id in self.input_comp_info[input][input_type]: info = self.input_comp_info[input][input_type][comp_id] print_v("Adding input %s: %s"%(comp_id, info.keys())) nest_syn = _get_default_nest_syn(nml_doc) from neuroml import TimedSynapticInput, Spike tsi = TimedSynapticInput(id=comp_id, synapse=nest_syn.id, spike_target="./%s"%nest_syn.id) nml_doc.timed_synaptic_inputs.append(tsi) for ti in range(len(info[])): tsi.spikes.append(Spike(id=ti, time=%info[][ti])) elif input_type == : from neuroml import PulseGenerator for comp_id in self.input_comp_info[input][input_type]: info = self.input_comp_info[input][input_type][comp_id] amp_template = if is_nest else pg = PulseGenerator(id=comp_id,delay=%info[],duration=%info[],amplitude=amp_template%info[]) nml_doc.pulse_generators.append(pg)
def remove_from_gallery(self): url = self._imgur._base_url + "/3/gallery/{0}".format(self.id) self._imgur._send_request(url, needs_auth=True, method=) if isinstance(self, Image): item = self._imgur.get_image(self.id) else: item = self._imgur.get_album(self.id) _change_object(self, item) return self
Remove this image from the gallery.
### Input: Remove this image from the gallery. ### Response: def remove_from_gallery(self): url = self._imgur._base_url + "/3/gallery/{0}".format(self.id) self._imgur._send_request(url, needs_auth=True, method=) if isinstance(self, Image): item = self._imgur.get_image(self.id) else: item = self._imgur.get_album(self.id) _change_object(self, item) return self
def html_entities_to_unicode(text, space_padding=False, safe_only=False): &amp;& def convert_entities(match): x = match.group(1) if safe_only and x not in ENTITIES_THAT_ARE_SAFE_TO_STRING_PAD: return u % x if x in name2codepoint: return unichr(name2codepoint[x]) elif x in XML_ENTITIES_TO_SPECIAL_CHARS: return XML_ENTITIES_TO_SPECIAL_CHARS[x] elif len(x) > 0 and x[0] == : if len(x) > 1 and x[1] == : return unichr(int(x[2:], 16)) else: return unichr(int(x[1:])) else: return u % x def convert_to_padded_entitites(match): converted_string = convert_entities(match) num_spaces_needed = len(match.group(0)) - len(converted_string) assert num_spaces_needed >= 0, \ % (converted_string, match.group(0)) if space_padding: return tags.sub( convert_to_padded_entitites, text) else: return tags.sub( convert_entities, text)
Convert any HTML, XML, or numeric entities in the attribute values. For example '&amp;' becomes '&'. This is adapted from BeautifulSoup, which should be able to do the same thing when called like this --- but it fails to convert everything due to a bug. text = unicode(BeautifulStoneSoup(text, convertEntities=BeautifulStoneSoup.XML_ENTITIES))
### Input: Convert any HTML, XML, or numeric entities in the attribute values. For example '&amp;' becomes '&'. This is adapted from BeautifulSoup, which should be able to do the same thing when called like this --- but this fails to convert everything for some bug. text = unicode(BeautifulStoneSoup(text, convertEntities=BeautifulStoneSoup.XML_ENTITIES)) ### Response: def html_entities_to_unicode(text, space_padding=False, safe_only=False): &amp;& def convert_entities(match): x = match.group(1) if safe_only and x not in ENTITIES_THAT_ARE_SAFE_TO_STRING_PAD: return u % x if x in name2codepoint: return unichr(name2codepoint[x]) elif x in XML_ENTITIES_TO_SPECIAL_CHARS: return XML_ENTITIES_TO_SPECIAL_CHARS[x] elif len(x) > 0 and x[0] == : if len(x) > 1 and x[1] == : return unichr(int(x[2:], 16)) else: return unichr(int(x[1:])) else: return u % x def convert_to_padded_entitites(match): converted_string = convert_entities(match) num_spaces_needed = len(match.group(0)) - len(converted_string) assert num_spaces_needed >= 0, \ % (converted_string, match.group(0)) if space_padding: return tags.sub( convert_to_padded_entitites, text) else: return tags.sub( convert_entities, text)
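On Python 3 the core conversion (named, decimal, and hex entities) is covered by `html.unescape`; the interesting extra in the function above is the space-padding mode, which keeps string offsets stable by padding each replacement back to the entity's original width. A sketch of that mode (the entity regex is an approximation):

```python
import re
from html import unescape

# Matches named ("&amp;"), decimal ("&#38;") and hex ("&#x26;") entities.
_ENTITY = re.compile(r"&(#?[xX]?[0-9a-zA-Z]+);")

def entities_to_unicode(text, space_padding=False):
    """Replace each entity with its character; with space_padding=True,
    right-pad the replacement so overall string offsets are preserved."""
    def convert(match):
        converted = unescape(match.group(0))
        if space_padding:
            # An entity is never shorter than the character it encodes,
            # so the pad is non-negative (the original asserts this).
            pad = len(match.group(0)) - len(converted)
            return converted + " " * pad
        return converted
    return _ENTITY.sub(convert, text)
```

Offset preservation matters when downstream code holds character positions (e.g. annotation spans) computed on the raw text.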
def get_compaction_options(model): if not model.__compaction__: return {} result = {:model.__compaction__} def setter(key, limited_to_strategy = None): mkey = "__compaction_{}__".format(key) tmp = getattr(model, mkey) if tmp and limited_to_strategy and limited_to_strategy != model.__compaction__: raise CQLEngineException("{} is limited to {}".format(key, limited_to_strategy)) if tmp: result[key] = str(tmp) setter() setter() setter(, SizeTieredCompactionStrategy) setter(, SizeTieredCompactionStrategy) setter(, SizeTieredCompactionStrategy) setter(, SizeTieredCompactionStrategy) setter(, SizeTieredCompactionStrategy) setter(, LeveledCompactionStrategy) return result
Generates dictionary (later converted to a string) for creating and altering tables with compaction strategy :param model: :return:
### Input: Generates dictionary (later converted to a string) for creating and altering tables with compaction strategy :param model: :return: ### Response: def get_compaction_options(model): if not model.__compaction__: return {} result = {:model.__compaction__} def setter(key, limited_to_strategy = None): mkey = "__compaction_{}__".format(key) tmp = getattr(model, mkey) if tmp and limited_to_strategy and limited_to_strategy != model.__compaction__: raise CQLEngineException("{} is limited to {}".format(key, limited_to_strategy)) if tmp: result[key] = str(tmp) setter() setter() setter(, SizeTieredCompactionStrategy) setter(, SizeTieredCompactionStrategy) setter(, SizeTieredCompactionStrategy) setter(, SizeTieredCompactionStrategy) setter(, SizeTieredCompactionStrategy) setter(, LeveledCompactionStrategy) return result
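The closure-based `setter` pattern above collects `__compaction_*__` class attributes into the options dict and rejects options that only make sense under a different strategy. A self-contained sketch (the option names and the "class" key are assumptions mirroring Cassandra's compaction sub-options, not the library's exact list):

```python
class CQLEngineException(Exception):
    pass

def compaction_options(model):
    """Collect __compaction_*__ attributes into an options dict, raising if
    an option is set under the wrong strategy (same shape as the original)."""
    if not getattr(model, "__compaction__", None):
        return {}
    result = {"class": model.__compaction__}

    def setter(key, limited_to_strategy=None):
        value = getattr(model, "__compaction_{}__".format(key), None)
        if value and limited_to_strategy and limited_to_strategy != model.__compaction__:
            raise CQLEngineException("{} is limited to {}".format(key, limited_to_strategy))
        if value:
            result[key] = str(value)

    setter("tombstone_compaction_interval")
    setter("bucket_high", "SizeTieredCompactionStrategy")
    setter("sstable_size_in_mb", "LeveledCompactionStrategy")
    return result
```

Every value is stringified because the dict is later rendered into a CQL `WITH compaction = {...}` clause.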
def incver(self): d = {} for p in self.__mapper__.attrs: if p.key in [,,, , ]: continue if p.key == : d[p.key] = self.revision + 1 else: d[p.key] = getattr(self, p.key) n = Dataset(**d) return n
Increment all of the version numbers
### Input: Increment all of the version numbers ### Response: def incver(self): d = {} for p in self.__mapper__.attrs: if p.key in [,,, , ]: continue if p.key == : d[p.key] = self.revision + 1 else: d[p.key] = getattr(self, p.key) n = Dataset(**d) return n
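Stripped of the SQLAlchemy mapper introspection, the copy-and-bump pattern reduces to the following sketch (the field names are illustrative; the original skips id/date-style columns the same way the `continue` branch does):

```python
class Dataset:
    # Fields carried over on a version bump; the original derives this
    # list from the SQLAlchemy mapper and skips a few bookkeeping columns.
    FIELDS = ("name", "revision", "description")

    def __init__(self, name, revision, description):
        self.name = name
        self.revision = revision
        self.description = description

    def incver(self):
        """Return a new Dataset with revision incremented, other fields copied."""
        d = {f: getattr(self, f) for f in self.FIELDS}
        d["revision"] += 1
        return Dataset(**d)
```

Returning a fresh object rather than mutating in place means the old revision row stays intact in the session.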
def send_exit_status(self, status): m.add_boolean(False) m.add_int(status) self.transport._send_user_message(m)
Send the exit status of an executed command to the client. (This really only makes sense in server mode.) Many clients expect to get some sort of status code back from an executed command after it completes. @param status: the exit code of the process @type status: int @since: 1.2
### Input: Send the exit status of an executed command to the client. (This really only makes sense in server mode.) Many clients expect to get some sort of status code back from an executed command after it completes. @param status: the exit code of the process @type status: int @since: 1.2 ### Response: def send_exit_status(self, status): m.add_boolean(False) m.add_int(status) self.transport._send_user_message(m)
def hazard_items(dic, mesh, *extras, **kw): for item in kw.items(): yield item arr = dic[next(iter(dic))] dtlist = [(str(field), arr.dtype) for field in sorted(dic)] for field, dtype, values in extras: dtlist.append((str(field), dtype)) array = numpy.zeros(arr.shape, dtlist) for field in dic: array[field] = dic[field] for field, dtype, values in extras: array[field] = values yield , util.compose_arrays(mesh, array)
:param dic: dictionary of arrays of the same shape :param mesh: a mesh array with lon, lat fields of the same length :param extras: optional triples (field, dtype, values) :param kw: dictionary of parameters (like investigation_time) :returns: a list of pairs (key, value) suitable for storage in .npz format
### Input: :param dic: dictionary of arrays of the same shape :param mesh: a mesh array with lon, lat fields of the same length :param extras: optional triples (field, dtype, values) :param kw: dictionary of parameters (like investigation_time) :returns: a list of pairs (key, value) suitable for storage in .npz format ### Response: def hazard_items(dic, mesh, *extras, **kw): for item in kw.items(): yield item arr = dic[next(iter(dic))] dtlist = [(str(field), arr.dtype) for field in sorted(dic)] for field, dtype, values in extras: dtlist.append((str(field), dtype)) array = numpy.zeros(arr.shape, dtlist) for field in dic: array[field] = dic[field] for field, dtype, values in extras: array[field] = values yield , util.compose_arrays(mesh, array)
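The core of the function is packing a dict of same-shape arrays (plus optional `(field, dtype, values)` extras) into one NumPy structured array before composing it with the mesh. That step in isolation, assuming NumPy is available:

```python
import numpy as np

def dict_to_structured(dic, extras=()):
    """Pack a dict of same-shape arrays, plus optional (field, dtype, values)
    triples, into a single structured array (as hazard_items does before
    composing with the mesh)."""
    first = dic[next(iter(dic))]
    # Field order: sorted dict keys first, then the extras in given order.
    dtlist = [(str(field), first.dtype) for field in sorted(dic)]
    for field, dtype, _ in extras:
        dtlist.append((str(field), dtype))
    out = np.zeros(first.shape, dtlist)
    for field, values in dic.items():
        out[field] = values
    for field, _, values in extras:
        out[field] = values
    return out
```

Sorting the dict keys makes the field order deterministic, which matters for reproducible .npz output.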
def create(self, name): if not isinstance(name, basestring): raise ValueError("Invalid name: %s" % repr(name)) response = self.post(__conf=name) if response.status == 303: return self[name] elif response.status == 201: return ConfigurationFile(self.service, PATH_CONF % name, item=Stanza, state={: name}) else: raise ValueError("Unexpected status code %s returned from creating a stanza" % response.status)
Creates a configuration file named *name*. If there is already a configuration file with that name, the existing file is returned. :param name: The name of the configuration file. :type name: ``string`` :return: The :class:`ConfigurationFile` object.
### Input: Creates a configuration file named *name*. If there is already a configuration file with that name, the existing file is returned. :param name: The name of the configuration file. :type name: ``string`` :return: The :class:`ConfigurationFile` object. ### Response: def create(self, name): if not isinstance(name, basestring): raise ValueError("Invalid name: %s" % repr(name)) response = self.post(__conf=name) if response.status == 303: return self[name] elif response.status == 201: return ConfigurationFile(self.service, PATH_CONF % name, item=Stanza, state={: name}) else: raise ValueError("Unexpected status code %s returned from creating a stanza" % response.status)
def _parse_study(self, fname, node_types): if not os.path.exists(os.path.join(self._dir, fname)): return None nodes = {} with open(os.path.join(self._dir, fname), "rU") as in_handle: reader = csv.reader(in_handle, dialect="excel-tab") header = self._swap_synonyms(next(reader)) hgroups = self._collapse_header(header) htypes = self._characterize_header(header, hgroups) for node_type in node_types: try: name_index = header.index(node_type) except ValueError: name_index = None if name_index is None: continue in_handle.seek(0, 0) for line in reader: name = line[name_index] node_index = self._build_node_index(node_type,name) if name in header: continue if (not name): continue try: node = nodes[node_index] except KeyError: node = NodeRecord(name, node_type) node.metadata = collections.defaultdict(set) nodes[node_index] = node attrs = self._line_keyvals(line, header, hgroups, htypes, node.metadata) nodes[node_index].metadata = attrs return dict([(k, self._finalize_metadata(v)) for k, v in nodes.items()])
Parse study or assay row oriented file around the supplied base node.
### Input: Parse study or assay row oriented file around the supplied base node. ### Response: def _parse_study(self, fname, node_types): if not os.path.exists(os.path.join(self._dir, fname)): return None nodes = {} with open(os.path.join(self._dir, fname), "rU") as in_handle: reader = csv.reader(in_handle, dialect="excel-tab") header = self._swap_synonyms(next(reader)) hgroups = self._collapse_header(header) htypes = self._characterize_header(header, hgroups) for node_type in node_types: try: name_index = header.index(node_type) except ValueError: name_index = None if name_index is None: continue in_handle.seek(0, 0) for line in reader: name = line[name_index] node_index = self._build_node_index(node_type,name) if name in header: continue if (not name): continue try: node = nodes[node_index] except KeyError: node = NodeRecord(name, node_type) node.metadata = collections.defaultdict(set) nodes[node_index] = node attrs = self._line_keyvals(line, header, hgroups, htypes, node.metadata) nodes[node_index].metadata = attrs return dict([(k, self._finalize_metadata(v)) for k, v in nodes.items()])
def MessageToRepr(msg, multiline=False, **kwargs): indent = kwargs.get(, 0) def IndentKwargs(kwargs): kwargs = dict(kwargs) kwargs[] = kwargs.get(, 0) + 4 return kwargs if isinstance(msg, list): s = for item in msg: if multiline: s += + * (indent + 4) s += MessageToRepr( item, multiline=multiline, **IndentKwargs(kwargs)) + if multiline: s += + * indent s += return s if isinstance(msg, messages.Message): s = type(msg).__name__ + if not kwargs.get(): s = msg.__module__ + + s names = sorted([field.name for field in msg.all_fields()]) for name in names: field = msg.field_by_name(name) if multiline: s += + * (indent + 4) value = getattr(msg, field.name) s += field.name + + MessageToRepr( value, multiline=multiline, **IndentKwargs(kwargs)) + if multiline: s += + * indent s += return s if isinstance(msg, six.string_types): if kwargs.get() and len(msg) > 100: msg = msg[:100] if isinstance(msg, datetime.datetime): class SpecialTZInfo(datetime.tzinfo): def __init__(self, offset): super(SpecialTZInfo, self).__init__() self.offset = offset def __repr__(self): s = + repr(self.offset) + if not kwargs.get(): s = + s return s msg = datetime.datetime( msg.year, msg.month, msg.day, msg.hour, msg.minute, msg.second, msg.microsecond, SpecialTZInfo(msg.tzinfo.utcoffset(0))) return repr(msg)
Return a repr-style string for a protorpc message. protorpc.Message.__repr__ does not return anything that could be considered python code. Adding this function lets us print a protorpc message in such a way that it could be pasted into code later, and used to compare against other things. Args: msg: protorpc.Message, the message to be repr'd. multiline: bool, True if the returned string should have each field assignment on its own line. **kwargs: {str:str}, Additional flags for how to format the string. Known **kwargs: shortstrings: bool, True if all string values should be truncated at 100 characters, since when mocking the contents typically don't matter except for IDs, and IDs are usually less than 100 characters. no_modules: bool, True if the long module name should not be printed with each type. Returns: str, A string of valid python (assuming the right imports have been made) that recreates the message passed into this function.
### Input: Return a repr-style string for a protorpc message. protorpc.Message.__repr__ does not return anything that could be considered python code. Adding this function lets us print a protorpc message in such a way that it could be pasted into code later, and used to compare against other things. Args: msg: protorpc.Message, the message to be repr'd. multiline: bool, True if the returned string should have each field assignment on its own line. **kwargs: {str:str}, Additional flags for how to format the string. Known **kwargs: shortstrings: bool, True if all string values should be truncated at 100 characters, since when mocking the contents typically don't matter except for IDs, and IDs are usually less than 100 characters. no_modules: bool, True if the long module name should not be printed with each type. Returns: str, A string of valid python (assuming the right imports have been made) that recreates the message passed into this function. ### Response: def MessageToRepr(msg, multiline=False, **kwargs): indent = kwargs.get(, 0) def IndentKwargs(kwargs): kwargs = dict(kwargs) kwargs[] = kwargs.get(, 0) + 4 return kwargs if isinstance(msg, list): s = for item in msg: if multiline: s += + * (indent + 4) s += MessageToRepr( item, multiline=multiline, **IndentKwargs(kwargs)) + if multiline: s += + * indent s += return s if isinstance(msg, messages.Message): s = type(msg).__name__ + if not kwargs.get(): s = msg.__module__ + + s names = sorted([field.name for field in msg.all_fields()]) for name in names: field = msg.field_by_name(name) if multiline: s += + * (indent + 4) value = getattr(msg, field.name) s += field.name + + MessageToRepr( value, multiline=multiline, **IndentKwargs(kwargs)) + if multiline: s += + * indent s += return s if isinstance(msg, six.string_types): if kwargs.get() and len(msg) > 100: msg = msg[:100] if isinstance(msg, datetime.datetime): class SpecialTZInfo(datetime.tzinfo): def __init__(self, offset): super(SpecialTZInfo, self).__init__() self.offset = offset def __repr__(self): s = + repr(self.offset) + if not kwargs.get(): s = + s return s msg = datetime.datetime( msg.year, msg.month, msg.day, msg.hour, msg.minute, msg.second, msg.microsecond, SpecialTZInfo(msg.tzinfo.utcoffset(0))) return repr(msg)
def load_html(self, mode, html): self._html_loaded_flag = False if mode == HTML_FILE_MODE: self.setUrl(QtCore.QUrl.fromLocalFile(html)) elif mode == HTML_STR_MODE: self.setHtml(html) else: raise InvalidParameterError() counter = 0 sleep_period = 0.1 timeout = 20 while not self._html_loaded_flag and counter < timeout: counter += sleep_period time.sleep(sleep_period) QgsApplication.processEvents()
Load HTML to this class with the mode specified. There are two modes that can be used: * HTML_FILE_MODE: Directly from a local HTML file. * HTML_STR_MODE: From a valid HTML string. :param mode: The mode. :type mode: int :param html: The html that will be loaded. If the mode is a file, then it should be a path to the html file. If the mode is a string, then it should be a valid HTML string. :type html: str
### Input: Load HTML to this class with the mode specified. There are two modes that can be used: * HTML_FILE_MODE: Directly from a local HTML file. * HTML_STR_MODE: From a valid HTML string. :param mode: The mode. :type mode: int :param html: The html that will be loaded. If the mode is a file, then it should be a path to the html file. If the mode is a string, then it should be a valid HTML string. :type html: str ### Response: def load_html(self, mode, html): self._html_loaded_flag = False if mode == HTML_FILE_MODE: self.setUrl(QtCore.QUrl.fromLocalFile(html)) elif mode == HTML_STR_MODE: self.setHtml(html) else: raise InvalidParameterError() counter = 0 sleep_period = 0.1 timeout = 20 while not self._html_loaded_flag and counter < timeout: counter += sleep_period time.sleep(sleep_period) QgsApplication.processEvents()
def set_fd_value(tag, value): if tag.VR == or tag.VR == : value = struct.pack(, value) tag.value = value
Setters for data that also work with implicit transfer syntax :param value: the value to set on the tag :param tag: the tag to read
### Input: Setters for data that also work with implicit transfer syntax :param value: the value to set on the tag :param tag: the tag to read ### Response: def set_fd_value(tag, value): if tag.VR == or tag.VR == : value = struct.pack(, value) tag.value = value
def _process_field_queries(field_dictionary): def field_item(field): return { "match": { field: field_dictionary[field] } } return [field_item(field) for field in field_dictionary]
We have a field_dictionary - we want to match the values for an elasticsearch "match" query This is only potentially useful when trying to tune certain search operations
### Input: We have a field_dictionary - we want to match the values for an elasticsearch "match" query This is only potentially useful when trying to tune certain search operations ### Response: def _process_field_queries(field_dictionary): def field_item(field): return { "match": { field: field_dictionary[field] } } return [field_item(field) for field in field_dictionary]
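The entry above builds one Elasticsearch "match" clause per field/value pair; a minimal self-contained sketch (the standalone function name and the sample field values are illustrative, not from the source project):

```python
def process_field_queries(field_dictionary):
    """Build one Elasticsearch "match" clause per field/value pair."""
    def field_item(field):
        return {"match": {field: field_dictionary[field]}}
    return [field_item(field) for field in field_dictionary]

# Each field becomes one clause, suitable for a bool query's "must" list.
clauses = process_field_queries({"course": "demo-course"})
```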
def get_anyhline(config): if config.BOOLEAN_STATES[config.config.get(, )] or\ config.BOOLEAN_STATES[config.config.get(, )]: return Window( width=LayoutDimension.exact(1), height=LayoutDimension.exact(1), content=FillControl(, token=Token.Line)) return get_empty()
if there is a line between descriptions and example
### Input: if there is a line between descriptions and example ### Response: def get_anyhline(config): if config.BOOLEAN_STATES[config.config.get(, )] or\ config.BOOLEAN_STATES[config.config.get(, )]: return Window( width=LayoutDimension.exact(1), height=LayoutDimension.exact(1), content=FillControl(, token=Token.Line)) return get_empty()
def collection_choices(): from invenio_collections.models import Collection return [(0, _())] + [ (c.id, c.name) for c in Collection.query.all() ]
Return collection choices.
### Input: Return collection choices. ### Response: def collection_choices(): from invenio_collections.models import Collection return [(0, _())] + [ (c.id, c.name) for c in Collection.query.all() ]
def add_element_list(self, elt_list, **kwargs): for e in elt_list: self.add_element(Element(e, **kwargs))
Helper to add a list of similar elements to the current section. Element names will be used as an identifier.
### Input: Helper to add a list of similar elements to the current section. Element names will be used as an identifier. ### Response: def add_element_list(self, elt_list, **kwargs): for e in elt_list: self.add_element(Element(e, **kwargs))
def supports_import( self, exported_configs, service_intents, endpoint_props ): return self._get_or_create_container( exported_configs, service_intents, endpoint_props )
Method called by rsa.export_service to ask if this ImportDistributionProvider supports import for given exported_configs (list), service_intents (list), and export_props (dict). If a ImportContainer instance is returned then it is used to import the service. If None is returned, then this distribution provider will not be used to import the service. The default implementation returns self._get_or_create_container.
### Input: Method called by rsa.export_service to ask if this ImportDistributionProvider supports import for given exported_configs (list), service_intents (list), and export_props (dict). If a ImportContainer instance is returned then it is used to import the service. If None is returned, then this distribution provider will not be used to import the service. The default implementation returns self._get_or_create_container. ### Response: def supports_import( self, exported_configs, service_intents, endpoint_props ): return self._get_or_create_container( exported_configs, service_intents, endpoint_props )
def createOutputBuffer(file, encoding): ret = libxml2mod.xmlCreateOutputBuffer(file, encoding) if ret is None:raise treeError() return outputBuffer(_obj=ret)
Create a libxml2 output buffer from a Python file
### Input: Create a libxml2 output buffer from a Python file ### Response: def createOutputBuffer(file, encoding): ret = libxml2mod.xmlCreateOutputBuffer(file, encoding) if ret is None:raise treeError() return outputBuffer(_obj=ret)
def dpu(self, hash=None, historics_id=None): if hash: return self.request.get(, params=dict(hash=hash)) if historics_id: return self.request.get(, params=dict(historics_id=historics_id))
Calculate the DPU cost of consuming a stream. Uses API documented at http://dev.datasift.com/docs/api/rest-api/endpoints/dpu :param hash: target CSDL filter hash :type hash: str :returns: dict with extra response data :rtype: :class:`~datasift.request.DictResponse` :raises: :class:`~datasift.exceptions.DataSiftApiException`, :class:`requests.exceptions.HTTPError`
### Input: Calculate the DPU cost of consuming a stream. Uses API documented at http://dev.datasift.com/docs/api/rest-api/endpoints/dpu :param hash: target CSDL filter hash :type hash: str :returns: dict with extra response data :rtype: :class:`~datasift.request.DictResponse` :raises: :class:`~datasift.exceptions.DataSiftApiException`, :class:`requests.exceptions.HTTPError` ### Response: def dpu(self, hash=None, historics_id=None): if hash: return self.request.get(, params=dict(hash=hash)) if historics_id: return self.request.get(, params=dict(historics_id=historics_id))
def s3_cache_readonly(self): return coerce_boolean(self.get(property_name=, environment_variable=, configuration_option=, default=False))
Whether the Amazon S3 bucket is considered read only. If this is :data:`True` then the Amazon S3 bucket will only be used for :class:`~pip_accel.caches.s3.S3CacheBackend.get()` operations (all :class:`~pip_accel.caches.s3.S3CacheBackend.put()` operations will be disabled). - Environment variable: ``$PIP_ACCEL_S3_READONLY`` (refer to :func:`~humanfriendly.coerce_boolean()` for details on how the value of the environment variable is interpreted) - Configuration option: ``s3-readonly`` (also parsed using :func:`~humanfriendly.coerce_boolean()`) - Default: :data:`False` For details please refer to the :mod:`pip_accel.caches.s3` module.
### Input: Whether the Amazon S3 bucket is considered read only. If this is :data:`True` then the Amazon S3 bucket will only be used for :class:`~pip_accel.caches.s3.S3CacheBackend.get()` operations (all :class:`~pip_accel.caches.s3.S3CacheBackend.put()` operations will be disabled). - Environment variable: ``$PIP_ACCEL_S3_READONLY`` (refer to :func:`~humanfriendly.coerce_boolean()` for details on how the value of the environment variable is interpreted) - Configuration option: ``s3-readonly`` (also parsed using :func:`~humanfriendly.coerce_boolean()`) - Default: :data:`False` For details please refer to the :mod:`pip_accel.caches.s3` module. ### Response: def s3_cache_readonly(self): return coerce_boolean(self.get(property_name=, environment_variable=, configuration_option=, default=False))
def _rest_request_to_json(self, address, object_path, service_name, requests_config, tags, *args, **kwargs): response = self._rest_request(address, object_path, service_name, requests_config, tags, *args, **kwargs) try: response_json = response.json() except JSONDecodeError as e: self.service_check( service_name, AgentCheck.CRITICAL, tags=[ % self._get_url_base(address)] + tags, message=.format(e), ) raise return response_json
Query the given URL and return the JSON response
### Input: Query the given URL and return the JSON response ### Response: def _rest_request_to_json(self, address, object_path, service_name, requests_config, tags, *args, **kwargs): response = self._rest_request(address, object_path, service_name, requests_config, tags, *args, **kwargs) try: response_json = response.json() except JSONDecodeError as e: self.service_check( service_name, AgentCheck.CRITICAL, tags=[ % self._get_url_base(address)] + tags, message=.format(e), ) raise return response_json
def save_congress(congress, dest): try: logger.debug(congress.name) logger.debug(dest) congress_dir = make_congress_dir(congress.name, dest) congress.legislation.to_csv("{0}/legislation.csv".format(congress_dir), encoding=) logger.debug(congress_dir) congress.sponsors.to_csv("{0}/sponsor_map.csv".format(congress_dir), encoding=) congress.cosponsors.to_csv( "{0}/cosponsor_map.csv".format(congress_dir), encoding=) congress.events.to_csv("{0}/events.csv".format(congress_dir), encoding=) congress.committees.to_csv( "{0}/committees_map.csv".format(congress_dir), encoding=) congress.subjects.to_csv("{0}/subjects_map.csv".format(congress_dir), encoding=) congress.votes.to_csv("{0}/votes.csv".format(congress_dir), encoding=) congress.votes_people.to_csv( "{0}/votes_people.csv".format(congress_dir), encoding=) if hasattr(congress, ): congress.amendments.to_csv( "{0}/amendments.csv".format(congress_dir), encoding=) except Exception: logger.error(" exc_type, exc_obj, exc_tb = sys.exc_info() fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1] logger.error(exc_type, fname, exc_tb.tb_lineno)
Takes a congress object with legislation, sponsor, cosponsor, committees and subjects attributes and saves each item to its own csv file.
### Input: Takes a congress object with legislation, sponsor, cosponsor, committees and subjects attributes and saves each item to its own csv file. ### Response: def save_congress(congress, dest): try: logger.debug(congress.name) logger.debug(dest) congress_dir = make_congress_dir(congress.name, dest) congress.legislation.to_csv("{0}/legislation.csv".format(congress_dir), encoding=) logger.debug(congress_dir) congress.sponsors.to_csv("{0}/sponsor_map.csv".format(congress_dir), encoding=) congress.cosponsors.to_csv( "{0}/cosponsor_map.csv".format(congress_dir), encoding=) congress.events.to_csv("{0}/events.csv".format(congress_dir), encoding=) congress.committees.to_csv( "{0}/committees_map.csv".format(congress_dir), encoding=) congress.subjects.to_csv("{0}/subjects_map.csv".format(congress_dir), encoding=) congress.votes.to_csv("{0}/votes.csv".format(congress_dir), encoding=) congress.votes_people.to_csv( "{0}/votes_people.csv".format(congress_dir), encoding=) if hasattr(congress, ): congress.amendments.to_csv( "{0}/amendments.csv".format(congress_dir), encoding=) except Exception: logger.error(" exc_type, exc_obj, exc_tb = sys.exc_info() fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1] logger.error(exc_type, fname, exc_tb.tb_lineno)
def update(self, id, **dict): if not self._item_path: raise AttributeError( % self._item_name) target = (self._update_path or self._item_path) % id payload = json.dumps({self._item_type:dict}) self._redmine.put(target, payload) return None
Update a given item with the passed data.
### Input: Update a given item with the passed data. ### Response: def update(self, id, **dict): if not self._item_path: raise AttributeError( % self._item_name) target = (self._update_path or self._item_path) % id payload = json.dumps({self._item_type:dict}) self._redmine.put(target, payload) return None
def compat_stat(path): stat = os.stat(path) info = get_file_info(path) return nt.stat_result( (stat.st_mode,) + (info.file_index, info.volume_serial_number, info.number_of_links) + stat[4:] )
Generate stat as found on Python 3.2 and later.
### Input: Generate stat as found on Python 3.2 and later. ### Response: def compat_stat(path): stat = os.stat(path) info = get_file_info(path) return nt.stat_result( (stat.st_mode,) + (info.file_index, info.volume_serial_number, info.number_of_links) + stat[4:] )
def datasets_download_file(self, owner_slug, dataset_slug, file_name, **kwargs): kwargs[] = True if kwargs.get(): return self.datasets_download_file_with_http_info(owner_slug, dataset_slug, file_name, **kwargs) else: (data) = self.datasets_download_file_with_http_info(owner_slug, dataset_slug, file_name, **kwargs) return data
Download dataset file # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.datasets_download_file(owner_slug, dataset_slug, file_name, async_req=True) >>> result = thread.get() :param async_req bool :param str owner_slug: Dataset owner (required) :param str dataset_slug: Dataset name (required) :param str file_name: File name (required) :param str dataset_version_number: Dataset version number :return: Result If the method is called asynchronously, returns the request thread.
### Input: Download dataset file # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.datasets_download_file(owner_slug, dataset_slug, file_name, async_req=True) >>> result = thread.get() :param async_req bool :param str owner_slug: Dataset owner (required) :param str dataset_slug: Dataset name (required) :param str file_name: File name (required) :param str dataset_version_number: Dataset version number :return: Result If the method is called asynchronously, returns the request thread. ### Response: def datasets_download_file(self, owner_slug, dataset_slug, file_name, **kwargs): kwargs[] = True if kwargs.get(): return self.datasets_download_file_with_http_info(owner_slug, dataset_slug, file_name, **kwargs) else: (data) = self.datasets_download_file_with_http_info(owner_slug, dataset_slug, file_name, **kwargs) return data
def dict_hash(dct): dct_s = json.dumps(dct, sort_keys=True) try: m = md5(dct_s) except TypeError: m = md5(dct_s.encode()) return m.hexdigest()
Return a hash of the contents of a dictionary
### Input: Return a hash of the contents of a dictionary ### Response: def dict_hash(dct): dct_s = json.dumps(dct, sort_keys=True) try: m = md5(dct_s) except TypeError: m = md5(dct_s.encode()) return m.hexdigest()
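The snippet above depends on a module-level `md5` import that was stripped during extraction; a runnable sketch of the same canonical-JSON hashing idea, with the import restored via `hashlib`:

```python
import hashlib
import json

def dict_hash(dct):
    """Hash a dict via its canonical (sorted-keys) JSON serialization."""
    dct_s = json.dumps(dct, sort_keys=True)
    # json.dumps returns str on Python 3, so encode before hashing; this
    # replaces the try/except TypeError dance in the original.
    return hashlib.md5(dct_s.encode()).hexdigest()

# Key order does not affect the digest.
same = dict_hash({"a": 1, "b": 2}) == dict_hash({"b": 2, "a": 1})
```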
def almost_identity(gate: Gate) -> bool: N = gate.qubit_nb return np.allclose(asarray(gate.asoperator()), np.eye(2**N))
Return true if gate tensor is (almost) the identity
### Input: Return true if gate tensor is (almost) the identity ### Response: def almost_identity(gate: Gate) -> bool: N = gate.qubit_nb return np.allclose(asarray(gate.asoperator()), np.eye(2**N))
def _default(self): if self._default_args: return self._func( *self._default_args, **self._default_kwargs) return self._func(**self._default_kwargs)
Get the default function return
### Input: Get the default function return ### Response: def _default(self): if self._default_args: return self._func( *self._default_args, **self._default_kwargs) return self._func(**self._default_kwargs)
def question_detail(request, topic_slug, slug): extra_context = { : Topic.objects.published().get(slug=topic_slug), } return object_detail(request, queryset=Question.objects.published(), extra_context=extra_context, template_object_name=, slug=slug)
A detail view of a Question. Templates: :template:`faq/question_detail.html` Context: question A :model:`faq.Question`. topic The :model:`faq.Topic` object related to ``question``.
### Input: A detail view of a Question. Templates: :template:`faq/question_detail.html` Context: question A :model:`faq.Question`. topic The :model:`faq.Topic` object related to ``question``. ### Response: def question_detail(request, topic_slug, slug): extra_context = { : Topic.objects.published().get(slug=topic_slug), } return object_detail(request, queryset=Question.objects.published(), extra_context=extra_context, template_object_name=, slug=slug)
def _split_into_groups(n, max_group_size, mesh_dim_size): if n % mesh_dim_size != 0: raise ValueError( "n=%d is not a multiple of mesh_dim_size=%d" % (n, mesh_dim_size)) num_groups = max(1, n // max_group_size) while (num_groups % mesh_dim_size != 0 or n % num_groups != 0): num_groups += 1 group_size = n // num_groups tf.logging.info( "_split_into_groups(n=%d, max_group_size=%d, mesh_dim_size=%d)" " = (num_groups=%d group_size=%d)" % (n, max_group_size, mesh_dim_size, num_groups, group_size)) return num_groups, group_size
Helper function for figuring out how to split a dimension into groups. We have a dimension with size n and we want to split it into two dimensions: n = num_groups * group_size group_size should be the largest possible value meeting the constraints: group_size <= max_group_size (num_groups = n/group_size) is a multiple of mesh_dim_size Args: n: an integer max_group_size: an integer mesh_dim_size: an integer Returns: num_groups: an integer group_size: an integer Raises: ValueError: if n is not a multiple of mesh_dim_size
### Input: Helper function for figuring out how to split a dimension into groups. We have a dimension with size n and we want to split it into two dimensions: n = num_groups * group_size group_size should be the largest possible value meeting the constraints: group_size <= max_group_size (num_groups = n/group_size) is a multiple of mesh_dim_size Args: n: an integer max_group_size: an integer mesh_dim_size: an integer Returns: num_groups: an integer group_size: an integer Raises: ValueError: if n is not a multiple of mesh_dim_size ### Response: def _split_into_groups(n, max_group_size, mesh_dim_size): if n % mesh_dim_size != 0: raise ValueError( "n=%d is not a multiple of mesh_dim_size=%d" % (n, mesh_dim_size)) num_groups = max(1, n // max_group_size) while (num_groups % mesh_dim_size != 0 or n % num_groups != 0): num_groups += 1 group_size = n // num_groups tf.logging.info( "_split_into_groups(n=%d, max_group_size=%d, mesh_dim_size=%d)" " = (num_groups=%d group_size=%d)" % (n, max_group_size, mesh_dim_size, num_groups, group_size)) return num_groups, group_size
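The grouping logic above is framework-independent; a runnable sketch with the TensorFlow logging dropped (renamed to mark it as illustrative):

```python
def split_into_groups(n, max_group_size, mesh_dim_size):
    """Pick (num_groups, group_size) with n = num_groups * group_size."""
    if n % mesh_dim_size != 0:
        raise ValueError(
            "n=%d is not a multiple of mesh_dim_size=%d" % (n, mesh_dim_size))
    # Start from the fewest groups that respects max_group_size, then grow
    # until num_groups divides n and is a multiple of mesh_dim_size.
    num_groups = max(1, n // max_group_size)
    while num_groups % mesh_dim_size != 0 or n % num_groups != 0:
        num_groups += 1
    return num_groups, n // num_groups

# 16 elements, groups of at most 5, mesh dimension 2 -> 4 groups of 4.
groups = split_into_groups(16, 5, 2)
```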
def _get(self, id): "Return keys and value for karma id" VALUE_SQL = "SELECT karmavalue from karma_values where karmaid = ?" KEYS_SQL = "SELECT karmakey from karma_keys where karmaid = ?" value = self.db.execute(VALUE_SQL, [id]).fetchall()[0][0] keys_cur = self.db.execute(KEYS_SQL, [id]).fetchall() keys = sorted(x[0] for x in keys_cur) return keys, value
Return keys and value for karma id
### Input: Return keys and value for karma id ### Response: def _get(self, id): "Return keys and value for karma id" VALUE_SQL = "SELECT karmavalue from karma_values where karmaid = ?" KEYS_SQL = "SELECT karmakey from karma_keys where karmaid = ?" value = self.db.execute(VALUE_SQL, [id]).fetchall()[0][0] keys_cur = self.db.execute(KEYS_SQL, [id]).fetchall() keys = sorted(x[0] for x in keys_cur) return keys, value
def handle_error(self, error, data): required_messages = {, } for field_name in error.field_names: for i, msg in enumerate(error.messages[field_name]): if isinstance(msg, _LazyString): msg = str(msg) if msg in required_messages: label = title_case(field_name) error.messages[field_name][i] = f
Customize the error messages for required/not-null validators with dynamically generated field names. This is definitely a little hacky (it mutates state, uses hardcoded strings), but unsure how better to do it
### Input: Customize the error messages for required/not-null validators with dynamically generated field names. This is definitely a little hacky (it mutates state, uses hardcoded strings), but unsure how better to do it ### Response: def handle_error(self, error, data): required_messages = {, } for field_name in error.field_names: for i, msg in enumerate(error.messages[field_name]): if isinstance(msg, _LazyString): msg = str(msg) if msg in required_messages: label = title_case(field_name) error.messages[field_name][i] = f
def string_to_version(verstring): components = verstring.split() if len(components) > 1: epoch = components[0] else: epoch = 0 remaining = components[:2][0].split() version = remaining[0] release = remaining[1] return (epoch, version, release)
Return a tuple of (epoch, version, release) from a version string This function replaces rpmUtils.miscutils.stringToVersion, see https://bugzilla.redhat.com/1364504
### Input: Return a tuple of (epoch, version, release) from a version string This function replaces rpmUtils.miscutils.stringToVersion, see https://bugzilla.redhat.com/1364504 ### Response: def string_to_version(verstring): components = verstring.split() if len(components) > 1: epoch = components[0] else: epoch = 0 remaining = components[:2][0].split() version = remaining[0] release = remaining[1] return (epoch, version, release)
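The `split()` calls in the entry above lost their separator arguments during extraction. Assuming the usual rpm `[epoch:]version-release` convention (an assumption, not confirmed by the source), a working sketch is:

```python
def string_to_version(verstring):
    """Split an rpm-style "[epoch:]version-release" string."""
    if ":" in verstring:
        epoch, rest = verstring.split(":", 1)
    else:
        # rpm treats a missing epoch as zero.
        epoch, rest = "0", verstring
    # The release is everything after the last hyphen.
    version, release = rest.rsplit("-", 1)
    return (epoch, version, release)

evr = string_to_version("1:2.3-4.el7")
```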
def closest_distance_to(self, position: Union[Unit, Point2, Point3]) -> Union[int, float]: assert self.exists if isinstance(position, Unit): position = position.position return position.distance_to_closest( [u.position for u in self] )
Returns the distance between the closest unit from this group to the target unit
### Input: Returns the distance between the closest unit from this group to the target unit ### Response: def closest_distance_to(self, position: Union[Unit, Point2, Point3]) -> Union[int, float]: assert self.exists if isinstance(position, Unit): position = position.position return position.distance_to_closest( [u.position for u in self] )
async def on_raw_account(self, message): if not self._capabilities.get(, False): return nick, metadata = self._parse_user(message.source) account = message.params[0] if nick not in self.users: return self._sync_user(nick, metadata) if account == NO_ACCOUNT: self._sync_user(nick, { : None, : False }) else: self._sync_user(nick, { : account, : True })
Changes in the associated account for a nickname.
### Input: Changes in the associated account for a nickname. ### Response: async def on_raw_account(self, message): if not self._capabilities.get(, False): return nick, metadata = self._parse_user(message.source) account = message.params[0] if nick not in self.users: return self._sync_user(nick, metadata) if account == NO_ACCOUNT: self._sync_user(nick, { : None, : False }) else: self._sync_user(nick, { : account, : True })
def step_a_file_named_filename_with(context, filename): step_a_file_named_filename_and_encoding_with(context, filename, "UTF-8") if filename.endswith(".feature"): command_util.ensure_context_attribute_exists(context, "features", []) context.features.append(filename)
Creates a textual file with the content provided as docstring.
### Input: Creates a textual file with the content provided as docstring. ### Response: def step_a_file_named_filename_with(context, filename): step_a_file_named_filename_and_encoding_with(context, filename, "UTF-8") if filename.endswith(".feature"): command_util.ensure_context_attribute_exists(context, "features", []) context.features.append(filename)
def OnSelectReader(self, reader):
    SimpleSCardAppEventObserver.OnSelectReader(self, reader)
    # label text was lost in extraction; a plausible message is used here
    self.feedbacktext.SetLabel('Selected reader: ' + repr(reader))
    self.transmitbutton.Disable()
Called when a reader is selected by clicking on the reader tree control or toolbar.
def count(self, v):
    ctr = 0
    for item in reversed(self):
        if item == v:
            ctr += 1
    return ctr
Count occurrences of value v in the entire history. Note that the subclass must implement the __reversed__ method, otherwise an exception will be thrown. :param object v: The value to look for :return: The number of occurrences :rtype: int
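As a rough illustration of the contract above, any class that implements `__reversed__` gets this `count` for free (the `History` class here is a made-up stand-in, not from the source library):

```python
class History:
    """Hypothetical minimal stand-in for a history container."""

    def __init__(self, items):
        self._items = list(items)

    def __reversed__(self):
        # Subclass responsibility: iterate the history newest-first.
        return reversed(self._items)

    def count(self, v):
        # Same logic as the method above: scan the full history via __reversed__.
        ctr = 0
        for item in reversed(self):
            if item == v:
                ctr += 1
        return ctr


hist = History([1, 2, 2, 3, 2])
print(hist.count(2))  # 3
```

A class without `__reversed__` would raise a TypeError at the `reversed(self)` call, which is the behavior the docstring warns about.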
def fromXml(cls, xdata, filepath=''):
    builder = cls()
    builder.loadXml(xdata, filepath=filepath)
    return builder
Generates a new builder from the given xml data and then loads its information. :param xdata | <xml.etree.ElementTree.Element> :return <Builder> || None
def velocity(msg):
    if 5 <= typecode(msg) <= 8:
        return surface_velocity(msg)
    elif typecode(msg) == 19:
        return airborne_velocity(msg)
    else:
        raise RuntimeError("incorrect or inconsistent message types, expecting 4<TC<9 or TC=19")
Calculate the speed, heading, and vertical rate (handles both airborne or surface message) Args: msg (string): 28 bytes hexadecimal message string Returns: (int, float, int, string): speed (kt), ground track or heading (degree), rate of climb/descend (ft/min), and speed type ('GS' for ground speed, 'AS' for airspeed)
def add_filter(self, ftype, func):
    if not isinstance(ftype, type):
        raise TypeError("Expected type object, got %s" % type(ftype))
    self.castfilter = [(t, f) for (t, f) in self.castfilter if t != ftype]
    self.castfilter.append((ftype, func))
    self.castfilter.sort()
Register a new output filter. Whenever bottle hits a handler output matching `ftype`, `func` is applied to it.
def inline_css(self, html):
    premailer = Premailer(html)
    inlined_html = premailer.transform(pretty_print=True)
    return inlined_html
Inlines CSS defined in external style sheets.
def list_knowledge_bases(project_id):
    import dialogflow_v2beta1 as dialogflow
    client = dialogflow.KnowledgeBasesClient()
    project_path = client.project_path(project_id)
    # format strings were lost in extraction; reconstructed from context
    print('Knowledge Bases for: {}'.format(project_id))
    for knowledge_base in client.list_knowledge_bases(project_path):
        print(' - Display Name: {}'.format(knowledge_base.display_name))
        print(' - Knowledge ID: {}'.format(knowledge_base.name))
Lists the Knowledge bases belonging to a project. Args: project_id: The GCP project linked with the agent.
def get_ip(request, real_ip_only=False, right_most_proxy=False):
    best_matched_ip = None
    # deprecation message and several string literals were lost in extraction;
    # reconstructed here from context
    warnings.warn('`get_ip` is deprecated; use `get_client_ip` instead',
                  DeprecationWarning)
    for key in defs.IPWARE_META_PRECEDENCE_ORDER:
        value = request.META.get(key, request.META.get(key.replace('_', '.'), '')).strip()
        if value is not None and value != '':
            ips = [ip.strip().lower() for ip in value.split(',')]
            if right_most_proxy and len(ips) > 1:
                ips = reversed(ips)
            for ip_str in ips:
                if ip_str and is_valid_ip(ip_str):
                    if not ip_str.startswith(NON_PUBLIC_IP_PREFIX):
                        return ip_str
                    if not real_ip_only:
                        loopback = defs.IPWARE_LOOPBACK_PREFIX
                        if best_matched_ip is None:
                            best_matched_ip = ip_str
                        elif best_matched_ip.startswith(loopback) and not ip_str.startswith(loopback):
                            best_matched_ip = ip_str
    return best_matched_ip
Returns client's best-matched ip-address, or None @deprecated - Do not edit
def calculate_amr(cls, is_extended, from_id, to_id, rtr_only=False, rtr_too=True):
    return (((from_id ^ to_id) << 3) | (0x7 if rtr_too and not rtr_only else 0x3)) if is_extended else \
        (((from_id ^ to_id) << 21) | (0x1FFFFF if rtr_too and not rtr_only else 0xFFFFF))
Calculates AMR using CAN-ID range as parameter. :param bool is_extended: If True parameters from_id and to_id contains 29-bit CAN-ID. :param int from_id: First CAN-ID which should be received. :param int to_id: Last CAN-ID which should be received. :param bool rtr_only: If True only RTR-Messages should be received, and rtr_too will be ignored. :param bool rtr_too: If True CAN data frames and RTR-Messages should be received. :return: Value for AMR. :rtype: int
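The bit layout is easier to see with the computation lifted out as a plain function (a standalone sketch; the real method lives on the driver class and takes `cls`):

```python
def calculate_amr(is_extended, from_id, to_id, rtr_only=False, rtr_too=True):
    # XOR marks the bits that differ across the CAN-ID range ("don't care" bits).
    # Extended (29-bit) IDs are left-aligned by 3 bits, standard (11-bit) by 21,
    # and the low mask bits cover the RTR/framing bits.
    if is_extended:
        return ((from_id ^ to_id) << 3) | (0x7 if rtr_too and not rtr_only else 0x3)
    return ((from_id ^ to_id) << 21) | (0x1FFFFF if rtr_too and not rtr_only else 0xFFFFF)


# A single standard CAN-ID: only the low framing bits remain "don't care".
print(hex(calculate_amr(False, 0x123, 0x123)))  # 0x1fffff
# An extended range 0x100..0x107: three ID bits plus the low bits are masked.
print(hex(calculate_amr(True, 0x100, 0x107)))  # 0x3f
```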
def write_PIA0_A_data(self, cpu_cycles, op_address, address, value):
    log.error("%04x| write $%02x (%s) to $%04x -> PIA 0 A side Data reg.\t|%s",
        op_address, value, byte2bit_string(value), address,
        self.cfg.mem_info.get_shortest(op_address)
    )
    self.pia_0_A_register.set(value)
write to 0xff00 -> PIA 0 A side Data reg.
def join(strin, items):
    return strin.join(map(lambda item: str(item), items))
Ramda implementation of join :param strin: :param items: :return:
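Because every item is passed through `str` first, mixed-type lists join without a TypeError, unlike a bare `str.join`:

```python
def join(strin, items):
    # Stringify each item before delegating to str.join, mirroring Ramda's R.join.
    return strin.join(map(str, items))


print(join('-', [2024, '01', 15]))  # 2024-01-15
print(join(', ', []))               # (empty string)
```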
def check_for_git_repo(url):
    u = parse.urlparse(url)
    is_git = False
    # the extension and scheme literals were lost in extraction;
    # reconstructed from context (".git" suffix, local/file paths)
    if os.path.splitext(u.path)[1] == '.git':
        is_git = True
    elif u.scheme in ('', 'file'):
        from git import InvalidGitRepositoryError, Repo
        try:
            Repo(u.path, search_parent_directories=True)
            is_git = True
        except InvalidGitRepositoryError:
            is_git = False
    return is_git
Check if a url points to a git repository.
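For remote URLs the detection reduces to a suffix check on the parsed path, which can be exercised without GitPython. This sketch covers only that branch (the local-path branch needs a real repository on disk):

```python
import os
from urllib import parse


def looks_like_git_url(url):
    # Remote URLs are classified purely by the ".git" extension on the path.
    u = parse.urlparse(url)
    return os.path.splitext(u.path)[1] == '.git'


print(looks_like_git_url('https://github.com/user/repo.git'))   # True
print(looks_like_git_url('https://example.com/archive.tar.gz')) # False
```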
def dispatch_to_index_op(op, left, right, index_class):
    left_idx = index_class(left)
    if getattr(left_idx, 'freq', None) is not None:
        left_idx = left_idx._shallow_copy(freq=None)
    try:
        result = op(left_idx, right)
    except NullFrequencyError:
        # error-message literal was lost in extraction; reconstructed
        raise TypeError('incompatible type for a datetime/timedelta '
                        'operation [{name}]'.format(name=op.__name__))
    return result
Wrap Series left in the given index_class to delegate the operation op to the index implementation. DatetimeIndex and TimedeltaIndex perform type checking, timezone handling, overflow checks, etc. Parameters ---------- op : binary operator (operator.add, operator.sub, ...) left : Series right : object index_class : DatetimeIndex or TimedeltaIndex Returns ------- result : object, usually DatetimeIndex, TimedeltaIndex, or Series
def matrixidx2sheet(self, row, col):
    x, y = self.matrix2sheet((row + 0.5), (col + 0.5))
    if not isinstance(x, datetime_types):
        x = np.around(x, 10)
    if not isinstance(y, datetime_types):
        y = np.around(y, 10)
    return x, y
Return (x,y) where x and y are the floating point coordinates of the *center* of the given matrix cell (row,col). If the matrix cell represents a 0.2 by 0.2 region, then the center location returned would be 0.1,0.1. NOTE: This is NOT the strict mathematical inverse of sheet2matrixidx(), because sheet2matrixidx() discards all but the integer portion of the continuous matrix coordinate. Valid only for scalar or array row and col.
def create_new_sale(self, amount, purpose, payment_reference=None, order_id=None,
                    channel_id=None, capture=True):
    request_data = {
        "amount": self.base.convert_decimal_to_hundreds(amount),
        "currency": self.currency,
        "purpose": purpose,
        "capture": capture,
    }
    # dict keys reconstructed from the corresponding parameter names
    if payment_reference:
        request_data['payment_reference'] = payment_reference
    if order_id:
        request_data['order_id'] = order_id
    if channel_id:
        request_data['channel_id'] = channel_id
    url = "%s%s" % (self.api_endpoint, constants.NEW_SALE_ENDPOINT)
    username = self.base.get_username()
    password = self.base.get_password(username=username, request_url=url)
    response = requests.post(url, json=request_data,
                             auth=HTTPBasicAuth(username=username, password=password))
    if not self.base.verify_response(response.json()):
        raise SignatureValidationException()
    response_json = response.json()
    # return keys reconstructed from the docstring:
    # (transaction_id, payment_token_number, status)
    return (response_json.get('transaction_id'),
            response_json.get('payment_token_number'),
            response_json.get('status'))
Create new sale. :param amount: :param purpose: :param payment_reference: :param order_id: :param channel_id: :param capture: :return: tuple (transaction_id, payment_token_number, status)
def fetch(self, method, url, data=None, expected_status_code=None):
    kwargs = self.prepare_request(method, url, data)
    log.debug(json.dumps(kwargs))
    response = getattr(requests, method.lower())(url, **kwargs)
    log.debug(json.dumps(response.content))
    if response.status_code >= 400:
        response.raise_for_status()
    if expected_status_code and response.status_code != expected_status_code:
        raise NotExpectedStatusCode(self._get_error_reason(response))
    return response
Prepare the headers, encode data, call API and provide data it returns
def umi_transform(data):
    fqfiles = data["files"]
    fqfiles.extend(list(repeat("", 4 - len(fqfiles))))
    fq1, fq2, fq3, fq4 = fqfiles
    umi_dir = os.path.join(dd.get_work_dir(data), "umis")
    safe_makedir(umi_dir)
    transform = dd.get_umi_type(data)
    if not transform:
        logger.info("No UMI transform specified, assuming pre-transformed data.")
        if is_transformed(fq1):
            logger.info("%s detected as pre-transformed, passing it on unchanged." % fq1)
            data["files"] = [fq1]
            return [[data]]
        else:
            logger.error("No UMI transform was specified, but %s does not look "
                         "pre-transformed." % fq1)
            sys.exit(1)
    if file_exists(transform):
        transform_file = transform
    else:
        transform_file = get_transform_file(transform)
        if not file_exists(transform_file):
            logger.error(
                "The UMI transform can be specified as either a file or a "
                "bcbio-supported transform. Either the file %s does not exist "
                "or the transform is not supported by bcbio. Supported "
                "transforms are %s." % (dd.get_umi_type(data), ", ".join(SUPPORTED_TRANSFORMS)))
            sys.exit(1)
    out_base = dd.get_sample_name(data) + ".umitransformed.fq.gz"
    out_file = os.path.join(umi_dir, out_base)
    if file_exists(out_file):
        data["files"] = [out_file]
        return [[data]]
    cellular_barcodes = get_cellular_barcodes(data)
    if len(cellular_barcodes) > 1:
        split_option = "--separate_cb"
    else:
        split_option = ""
    if dd.get_demultiplexed(data):
        demuxed_option = "--demuxed_cb %s" % dd.get_sample_name(data)
        split_option = ""
    else:
        demuxed_option = ""
    cores = dd.get_num_cores(data)
    with open_fastq(fq1) as in_handle:
        read = next(in_handle)
        if "UMI_" in read:
            data["files"] = [out_file]
            return [[data]]
    locale_export = utils.locale_export()
    umis = _umis_cmd(data)
    cmd = ("{umis} fastqtransform {split_option} {transform_file} "
           "--cores {cores} {demuxed_option} "
           "{fq1} {fq2} {fq3} {fq4}"
           "| seqtk seq -L 20 - | gzip > {tx_out_file}")
    message = "Inserting UMI and barcode information into the read name of %s" % fq1
    with file_transaction(out_file) as tx_out_file:
        do.run(cmd.format(**locals()), message)
    data["files"] = [out_file]
    return [[data]]
transform each read by identifying the barcode and UMI for each read and putting the information in the read name
def update_binary_stats(self, label, pred):
    pred = pred.asnumpy()
    label = label.asnumpy().astype('int32')
    pred_label = numpy.argmax(pred, axis=1)
    check_label_shapes(label, pred)
    if len(numpy.unique(label)) > 2:
        raise ValueError("%s currently only supports binary classification."
                         % self.__class__.__name__)
    pred_true = (pred_label == 1)
    pred_false = 1 - pred_true
    label_true = (label == 1)
    label_false = 1 - label_true
    true_pos = (pred_true * label_true).sum()
    false_pos = (pred_true * label_false).sum()
    false_neg = (pred_false * label_true).sum()
    true_neg = (pred_false * label_false).sum()
    self.true_positives += true_pos
    self.global_true_positives += true_pos
    self.false_positives += false_pos
    self.global_false_positives += false_pos
    self.false_negatives += false_neg
    self.global_false_negatives += false_neg
    self.true_negatives += true_neg
    self.global_true_negatives += true_neg
Update various binary classification counts for a single (label, pred) pair. Parameters ---------- label : `NDArray` The labels of the data. pred : `NDArray` Predicted values.
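The four counts are plain element-wise products of boolean masks. The same bookkeeping in bare NumPy, without the NDArray wrappers or the accumulator attributes, looks like:

```python
import numpy as np


def binary_counts(label, pred):
    """Return (tp, fp, fn, tn) for binary labels and two-column scores."""
    pred_label = np.argmax(pred, axis=1)
    pred_true = (pred_label == 1)
    label_true = (label == 1)
    tp = (pred_true & label_true).sum()
    fp = (pred_true & ~label_true).sum()
    fn = (~pred_true & label_true).sum()
    tn = (~pred_true & ~label_true).sum()
    return tp, fp, fn, tn


label = np.array([1, 0, 1, 1])
pred = np.array([[0.2, 0.8], [0.3, 0.7], [0.9, 0.1], [0.4, 0.6]])
print(binary_counts(label, pred))  # (2, 1, 1, 0)
```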
def userForCert(store, cert):
    return store.findUnique(User, User.email == emailForCert(cert))
Gets the user for the given certificate.
def acquire(self) -> Connection:
    assert not self._closed
    yield from self._condition.acquire()
    while True:
        if self.ready:
            connection = self.ready.pop()
            break
        elif len(self.busy) < self.max_connections:
            connection = self._connection_factory()
            break
        else:
            yield from self._condition.wait()
    self.busy.add(connection)
    self._condition.release()
    return connection
Register and return a connection. Coroutine.
def get_color(self):
    self.get_status()
    # dict-key literals reconstructed from the attribute names;
    # the fallback mode literal was lost in extraction
    try:
        self.color = self.data['color']
        self.mode = self.data['mode']
    except TypeError:
        self.color = 0
        self.mode = ''
    return {'color': self.color, 'mode': self.mode}
Get current color.
def getData(self):
    url = self.server + self.name
    data = GitHubUser.__getDataFromURL(url)
    web = BeautifulSoup(data, "lxml")
    self.__getContributions(web)
    self.__getLocation(web)
    self.__getAvatar(web)
    self.__getNumberOfRepositories(web)
    self.__getNumberOfFollowers(web)
    self.__getBio(web)
    self.__getJoin(web)
    self.__getOrganizations(web)
Get data of the GitHub user.
def reset_input_generators(self, seed):
    seed_generator = SeedGenerator().reset(seed=seed)
    for gen in self.input_generators:
        gen.reset(next(seed_generator))
        try:
            gen.reset_input_generators(next(seed_generator))
        except AttributeError:
            pass
Helper method which explicitly resets all input generators to the derived generator. This should only ever be called for testing or debugging.
def _half_log_det(self, M):
    chol = np.linalg.cholesky(M)
    if M.ndim == 2:
        return np.sum(np.log(np.abs(np.diag(chol))))
    else:
        return np.sum(np.log(np.abs(np.diagonal(chol, axis1=-2, axis2=-1))), axis=-1)
Return log(|M|)*0.5. For positive definite matrix M of more than 2 dimensions, calculate this for the last two dimension and return a value corresponding to each element in the first few dimensions.
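For a positive definite M with Cholesky factor L, log|M| = 2·Σᵢ log Lᵢᵢ, so the helper agrees with `np.linalg.slogdet` up to the factor of one half. A standalone sketch using the batched branch (which also handles the 2-D case, since `diagonal` over the last two axes degenerates to the plain diagonal):

```python
import numpy as np


def half_log_det(M):
    # 0.5 * log|M| over the last two axes, via the Cholesky diagonal.
    chol = np.linalg.cholesky(M)
    return np.sum(np.log(np.abs(np.diagonal(chol, axis1=-2, axis2=-1))), axis=-1)


M = np.array([[4.0, 1.0], [1.0, 3.0]])
sign, logdet = np.linalg.slogdet(M)
assert np.isclose(half_log_det(M), 0.5 * logdet)

# Batched input: one half-log-determinant per leading element.
batch = np.stack([np.eye(3), 4.0 * np.eye(3)])
print(half_log_det(batch).shape)  # (2,)
```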
def _simulate_matern(D1, D2, D3, N, num_inducing, plot_sim=False):
    Q_signal = 4
    import GPy
    import numpy as np
    np.random.seed(3000)
    k = GPy.kern.Matern32(Q_signal, 1., lengthscale=(np.random.uniform(1, 6, Q_signal)), ARD=1)
    for i in range(Q_signal):
        k += GPy.kern.PeriodicExponential(1, variance=1., active_dims=[i],
                                          period=3., lower=-2, upper=6)
    t = np.c_[[np.linspace(-1, 5, N) for _ in range(Q_signal)]].T
    K = k.K(t)
    s2, s1, s3, sS = np.random.multivariate_normal(np.zeros(K.shape[0]), K, size=(4))[:, :, None]
    Y1, Y2, Y3, S1, S2, S3 = _generate_high_dimensional_output(D1, D2, D3, s1, s2, s3, sS)
    slist = [sS, s1, s2, s3]
    slist_names = ["sS", "s1", "s2", "s3"]
    Ylist = [Y1, Y2, Y3]
    if plot_sim:
        from matplotlib import pyplot as plt
        import matplotlib.cm as cm
        fig = plt.figure("MRD Simulation Data", figsize=(8, 6))
        fig.clf()
        ax = fig.add_subplot(2, 1, 1)
        for S, lab in zip(slist, slist_names):
            ax.plot(S, label=lab)
        ax.legend()
        for i, Y in enumerate(Ylist):
            ax = fig.add_subplot(2, len(Ylist), len(Ylist) + 1 + i)
            # aspect literal was lost in extraction; 'auto' is assumed
            ax.imshow(Y, aspect='auto', cmap=cm.gray)
            ax.set_title("Y{}".format(i + 1))
        plt.draw()
        plt.tight_layout()
    return slist, [S1, S2, S3], Ylist
Simulate some data drawn from a matern covariance and a periodic exponential for use in MRD demos.
def load_spatial_filters(packed=True):
    names = ("Bilinear", "Hanning", "Hamming", "Hermite", "Kaiser", "Quadric",
             "Bicubic", "CatRom", "Mitchell", "Spline16", "Spline36", "Gaussian",
             "Bessel", "Sinc", "Lanczos", "Blackman", "Nearest")
    # filename literal was lost in extraction; 'spatial-filters.npy' is assumed
    kernel = np.load(op.join(DATA_DIR, 'spatial-filters.npy'))
    if packed:
        kernel = pack_unit(kernel)
    return kernel, names
Load spatial-filters kernel Parameters ---------- packed : bool Whether or not the data should be in "packed" representation for use in GLSL code. Returns ------- kernel : array 16x1024x4 (packed float in rgba) or 16x1024 (unpacked float) 16 interpolation kernel with length 1024 each. names : tuple of strings Respective interpolation names, plus "Nearest" which does not require a filter but can still be used
def get_block_operator(self):
    block_stack = []
    for f in self.manager.iter_filters(block_end=True):
        if f is None:
            block_stack.pop()
            continue
        # operator-name literals were lost in extraction; reconstructed
        if f.type in ('or', 'and', 'not'):
            block_stack.append(f.type)
        if f == self:
            break
    return block_stack[-1]
Determine the immediate parent boolean operator for a filter
def pcolor(text, color, indent=0):
    esc_dict = {
        "black": 30, "red": 31, "green": 32, "yellow": 33, "blue": 34,
        "magenta": 35, "cyan": 36, "white": 37, "none": -1,
    }
    if not isinstance(text, str):
        raise RuntimeError("Argument `text` is not valid")
    if not isinstance(color, str):
        raise RuntimeError("Argument `color` is not valid")
    if not isinstance(indent, int):
        raise RuntimeError("Argument `indent` is not valid")
    color = color.lower()
    if color not in esc_dict:
        raise ValueError("Unknown color {color}".format(color=color))
    if esc_dict[color] != -1:
        return "\033[{color_code}m{indent}{text}\033[0m".format(
            color_code=esc_dict[color], indent=" " * indent, text=text
        )
    return "{indent}{text}".format(indent=" " * indent, text=text)
r""" Return a string that once printed is colorized. :param text: Text to colorize :type text: string :param color: Color to use, one of :code:`'black'`, :code:`'red'`, :code:`'green'`, :code:`'yellow'`, :code:`'blue'`, :code:`'magenta'`, :code:`'cyan'`, :code:`'white'` or :code:`'none'` (case insensitive) :type color: string :param indent: Number of spaces to prefix the output with :type indent: integer :rtype: string :raises: * RuntimeError (Argument \`color\` is not valid) * RuntimeError (Argument \`indent\` is not valid) * RuntimeError (Argument \`text\` is not valid) * ValueError (Unknown color *[color]*)
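The wrapping itself can be checked directly: colorized output is the SGR escape code, the indent, the text, then the reset sequence. A minimal standalone sketch of that core (validation and the `"none"` passthrough omitted):

```python
def colorize(text, code, indent=0):
    # Wrap text in an ANSI SGR color code and the reset sequence (\033[0m).
    return "\033[{}m{}{}\033[0m".format(code, " " * indent, text)


out = colorize("error", 31, indent=2)  # 31 is the SGR code for red
print(repr(out))  # '\x1b[31m  error\x1b[0m'
```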
def clean(self):
    if self._initialized:
        logger.info("brace yourselves, removing %r", self.path)
        shutil.rmtree(self.path)
remove the directory we operated on :return: None