def escape_string(self, value):
    if isinstance(value, EscapedString):
        return value.formatted(self._escaper)
    return self._escaper(value)
Escape the <, >, ^, and & special characters reserved by Windows.

Args:
    value (str/EscapedString): String or already escaped string.

Returns:
    str: The value escaped for Windows.
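The escaper injected as `self._escaper` is not shown in this snippet; a minimal sketch of what such a Windows escaper might look like, assuming caret-escaping of the four reserved characters (the function name and constant are illustrative, not from the source):

```python
# The four cmd.exe metacharacters named in the docstring above.
WINDOWS_RESERVED = "<>^&"

def escape_windows(value):
    """Return `value` with each cmd.exe metacharacter prefixed by a caret."""
    return "".join("^" + ch if ch in WINDOWS_RESERVED else ch for ch in value)
```

A string that is already escaped must not be run through this twice, which is exactly what the `EscapedString` check in `escape_string` guards against.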
def get_text_stream(stream="stdout", encoding=None):
    stream_map = {"stdin": sys.stdin, "stdout": sys.stdout, "stderr": sys.stderr}
    if os.name == "nt" or sys.platform.startswith("win"):
        from ._winconsole import _get_windows_console_stream, _wrap_std_stream
    else:
        _get_windows_console_stream = lambda *args: None
        _wrap_std_stream = lambda *args: None
    if six.PY2 and stream != "stdin":
        _wrap_std_stream(stream)
    sys_stream = stream_map[stream]
    windows_console = _get_windows_console_stream(sys_stream, encoding, None)
    if windows_console is not None:
        return windows_console
    return get_wrapped_stream(sys_stream, encoding)
Retrieve a unicode stream wrapper around **sys.stdout** or **sys.stderr**.

:param str stream: The name of the stream to wrap from the :mod:`sys` module.
:param str encoding: An optional encoding to use.
:return: A new :class:`~vistir.misc.StreamWrapper` instance around the stream
:rtype: `vistir.misc.StreamWrapper`
def find_span_binsearch(degree, knot_vector, num_ctrlpts, knot, **kwargs):
    tol = kwargs.get('tol', 10e-6)  # floating-point comparison tolerance
    n = num_ctrlpts - 1
    if abs(knot_vector[n + 1] - knot) <= tol:
        return n
    low = degree
    high = num_ctrlpts
    mid = (low + high) / 2
    mid = int(round(mid + tol))
    while (knot < knot_vector[mid]) or (knot >= knot_vector[mid + 1]):
        if knot < knot_vector[mid]:
            high = mid
        else:
            low = mid
        mid = int((low + high) / 2)
    return mid
Finds the span of the knot over the input knot vector using binary search.

Implementation of Algorithm A2.1 from The NURBS Book by Piegl & Tiller. The NURBS Book states that the knot span index always starts from zero, i.e. for a knot vector [0, 0, 1, 1]; if FindSpan returns 1, then the knot is between the interval [0, 1).

:param degree: degree, :math:`p`
:type degree: int
:param knot_vector: knot vector, :math:`U`
:type knot_vector: list, tuple
:param num_ctrlpts: number of control points, :math:`n + 1`
:type num_ctrlpts: int
:param knot: knot or parameter, :math:`u`
:type knot: float
:return: knot span
:rtype: int
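The function above depends on the caller's tolerance-based rounding; a self-contained sketch of the same Algorithm A2.1 binary search, using exact integer division in place of the float tolerance (names are illustrative):

```python
def find_span(degree, knot_vector, num_ctrlpts, knot):
    """Return the index i such that knot_vector[i] <= knot < knot_vector[i+1]."""
    n = num_ctrlpts - 1
    # Special case: knot at (or past) the end of the domain maps to the last span.
    if knot >= knot_vector[n + 1]:
        return n
    low, high = degree, num_ctrlpts
    mid = (low + high) // 2
    while knot < knot_vector[mid] or knot >= knot_vector[mid + 1]:
        if knot < knot_vector[mid]:
            high = mid
        else:
            low = mid
        mid = (low + high) // 2
    return mid
```

For the clamped knot vector `[0, 0, 0, 1, 2, 3, 4, 4, 5, 5, 5]` with degree 2 and 8 control points, a parameter of 2.5 falls in the half-open interval [2, 3), so the span index is 4.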
def _describe_tree(self, prefix, with_transform):
    # String literals below were lost in extraction and are reconstructed.
    extra = ': "%s"' % self.name if self.name is not None else ''
    if with_transform:
        extra += (' [%s]' % self.transform.__class__.__name__)
    output = ''
    if len(prefix) > 0:
        output += prefix[:-3]
        output += '  +--'
    output += '%s%s\n' % (self.__class__.__name__, extra)
    n_children = len(self.children)
    for ii, child in enumerate(self.children):
        sub_prefix = prefix + ('   ' if ii + 1 == n_children else '  |')
        output += child._describe_tree(sub_prefix, with_transform)
    return output
Helper function to actually construct the tree
def from_pb(cls, instance_pb, client):
    match = _INSTANCE_NAME_RE.match(instance_pb.name)
    if match is None:
        raise ValueError(
            "Instance protobuf name was not in the expected format.",
            instance_pb.name,
        )
    if match.group("project") != client.project:
        raise ValueError(
            "Project ID on instance does not match the project ID on the client"
        )
    instance_id = match.group("instance_id")
    configuration_name = instance_pb.config
    result = cls(instance_id, client, configuration_name)
    result._update_from_pb(instance_pb)
    return result
Creates an instance from a protobuf.

:type instance_pb: :class:`google.spanner.v2.spanner_instance_admin_pb2.Instance`
:param instance_pb: An instance protobuf object.
:type client: :class:`~google.cloud.spanner_v1.client.Client`
:param client: The client that owns the instance.
:rtype: :class:`Instance`
:returns: The instance parsed from the protobuf response.
:raises ValueError: if the instance name does not match ``projects/{project}/instances/{instance_id}`` or if the parsed project ID does not match the project ID on the client.
def labels_to_matrix(image, mask, target_labels=None, missing_val=np.nan):
    if (not isinstance(image, iio.ANTsImage)) or (not isinstance(mask, iio.ANTsImage)):
        raise ValueError('image and mask must be ANTsImage types')
    vec = image[mask > 0]
    if target_labels is not None:
        the_labels = target_labels
    else:
        the_labels = np.sort(np.unique(vec))
    n_labels = len(the_labels)
    labels = np.zeros((n_labels, len(vec)))
    for i in range(n_labels):
        lab = float(the_labels[i])
        filler = (vec == lab).astype('float')
        if np.sum(filler) == 0:
            filler = np.asarray([np.nan] * len(vec))
        labels[i, :] = filler
    return labels
Convert a labeled image to an n x m binary matrix where n = number of labels and m = number of voxels. Only includes values inside the provided mask while including background (image == 0) for consistency with timeseries2matrix and other image-to-matrix operations.

ANTsR function: `labels2matrix`

Arguments
---------
image : ANTsImage
    input label image
mask : ANTsImage
    defines domain of interest
target_labels : list/tuple
    defines target regions to be returned. If a target label does not exist in the input label image, the matrix will contain a constant value of missing_val (default np.nan) in that row.
missing_val : scalar
    value to use for missing label values

Returns
-------
ndarray

Example
-------
>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16')).resample_image((60,60),1,0)
>>> mask = ants.get_mask(fi)
>>> labs = ants.kmeans_segmentation(fi,3)['segmentation']
>>> labmat = ants.labels_to_matrix(labs, mask)
def toggle_hscrollbar(self, checked):
    self.parent_widget.sig_option_changed.emit('show_hscrollbar', checked)
    self.show_hscrollbar = checked
    self.header().setStretchLastSection(not checked)
    self.header().setHorizontalScrollMode(QAbstractItemView.ScrollPerPixel)
    try:  # Qt 5
        self.header().setSectionResizeMode(QHeaderView.ResizeToContents)
    except AttributeError:  # Qt 4 fallback
        self.header().setResizeMode(QHeaderView.ResizeToContents)
Toggle horizontal scrollbar
def select(self):
    if self.GUI is None:
        return
    self.GUI.current_fit = self
    if self.tmax is not None and self.tmin is not None:
        self.GUI.update_bounds_boxes()
    if self.PCA_type is not None:
        self.GUI.update_PCA_box()
    try:
        self.GUI.zijplot
    except AttributeError:
        self.GUI.draw_figure(self.GUI.s)
    self.GUI.fit_box.SetStringSelection(self.name)
    self.GUI.get_new_PCA_parameters(-1)
Makes this fit the selected fit on its parent GUI (Note: may be moved into GUI soon)
def disease(self, identifier=None, ref_id=None, ref_type=None, name=None,
            acronym=None, description=None, entry_name=None, limit=None,
            as_df=False):
    q = self.session.query(models.Disease)
    model_queries_config = (
        (identifier, models.Disease.identifier),
        (ref_id, models.Disease.ref_id),
        (ref_type, models.Disease.ref_type),
        (name, models.Disease.name),
        (acronym, models.Disease.acronym),
        (description, models.Disease.description)
    )
    q = self.get_model_queries(q, model_queries_config)
    if entry_name:
        q = q.session.query(models.Disease).join(models.DiseaseComment).join(models.Entry)
        if isinstance(entry_name, str):
            q = q.filter(models.Entry.name == entry_name)
        elif isinstance(entry_name, Iterable):
            q = q.filter(models.Entry.name.in_(entry_name))
    return self._limit_and_df(q, limit, as_df)
Method to query :class:`.models.Disease` objects in database

:param identifier: disease UniProt identifier(s)
:type identifier: str or tuple(str) or None
:param ref_id: identifier(s) of referenced database
:type ref_id: str or tuple(str) or None
:param ref_type: database name(s)
:type ref_type: str or tuple(str) or None
:param name: disease name(s)
:type name: str or tuple(str) or None
:param acronym: disease acronym(s)
:type acronym: str or tuple(str) or None
:param description: disease description(s)
:type description: str or tuple(str) or None
:param entry_name: name(s) in :class:`.models.Entry`
:type entry_name: str or tuple(str) or None
:param limit:
    - if `isinstance(limit, int)` -> limit
    - if `isinstance(limit, tuple)` -> format := tuple(page_number, results_per_page)
    - if limit is None -> all results
:type limit: int or tuple(int) or None
:param bool as_df: if `True` results are returned as :class:`pandas.DataFrame`
:return:
    - if `as_df == False` -> list(:class:`.models.Disease`)
    - if `as_df == True` -> :class:`pandas.DataFrame`
:rtype: list(:class:`.models.Disease`) or :class:`pandas.DataFrame`
def diff(name):
    # Body reconstructed from the surviving fragments ('Kind', 'Path',
    # 'Unknown changes detected...'); literals were lost in extraction.
    changes = _client_wrapper('diff', name)
    kind_map = {0: 'Changed', 1: 'Added', 2: 'Deleted'}
    ret = {}
    for change in changes:
        key = kind_map.get(change['Kind'], 'Unknown')
        ret.setdefault(key, []).append(change['Path'])
    if 'Unknown' in ret:
        log.error(
            'Unknown changes detected in docker.diff of container %s. '
            'This is probably due to a change in the Docker API. Please '
            'report this to the SaltStack developers', name
        )
    return ret
Get information on changes made to container's filesystem since it was created. Equivalent to running the ``docker diff`` Docker CLI command.

name
    Container name or ID

**RETURN DATA**

A dictionary containing any of the following keys:

- ``Added`` - A list of paths that were added.
- ``Changed`` - A list of paths that were changed.
- ``Deleted`` - A list of paths that were deleted.

These keys will only be present if there were changes, so if the container has no differences the return dict will be empty.

CLI Example:

.. code-block:: bash

    salt myminion docker.diff mycontainer
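The surviving fragments suggest the function groups Docker's per-path change records by their numeric `Kind` code; a standalone sketch of that grouping step, assuming the conventional 0 = changed, 1 = added, 2 = deleted codes (the helper name and exact mapping are assumptions, not confirmed by the source):

```python
KIND_MAP = {0: "Changed", 1: "Added", 2: "Deleted"}

def group_changes(changes):
    """Group [{'Kind': int, 'Path': str}, ...] records into Added/Changed/Deleted lists."""
    ret = {}
    for change in changes:
        key = KIND_MAP.get(change["Kind"], "Unknown")
        ret.setdefault(key, []).append(change["Path"])
    return ret
```

Because `setdefault` only creates a key on first use, containers with no changes of a given kind simply omit that key, matching the docstring's "keys will only be present if there were changes".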
def _try_satellite6_configuration(config):
    # Config-section names, log messages, and hostnames below were lost in
    # extraction and are reconstructed from surrounding context.
    try:
        rhsm_config = _importInitConfig()
        logger.debug('Trying to autoconfigure from RHSM')
        cert = open(rhsmCertificate.certpath(), 'r').read()
        key = open(rhsmCertificate.keypath(), 'r').read()
        rhsm = rhsmCertificate(key, cert)
        is_satellite = False
        logger.debug('Found cert and key')
        rhsm.getConsumerId()
        logger.debug('Found consumer certificate')
        rhsm_hostname = rhsm_config.get('server', 'hostname')
        rhsm_hostport = rhsm_config.get('server', 'port')
        rhsm_proxy_hostname = rhsm_config.get('server', 'proxy_hostname').strip()
        rhsm_proxy_port = rhsm_config.get('server', 'proxy_port').strip()
        rhsm_proxy_user = rhsm_config.get('server', 'proxy_user').strip()
        rhsm_proxy_pass = rhsm_config.get('server', 'proxy_password').strip()
        proxy = None
        if rhsm_proxy_hostname != "":
            logger.debug("Found rhsm_proxy_hostname %s", rhsm_proxy_hostname)
            proxy = "http://"
            if rhsm_proxy_user != "" and rhsm_proxy_pass != "":
                logger.debug("Found user and password for rhsm_proxy")
                proxy = proxy + rhsm_proxy_user + ":" + rhsm_proxy_pass + "@"
            proxy = proxy + rhsm_proxy_hostname + ':' + rhsm_proxy_port
            logger.debug("RHSM Proxy: %s", proxy)
        logger.debug("Found %sHost: %s, Port: %s",
                     ('' if _is_rhn_or_rhsm(rhsm_hostname) else 'Satellite 6 '),
                     rhsm_hostname, rhsm_hostport)
        rhsm_ca = rhsm_config.get('rhsm', 'repo_ca_cert')
        logger.debug("Found CA: %s", rhsm_ca)
        logger.debug("Setting authmethod to CERT")
        config.authmethod = 'CERT'
        if _is_rhn_or_rhsm(rhsm_hostname):
            if config.legacy_upload:
                logger.debug("Connected to Red Hat Directly, using cert-api")
                rhsm_hostname = 'cert-api.access.redhat.com'
            else:
                logger.debug("Connected to Red Hat Directly, using cloud.redhat.com")
                rhsm_hostname = 'cloud.redhat.com'
            rhsm_ca = None
        else:
            rhsm_hostname = rhsm_hostname + ':' + rhsm_hostport + '/redhat_access'
            is_satellite = True
        logger.debug("Trying to set auto_configuration")
        set_auto_configuration(config, rhsm_hostname, rhsm_ca, proxy, is_satellite)
        return True
    except Exception as e:
        logger.debug(e)
        logger.debug('Could not autoconfigure')
        return False
Try to autoconfigure for Satellite 6
def firstId(self) -> BaseReference:
    if self.childIds is not None:
        if len(self.childIds) > 0:
            return self.childIds[0]
        return None
    else:
        raise NotImplementedError
First child's id of current TextualNode
def choose_meas_file(self, event=None):
    dlg = wx.FileDialog(
        self, message="Please choose a measurement file",
        defaultDir=self.WD,
        defaultFile="measurements.txt",
        wildcard="measurement files (*.magic,*.txt)|*.magic;*.txt",
        style=wx.FD_OPEN | wx.FD_CHANGE_DIR
    )
    if self.show_dlg(dlg) == wx.ID_OK:
        meas_file = dlg.GetPath()
        dlg.Destroy()
    else:
        meas_file = ''  # original string literal lost in extraction
        self.data_model = 2.5
        dlg.Destroy()
    return meas_file
Opens a dialog allowing the user to pick a measurement file
def _parse_raw_data(self):
    # Log-message format strings were lost in extraction; the ones below
    # are reconstructed placeholders.
    if self._START_OF_FRAME in self._raw and self._END_OF_FRAME in self._raw:
        while self._raw[0] != self._START_OF_FRAME and len(self._raw) > 0:
            self._raw.pop(0)
        if self._raw[0] == self._START_OF_FRAME:
            self._raw.pop(0)
            eof_index = self._raw.index(self._END_OF_FRAME)
            raw_message = self._raw[:eof_index]
            self._raw = self._raw[eof_index:]
            logger.debug('raw message: {}'.format(raw_message))
            message = self._remove_esc_chars(raw_message)
            logger.debug('message with escapes removed: {}'.format(message))
            expected_checksum = (message[-1] << 8) | message[-2]
            logger.debug('expected checksum: {}'.format(expected_checksum))
            message = message[:-2]
            logger.debug('message without checksum: {}'.format(message))
            sum1, sum2 = self._fletcher16_checksum(message)
            calculated_checksum = (sum2 << 8) | sum1
            if expected_checksum == calculated_checksum:
                message = message[2:]
                logger.debug('valid message: {}'.format(message))
                self._messages.append(message)
            else:
                logger.warning('invalid message discarded: {}'.format(message))
                logger.debug('expected {}, calculated {}'.format(expected_checksum, calculated_checksum))
    try:
        while self._raw[0] != self._START_OF_FRAME and len(self._raw) > 0:
            self._raw.pop(0)
    except IndexError:
        pass
Parses the incoming data and determines if it is valid. Valid data gets placed into self._messages :return: None
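The `_fletcher16_checksum` helper used above is not shown; a standard Fletcher-16 over the message bytes would look like this, with the two modulo-255 running sums matching the `(sum2 << 8) | sum1` packing in the parser:

```python
def fletcher16(data):
    """Return the (sum1, sum2) running sums of the Fletcher-16 checksum."""
    sum1 = sum2 = 0
    for byte in data:
        sum1 = (sum1 + byte) % 255
        sum2 = (sum2 + sum1) % 255
    return sum1, sum2
```

For the classic test vector ``b"abcde"`` the packed checksum `(sum2 << 8) | sum1` is 0xC8F0.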
def _update_style(self):
    try:
        self._style = get_style_by_name(self._pygments_style)
    except ClassNotFound:
        if self._pygments_style == 'qt':
            from pyqode.core.styles import QtStyle
            self._style = QtStyle
        elif self._pygments_style == 'darcula':
            from pyqode.core.styles import DarculaStyle
            self._style = DarculaStyle
        else:
            self._style = get_style_by_name('default')
            self._pygments_style = 'default'
    self._clear_caches()
Sets the style to the specified Pygments style.
def serial_wire_viewer(jlink_serial, device):
    buf = StringIO.StringIO()
    jlink = pylink.JLink(log=buf.write, detailed_log=buf.write)
    jlink.open(serial_no=jlink_serial)
    jlink.set_tif(pylink.enums.JLinkInterfaces.SWD)
    jlink.connect(device, verbose=True)
    jlink.coresight_configure()
    jlink.set_reset_strategy(pylink.enums.JLinkResetStrategyCortexM3.RESETPIN)
    jlink.reset()
    jlink.halt()
    # Banner strings lost in extraction; reconstructed placeholders.
    sys.stdout.write('Serial Wire Viewer\n')
    sys.stdout.write('Press Ctrl-C to Exit\n')
    sys.stdout.write('Reading data from port 0:\n\n')
    jlink.reset(ms=10, halt=False)
    try:
        while True:
            if jlink.register_read(0x0) != 0x05:
                continue
            offset = jlink.register_read(0x1)
            handle, ptr, num_bytes = jlink.memory_read32(offset, 3)
            read = ''.join(map(chr, jlink.memory_read8(ptr, num_bytes)))
            if num_bytes == 0:
                time.sleep(1)
                continue
            jlink.register_write(0x0, 0)
            jlink.step(thumb=True)
            jlink.restart(2, skip_breakpoints=True)
            sys.stdout.write(read)
            sys.stdout.flush()
    except KeyboardInterrupt:
        pass
    sys.stdout.write('\n')
    return 0
Implements a Serial Wire Viewer (SWV).

A Serial Wire Viewer (SWV) allows us to implement real-time logging of output from a connected device over Serial Wire Output (SWO).

Args:
    jlink_serial (str): the J-Link serial number
    device (str): the target CPU

Returns:
    Always returns ``0``.

Raises:
    JLinkException: on error
def autodetect_url():
    for url in ["http://routerlogin.net:5000", "https://routerlogin.net",
                "http://routerlogin.net"]:
        try:
            r = requests.get(url + "/soap/server_sa/",
                             headers=_get_soap_headers("Test:1", "test"),
                             verify=False)
            if r.status_code == 200:
                return url
        except requests.exceptions.RequestException:
            pass
    return None
Try to autodetect the base URL of the router SOAP service. Returns None if it can't be found.
def getDissemination(self, pid, sdefPid, method, method_params=None):
    if method_params is None:
        method_params = {}
    # URI template reconstructed; the literal was lost in extraction.
    uri = 'objects/%(pid)s/methods/%(sdefpid)s/%(method)s' % \
        {'pid': pid, 'sdefpid': sdefPid, 'method': method}
    return self.get(uri, params=method_params)
Get a service dissemination.

.. NOTE::

    This method is not available in the REST API until Fedora 3.3.

:param pid: object pid
:param sdefPid: service definition pid
:param method: service method name
:param method_params: method parameters
:rtype: :class:`requests.models.Response`
def replica_set_link(rel, repl_id=None, member_id=None, self_rel=False):
    repls_href = '/replica_sets'  # base href; exact literal lost in extraction
    link = _REPLICA_SET_LINKS[rel].copy()
    link['href'] = link['href'].format(**locals())
    link['rel'] = 'self' if self_rel else rel
    return link
Helper for getting a ReplicaSet link document, given a rel.
### Input: Helper for getting a ReplicaSet link document, given a rel. ### Response: def replica_set_link(rel, repl_id=None, member_id=None, self_rel=False): repls_href = link = _REPLICA_SET_LINKS[rel].copy() link[] = link[].format(**locals()) link[] = if self_rel else rel return link
def update_schema(self, catalog="hypermap"): schema_url = "{0}/solr/{1}/schema".format(SEARCH_URL, catalog) print schema_url location_rpt_quad_5m_payload = { "add-field-type": { "name": "location_rpt_quad_5m", "class": "solr.SpatialRecursivePrefixTreeFieldType", "geo": False, "worldBounds": "ENVELOPE(-180, 180, 180, -180)", "prefixTree": "packedQuad", "distErrPct": "0.025", "maxDistErr": "0.001", "distanceUnits": "degrees" } } requests.post(schema_url, json=location_rpt_quad_5m_payload) text_ngrm_payload = { "add-field-type": { "name": "text_ngrm", "class": "solr.TextField", "positionIncrementGap": "100", "indexAnalyzer": { "tokenizer": { "class": "solr.WhitespaceTokenizerFactory" }, "filters": [ { "class": "solr.NGramFilterFactory", "minGramSize": "1", "maxGramSize": "50" }, { "class": "solr.LowerCaseFilterFactory" } ] }, "queryAnalyzer": { "tokenizer": { "class": "solr.WhitespaceTokenizerFactory" }, "filters": [ { "class": "solr.LowerCaseFilterFactory", } ] } } } requests.post(schema_url, json=text_ngrm_payload) fields = [ {"name": "abstract", "type": "string"}, {"name": "abstract_txt", "type": "text_ngrm"}, {"name": "area", "type": "pdouble"}, {"name": "availability", "type": "string"}, {"name": "bbox", "type": "location_rpt_quad_5m"}, {"name": "domain_name", "type": "string"}, {"name": "is_public", "type": "boolean"}, {"name": "is_valid", "type": "boolean"}, {"name": "keywords", "type": "string", "multiValued": True}, {"name": "last_status", "type": "boolean"}, {"name": "layer_category", "type": "string"}, {"name": "layer_date", "type": "pdate", "docValues": True}, {"name": "layer_datetype", "type": "string"}, {"name": "layer_id", "type": "plong"}, {"name": "layer_originator", "type": "string"}, {"name": "layer_originator_txt", "type": "text_ngrm"}, {"name": "layer_username", "type": "string"}, {"name": "layer_username_txt", "type": "text_ngrm"}, {"name": "location", "type": "string"}, {"name": "max_x", "type": "pdouble"}, {"name": "max_y", "type": "pdouble"}, 
{"name": "min_x", "type": "pdouble"}, {"name": "min_y", "type": "pdouble"}, {"name": "name", "type": "string"}, {"name": "recent_reliability", "type": "pdouble"}, {"name": "reliability", "type": "pdouble"}, {"name": "service_id", "type": "plong"}, {"name": "service_type", "type": "string"}, {"name": "srs", "type": "string", "multiValued": True}, {"name": "tile_url", "type": "string"}, {"name": "title", "type": "string"}, {"name": "title_txt", "type": "text_ngrm"}, {"name": "type", "type": "string"}, {"name": "url", "type": "string"}, {"name": "uuid", "type": "string", "required": True}, {"name": "centroid_y", "type": "pdouble"}, {"name": "centroid_x", "type": "pdouble"}, ] copy_fields = [ {"source": "*", "dest": "_text_"}, {"source": "title", "dest": "title_txt"}, {"source": "abstract", "dest": "abstract_txt"}, {"source": "layer_originator", "dest": "layer_originator_txt"}, {"source": "layer_username", "dest": "layer_username_txt"}, ] headers = { "Content-type": "application/json" } for field in fields: data = { "add-field": field } requests.post(schema_url, json=data, headers=headers) for field in copy_fields: data = { "add-copy-field": field } print data requests.post(schema_url, json=data, headers=headers)
Set the mapping in Solr. :param catalog: core :return:
### Input: set the mapping in solr. :param catalog: core :return: ### Response: def update_schema(self, catalog="hypermap"): schema_url = "{0}/solr/{1}/schema".format(SEARCH_URL, catalog) print schema_url location_rpt_quad_5m_payload = { "add-field-type": { "name": "location_rpt_quad_5m", "class": "solr.SpatialRecursivePrefixTreeFieldType", "geo": False, "worldBounds": "ENVELOPE(-180, 180, 180, -180)", "prefixTree": "packedQuad", "distErrPct": "0.025", "maxDistErr": "0.001", "distanceUnits": "degrees" } } requests.post(schema_url, json=location_rpt_quad_5m_payload) text_ngrm_payload = { "add-field-type": { "name": "text_ngrm", "class": "solr.TextField", "positionIncrementGap": "100", "indexAnalyzer": { "tokenizer": { "class": "solr.WhitespaceTokenizerFactory" }, "filters": [ { "class": "solr.NGramFilterFactory", "minGramSize": "1", "maxGramSize": "50" }, { "class": "solr.LowerCaseFilterFactory" } ] }, "queryAnalyzer": { "tokenizer": { "class": "solr.WhitespaceTokenizerFactory" }, "filters": [ { "class": "solr.LowerCaseFilterFactory", } ] } } } requests.post(schema_url, json=text_ngrm_payload) fields = [ {"name": "abstract", "type": "string"}, {"name": "abstract_txt", "type": "text_ngrm"}, {"name": "area", "type": "pdouble"}, {"name": "availability", "type": "string"}, {"name": "bbox", "type": "location_rpt_quad_5m"}, {"name": "domain_name", "type": "string"}, {"name": "is_public", "type": "boolean"}, {"name": "is_valid", "type": "boolean"}, {"name": "keywords", "type": "string", "multiValued": True}, {"name": "last_status", "type": "boolean"}, {"name": "layer_category", "type": "string"}, {"name": "layer_date", "type": "pdate", "docValues": True}, {"name": "layer_datetype", "type": "string"}, {"name": "layer_id", "type": "plong"}, {"name": "layer_originator", "type": "string"}, {"name": "layer_originator_txt", "type": "text_ngrm"}, {"name": "layer_username", "type": "string"}, {"name": "layer_username_txt", "type": "text_ngrm"}, {"name": "location", "type": 
"string"}, {"name": "max_x", "type": "pdouble"}, {"name": "max_y", "type": "pdouble"}, {"name": "min_x", "type": "pdouble"}, {"name": "min_y", "type": "pdouble"}, {"name": "name", "type": "string"}, {"name": "recent_reliability", "type": "pdouble"}, {"name": "reliability", "type": "pdouble"}, {"name": "service_id", "type": "plong"}, {"name": "service_type", "type": "string"}, {"name": "srs", "type": "string", "multiValued": True}, {"name": "tile_url", "type": "string"}, {"name": "title", "type": "string"}, {"name": "title_txt", "type": "text_ngrm"}, {"name": "type", "type": "string"}, {"name": "url", "type": "string"}, {"name": "uuid", "type": "string", "required": True}, {"name": "centroid_y", "type": "pdouble"}, {"name": "centroid_x", "type": "pdouble"}, ] copy_fields = [ {"source": "*", "dest": "_text_"}, {"source": "title", "dest": "title_txt"}, {"source": "abstract", "dest": "abstract_txt"}, {"source": "layer_originator", "dest": "layer_originator_txt"}, {"source": "layer_username", "dest": "layer_username_txt"}, ] headers = { "Content-type": "application/json" } for field in fields: data = { "add-field": field } requests.post(schema_url, json=data, headers=headers) for field in copy_fields: data = { "add-copy-field": field } print data requests.post(schema_url, json=data, headers=headers)
def pushAwayFrom(self, otherPositions, rng): positions = [self.choices.index(x) for x in otherPositions] positionCounts = [0] * len(self.choices) for pos in positions: positionCounts[pos] += 1 self._positionIdx = numpy.array(positionCounts).argmin() self._bestPositionIdx = self._positionIdx
See comments in base class.
### Input: See comments in base class. ### Response: def pushAwayFrom(self, otherPositions, rng): positions = [self.choices.index(x) for x in otherPositions] positionCounts = [0] * len(self.choices) for pos in positions: positionCounts[pos] += 1 self._positionIdx = numpy.array(positionCounts).argmin() self._bestPositionIdx = self._positionIdx
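The selection step in `pushAwayFrom` above (count how often each choice is already occupied, then take the argmin) can be sketched without numpy. `least_used_index` is an illustrative name, not from the source:

```python
def least_used_index(choices, other_positions):
    """Index of the choice that appears least often among other_positions."""
    counts = [0] * len(choices)
    for pos in other_positions:
        counts[choices.index(pos)] += 1
    # list.index(min(...)) picks the first minimum, like numpy argmin
    return counts.index(min(counts))
```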
def on_unicode_checkbox(self, w=None, state=False): logging.debug("unicode State is %s", state) self.controller.smooth_graph_mode = state if state: self.hline = urwid.AttrWrap( urwid.SolidFill(u), ) else: self.hline = urwid.AttrWrap(urwid.SolidFill(u), ) for graph in self.graphs.values(): graph.set_smooth_colors(state) self.show_graphs()
Enable smooth edges if utf-8 is supported
### Input: Enable smooth edges if utf-8 is supported ### Response: def on_unicode_checkbox(self, w=None, state=False): logging.debug("unicode State is %s", state) self.controller.smooth_graph_mode = state if state: self.hline = urwid.AttrWrap( urwid.SolidFill(u), ) else: self.hline = urwid.AttrWrap(urwid.SolidFill(u), ) for graph in self.graphs.values(): graph.set_smooth_colors(state) self.show_graphs()
def get_best_auth(self, family, address, dispno, types = ( b"MIT-MAGIC-COOKIE-1", )): num = str(dispno).encode() matches = {} for efam, eaddr, enum, ename, edata in self.entries: if efam == family and eaddr == address and num == enum: matches[ename] = edata for t in types: try: return (t, matches[t]) except KeyError: pass raise error.XNoAuthError((family, address, dispno))
Find an authentication entry matching FAMILY, ADDRESS and DISPNO. The name of the auth scheme must match one of the names in TYPES. If several entries match, the first scheme in TYPES will be chosen. If an entry is found, the tuple (name, data) is returned, otherwise XNoAuthError is raised.
### Input: Find an authentication entry matching FAMILY, ADDRESS and DISPNO. The name of the auth scheme must match one of the names in TYPES. If several entries match, the first scheme in TYPES will be chosen. If an entry is found, the tuple (name, data) is returned, otherwise XNoAuthError is raised. ### Response: def get_best_auth(self, family, address, dispno, types = ( b"MIT-MAGIC-COOKIE-1", )): num = str(dispno).encode() matches = {} for efam, eaddr, enum, ename, edata in self.entries: if efam == family and eaddr == address and num == enum: matches[ename] = edata for t in types: try: return (t, matches[t]) except KeyError: pass raise error.XNoAuthError((family, address, dispno))
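The two-phase match in `get_best_auth` above can be shown standalone: first collect every entry whose (family, address, display number) triple matches, then honour the preference order given by `types` rather than file order. Entry tuples follow the same (family, address, num, name, data) layout as in the source; `best_auth` is an illustrative free function, and KeyError stands in for XNoAuthError:

```python
def best_auth(entries, family, address, dispno, types=(b"MIT-MAGIC-COOKIE-1",)):
    num = str(dispno).encode()
    # Phase 1: all entries matching the (family, address, dispno) triple.
    matches = {ename: edata
               for efam, eaddr, enum, ename, edata in entries
               if efam == family and eaddr == address and enum == num}
    # Phase 2: the order of `types` decides which scheme wins.
    for t in types:
        if t in matches:
            return (t, matches[t])
    raise KeyError((family, address, dispno))  # stand-in for XNoAuthError
```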
def parse_args(args): from argparse import ArgumentParser description = ( ) parser = ArgumentParser(description=description) parser.add_argument(, action=, version=__version__) parser.add_argument( , , default=DEFAULT_CONFIG, help=.format(DEFAULT_CONFIG) ) parser.add_argument( , , default=[], nargs=, help= ) parser.add_argument( , , help=. format(CONFIG[__script__][]) ) parser.add_argument( , , help=. format(CONFIG[__script__][]) ) parser.add_argument( , , action=, default=None, help= ) parser.add_argument( , , help= ) parser.add_argument( , action=, default=None, help= ) parser.add_argument( , action=, default=None, help= ) parser.add_argument( , , action=, default=None, help= ) return parser.parse_args(args)
Parse args from the command line by creating an argument parser instance and processing it. :param args: Command line arguments list.
### Input: Parse args from command line by creating argument parser instance and process it. :param args: Command line arguments list. ### Response: def parse_args(args): from argparse import ArgumentParser description = ( ) parser = ArgumentParser(description=description) parser.add_argument(, action=, version=__version__) parser.add_argument( , , default=DEFAULT_CONFIG, help=.format(DEFAULT_CONFIG) ) parser.add_argument( , , default=[], nargs=, help= ) parser.add_argument( , , help=. format(CONFIG[__script__][]) ) parser.add_argument( , , help=. format(CONFIG[__script__][]) ) parser.add_argument( , , action=, default=None, help= ) parser.add_argument( , , help= ) parser.add_argument( , action=, default=None, help= ) parser.add_argument( , action=, default=None, help= ) parser.add_argument( , , action=, default=None, help= ) return parser.parse_args(args)
def _wrap_callback_parse_link_event(subscription, on_data, message): if message.type == message.DATA: if message.data.type == yamcs_pb2.LINK_EVENT: link_message = getattr(message.data, ) link_event = LinkEvent(link_message) subscription._process(link_event) if on_data: on_data(link_event)
Wraps a user callback to parse LinkEvents from a WebSocket data message
### Input: Wraps a user callback to parse LinkEvents from a WebSocket data message ### Response: def _wrap_callback_parse_link_event(subscription, on_data, message): if message.type == message.DATA: if message.data.type == yamcs_pb2.LINK_EVENT: link_message = getattr(message.data, ) link_event = LinkEvent(link_message) subscription._process(link_event) if on_data: on_data(link_event)
def _findPortalId(self): if not self.root.lower().endswith("/self"): url = self.root + "/self" else: url = self.root params = { "f" : "json" } res = self._get(url=url, param_dict=params, securityHandler=self._securityHandler, proxy_port=self._proxy_port, proxy_url=self._proxy_url) if in res: return res[] return None
Gets the portal id for a site if it is not known.
### Input: gets the portal id for a site if not known. ### Response: def _findPortalId(self): if not self.root.lower().endswith("/self"): url = self.root + "/self" else: url = self.root params = { "f" : "json" } res = self._get(url=url, param_dict=params, securityHandler=self._securityHandler, proxy_port=self._proxy_port, proxy_url=self._proxy_url) if in res: return res[] return None
def exit(self, pub_id, *node_ids): try: pub = self[][pub_id] except KeyError: raise ValueError(.format(pub_id)) for node_id in node_ids: node = self.get_agent(node_id) if pub_id == node[]: del node[] pub[] -= 1
Agents notify the pub that they want to leave.
### Input: Agents notify the pub that they want to leave. ### Response: def exit(self, pub_id, *node_ids): try: pub = self[][pub_id] except KeyError: raise ValueError(.format(pub_id)) for node_id in node_ids: node = self.get_agent(node_id) if pub_id == node[]: del node[] pub[] -= 1
def assign_extension_to_users(self, body): content = self._serialize.body(body, ) response = self._send(http_method=, location_id=, version=, content=content) return self._deserialize(, self._unwrap_collection(response))
AssignExtensionToUsers. [Preview API] Assigns the access to the given extension for a given list of users :param :class:`<ExtensionAssignment> <azure.devops.v5_0.licensing.models.ExtensionAssignment>` body: The extension assignment details. :rtype: [ExtensionOperationResult]
### Input: AssignExtensionToUsers. [Preview API] Assigns the access to the given extension for a given list of users :param :class:`<ExtensionAssignment> <azure.devops.v5_0.licensing.models.ExtensionAssignment>` body: The extension assignment details. :rtype: [ExtensionOperationResult] ### Response: def assign_extension_to_users(self, body): content = self._serialize.body(body, ) response = self._send(http_method=, location_id=, version=, content=content) return self._deserialize(, self._unwrap_collection(response))
def send_data(self, data=None): save_in_error = False if not data: if self.__data_lock.acquire(): data = self.__data self.__data = [] save_in_error = True self.__data_lock.release() else: return False s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) payload = pickle.dumps(data, protocol=2) header = struct.pack("!L", len(payload)) message = header + payload s.settimeout(1) s.connect((self.host, self.port)) try: s.send(message) except: if save_in_error: self.__data.extend(data) return False else: return True finally: s.close()
If data is empty, current buffer is sent. Otherwise data must be like: data = [('metricname', (timestamp, value)), ('metricname', (timestamp, value)), ...]
### Input: If data is empty, current buffer is sent. Otherwise data must be like: data = [('metricname', (timestamp, value)), ('metricname', (timestamp, value)), ...] ### Response: def send_data(self, data=None): save_in_error = False if not data: if self.__data_lock.acquire(): data = self.__data self.__data = [] save_in_error = True self.__data_lock.release() else: return False s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) payload = pickle.dumps(data, protocol=2) header = struct.pack("!L", len(payload)) message = header + payload s.settimeout(1) s.connect((self.host, self.port)) try: s.send(message) except: if save_in_error: self.__data.extend(data) return False else: return True finally: s.close()
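The wire format built inside `send_data` above is a length-prefixed pickle message: the metric list is pickled with protocol 2, and a 4-byte big-endian length header (`struct.pack("!L", ...)`) is prepended so the receiver knows how many bytes to read. Whether the receiving end is Graphite's pickle listener is an assumption, not stated in the source; the framing itself can be demonstrated round-trip:

```python
import pickle
import struct

def frame(data):
    """Serialise [(metric, (timestamp, value)), ...] with a 4-byte length header."""
    payload = pickle.dumps(data, protocol=2)
    return struct.pack("!L", len(payload)) + payload

def unframe(message):
    """Inverse of frame(): read the header, then unpickle exactly that many bytes."""
    (length,) = struct.unpack("!L", message[:4])
    return pickle.loads(message[4:4 + length])
```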
def solve(self, grid): soln = self.S.satisfy_one(assumptions=self._parse_grid(grid)) return self.S.soln2point(soln, self.litmap)
Return a solution point for a Sudoku grid.
### Input: Return a solution point for a Sudoku grid. ### Response: def solve(self, grid): soln = self.S.satisfy_one(assumptions=self._parse_grid(grid)) return self.S.soln2point(soln, self.litmap)
def add_summary(self, summary, global_step=None): if isinstance(summary, bytes): summ = summary_pb2.Summary() summ.ParseFromString(summary) summary = summ for value in summary.value: if not value.metadata: continue if value.tag in self._seen_summary_tags: value.ClearField("metadata") continue self._seen_summary_tags.add(value.tag) event = event_pb2.Event(summary=summary) self._add_event(event, global_step)
Adds a `Summary` protocol buffer to the event file. This method wraps the provided summary in an `Event` protocol buffer and adds it to the event file. Parameters ---------- summary : A `Summary` protocol buffer Optionally serialized as a string. global_step: Number Optional global step value to record with the summary.
### Input: Adds a `Summary` protocol buffer to the event file. This method wraps the provided summary in an `Event` protocol buffer and adds it to the event file. Parameters ---------- summary : A `Summary` protocol buffer Optionally serialized as a string. global_step: Number Optional global step value to record with the summary. ### Response: def add_summary(self, summary, global_step=None): if isinstance(summary, bytes): summ = summary_pb2.Summary() summ.ParseFromString(summary) summary = summ for value in summary.value: if not value.metadata: continue if value.tag in self._seen_summary_tags: value.ClearField("metadata") continue self._seen_summary_tags.add(value.tag) event = event_pb2.Event(summary=summary) self._add_event(event, global_step)
def create_certificate(self, cert_info, request=False, valid_from=0, valid_to=315360000, sn=1, key_length=1024, hash_alg="sha256", write_to_file=False, cert_dir="", cipher_passphrase=None): cn = cert_info["cn"] c_f = None k_f = None if write_to_file: cert_file = "%s.crt" % cn key_file = "%s.key" % cn try: remove(cert_file) except: pass try: remove(key_file) except: pass c_f = join(cert_dir, cert_file) k_f = join(cert_dir, key_file) k = crypto.PKey() k.generate_key(crypto.TYPE_RSA, key_length) cert = crypto.X509() if request: cert = crypto.X509Req() if (len(cert_info["country_code"]) != 2): raise WrongInput("Country code must be two letters!") cert.get_subject().C = cert_info["country_code"] cert.get_subject().ST = cert_info["state"] cert.get_subject().L = cert_info["city"] cert.get_subject().O = cert_info["organization"] cert.get_subject().OU = cert_info["organization_unit"] cert.get_subject().CN = cn if not request: cert.set_serial_number(sn) cert.gmtime_adj_notBefore(valid_from) cert.gmtime_adj_notAfter(valid_to) cert.set_issuer(cert.get_subject()) cert.set_pubkey(k) cert.sign(k, hash_alg) try: if request: tmp_cert = crypto.dump_certificate_request(crypto.FILETYPE_PEM, cert) else: tmp_cert = crypto.dump_certificate(crypto.FILETYPE_PEM, cert) tmp_key = None if cipher_passphrase is not None: passphrase = cipher_passphrase["passphrase"] if isinstance(cipher_passphrase["passphrase"], six.string_types): passphrase = passphrase.encode() tmp_key = crypto.dump_privatekey(crypto.FILETYPE_PEM, k, cipher_passphrase["cipher"], passphrase) else: tmp_key = crypto.dump_privatekey(crypto.FILETYPE_PEM, k) if write_to_file: with open(c_f, ) as fc: fc.write(tmp_cert.decode()) with open(k_f, ) as fk: fk.write(tmp_key.decode()) return c_f, k_f return tmp_cert, tmp_key except Exception as ex: raise CertificateError("Certificate cannot be generated.", ex)
Can create certificate requests, to be signed later by another certificate with the method create_cert_signed_certificate. If request is True. Can also create self signed root certificates if request is False. This is default behaviour. :param cert_info: Contains information about the certificate. Is a dictionary that must contain the keys: cn = Common name. This part must match the host being authenticated country_code = Two letter description of the country. state = State city = City organization = Organization, can be a company name. organization_unit = A unit at the organization, can be a department. Example: cert_info_ca = { "cn": "company.com", "country_code": "se", "state": "AC", "city": "Dorotea", "organization": "Company", "organization_unit": "Sales" } :param request: True if this is a request for certificate, that should be signed. False if this is a self signed certificate, root certificate. :param valid_from: When the certificate starts to be valid. Amount of seconds from when the certificate is generated. :param valid_to: How long the certificate will be valid from when it is generated. The value is in seconds. Default is 315360000 seconds, a.k.a 10 years. :param sn: Serial number for the certificate. Default is 1. :param key_length: Length of the key to be generated. Defaults to 1024. :param hash_alg: Hash algorithm to use for the key. Default is sha256. :param write_to_file: True if you want to write the certificate to a file. The method will then return a tuple with path to certificate file and path to key file. False if you want to get the result as strings. The method will then return a tuple with the certificate string and the key as string. WILL OVERWRITE ALL EXISTING FILES WITHOUT ASKING! :param cert_dir: Where to save the files if write_to_file is true. :param cipher_passphrase A dictionary with cipher and passphrase. 
Example:: {"cipher": "blowfish", "passphrase": "qwerty"} :return: string representation of certificate, string representation of private key if write_to_file parameter is False otherwise path to certificate file, path to private key file
### Input: Can create certificate requests, to be signed later by another certificate with the method create_cert_signed_certificate. If request is True. Can also create self signed root certificates if request is False. This is default behaviour. :param cert_info: Contains information about the certificate. Is a dictionary that must contain the keys: cn = Common name. This part must match the host being authenticated country_code = Two letter description of the country. state = State city = City organization = Organization, can be a company name. organization_unit = A unit at the organization, can be a department. Example: cert_info_ca = { "cn": "company.com", "country_code": "se", "state": "AC", "city": "Dorotea", "organization": "Company", "organization_unit": "Sales" } :param request: True if this is a request for certificate, that should be signed. False if this is a self signed certificate, root certificate. :param valid_from: When the certificate starts to be valid. Amount of seconds from when the certificate is generated. :param valid_to: How long the certificate will be valid from when it is generated. The value is in seconds. Default is 315360000 seconds, a.k.a 10 years. :param sn: Serial number for the certificate. Default is 1. :param key_length: Length of the key to be generated. Defaults to 1024. :param hash_alg: Hash algorithm to use for the key. Default is sha256. :param write_to_file: True if you want to write the certificate to a file. The method will then return a tuple with path to certificate file and path to key file. False if you want to get the result as strings. The method will then return a tuple with the certificate string and the key as string. WILL OVERWRITE ALL EXISTING FILES WITHOUT ASKING! :param cert_dir: Where to save the files if write_to_file is true. :param cipher_passphrase A dictionary with cipher and passphrase. 
Example:: {"cipher": "blowfish", "passphrase": "qwerty"} :return: string representation of certificate, string representation of private key if write_to_file parameter is False otherwise path to certificate file, path to private key file ### Response: def create_certificate(self, cert_info, request=False, valid_from=0, valid_to=315360000, sn=1, key_length=1024, hash_alg="sha256", write_to_file=False, cert_dir="", cipher_passphrase=None): cn = cert_info["cn"] c_f = None k_f = None if write_to_file: cert_file = "%s.crt" % cn key_file = "%s.key" % cn try: remove(cert_file) except: pass try: remove(key_file) except: pass c_f = join(cert_dir, cert_file) k_f = join(cert_dir, key_file) k = crypto.PKey() k.generate_key(crypto.TYPE_RSA, key_length) cert = crypto.X509() if request: cert = crypto.X509Req() if (len(cert_info["country_code"]) != 2): raise WrongInput("Country code must be two letters!") cert.get_subject().C = cert_info["country_code"] cert.get_subject().ST = cert_info["state"] cert.get_subject().L = cert_info["city"] cert.get_subject().O = cert_info["organization"] cert.get_subject().OU = cert_info["organization_unit"] cert.get_subject().CN = cn if not request: cert.set_serial_number(sn) cert.gmtime_adj_notBefore(valid_from) cert.gmtime_adj_notAfter(valid_to) cert.set_issuer(cert.get_subject()) cert.set_pubkey(k) cert.sign(k, hash_alg) try: if request: tmp_cert = crypto.dump_certificate_request(crypto.FILETYPE_PEM, cert) else: tmp_cert = crypto.dump_certificate(crypto.FILETYPE_PEM, cert) tmp_key = None if cipher_passphrase is not None: passphrase = cipher_passphrase["passphrase"] if isinstance(cipher_passphrase["passphrase"], six.string_types): passphrase = passphrase.encode() tmp_key = crypto.dump_privatekey(crypto.FILETYPE_PEM, k, cipher_passphrase["cipher"], passphrase) else: tmp_key = crypto.dump_privatekey(crypto.FILETYPE_PEM, k) if write_to_file: with open(c_f, ) as fc: fc.write(tmp_cert.decode()) with open(k_f, ) as fk: fk.write(tmp_key.decode()) return 
c_f, k_f return tmp_cert, tmp_key except Exception as ex: raise CertificateError("Certificate cannot be generated.", ex)
def _k_prototypes_iter(Xnum, Xcat, centroids, cl_attr_sum, cl_memb_sum, cl_attr_freq, membship, num_dissim, cat_dissim, gamma, random_state): moves = 0 for ipoint in range(Xnum.shape[0]): clust = np.argmin( num_dissim(centroids[0], Xnum[ipoint]) + gamma * cat_dissim(centroids[1], Xcat[ipoint], X=Xcat, membship=membship) ) if membship[clust, ipoint]: continue moves += 1 old_clust = np.argwhere(membship[:, ipoint])[0][0] cl_attr_sum, cl_memb_sum = move_point_num( Xnum[ipoint], clust, old_clust, cl_attr_sum, cl_memb_sum ) cl_attr_freq, membship, centroids[1] = kmodes.move_point_cat( Xcat[ipoint], ipoint, clust, old_clust, cl_attr_freq, membship, centroids[1] ) for iattr in range(len(Xnum[ipoint])): for curc in (clust, old_clust): if cl_memb_sum[curc]: centroids[0][curc, iattr] = cl_attr_sum[curc, iattr] / cl_memb_sum[curc] else: centroids[0][curc, iattr] = 0. if not cl_memb_sum[old_clust]: from_clust = membship.sum(axis=1).argmax() choices = [ii for ii, ch in enumerate(membship[from_clust, :]) if ch] rindx = random_state.choice(choices) cl_attr_sum, cl_memb_sum = move_point_num( Xnum[rindx], old_clust, from_clust, cl_attr_sum, cl_memb_sum ) cl_attr_freq, membship, centroids[1] = kmodes.move_point_cat( Xcat[rindx], rindx, old_clust, from_clust, cl_attr_freq, membship, centroids[1] ) return centroids, moves
Single iteration of the k-prototypes algorithm
### Input: Single iteration of the k-prototypes algorithm ### Response: def _k_prototypes_iter(Xnum, Xcat, centroids, cl_attr_sum, cl_memb_sum, cl_attr_freq, membship, num_dissim, cat_dissim, gamma, random_state): moves = 0 for ipoint in range(Xnum.shape[0]): clust = np.argmin( num_dissim(centroids[0], Xnum[ipoint]) + gamma * cat_dissim(centroids[1], Xcat[ipoint], X=Xcat, membship=membship) ) if membship[clust, ipoint]: continue moves += 1 old_clust = np.argwhere(membship[:, ipoint])[0][0] cl_attr_sum, cl_memb_sum = move_point_num( Xnum[ipoint], clust, old_clust, cl_attr_sum, cl_memb_sum ) cl_attr_freq, membship, centroids[1] = kmodes.move_point_cat( Xcat[ipoint], ipoint, clust, old_clust, cl_attr_freq, membship, centroids[1] ) for iattr in range(len(Xnum[ipoint])): for curc in (clust, old_clust): if cl_memb_sum[curc]: centroids[0][curc, iattr] = cl_attr_sum[curc, iattr] / cl_memb_sum[curc] else: centroids[0][curc, iattr] = 0. if not cl_memb_sum[old_clust]: from_clust = membship.sum(axis=1).argmax() choices = [ii for ii, ch in enumerate(membship[from_clust, :]) if ch] rindx = random_state.choice(choices) cl_attr_sum, cl_memb_sum = move_point_num( Xnum[rindx], old_clust, from_clust, cl_attr_sum, cl_memb_sum ) cl_attr_freq, membship, centroids[1] = kmodes.move_point_cat( Xcat[rindx], rindx, old_clust, from_clust, cl_attr_freq, membship, centroids[1] ) return centroids, moves
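The cluster-assignment step inside `_k_prototypes_iter` above picks the centroid minimising a combined cost: numeric dissimilarity plus gamma times categorical dissimilarity. A pure-Python stand-in for the vectorised numpy version, with squared Euclidean distance and simple mismatch counting playing the roles of the `num_dissim` / `cat_dissim` callables (those are pluggable in the source, so this is one possible choice, not the only one):

```python
def assign_cluster(centroids_num, centroids_cat, xnum, xcat, gamma):
    """Index of the centroid with the lowest combined num + gamma * cat cost."""
    costs = []
    for cnum, ccat in zip(centroids_num, centroids_cat):
        num_cost = sum((a - b) ** 2 for a, b in zip(cnum, xnum))   # Euclidean^2
        cat_cost = sum(1 for a, b in zip(ccat, xcat) if a != b)    # mismatches
        costs.append(num_cost + gamma * cat_cost)
    return costs.index(min(costs))
```

gamma trades the two scales off against each other: with gamma = 0 only the numeric part matters, while a large gamma makes categorical agreement dominate.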
def tuple(data, field_name): if isinstance(data, Cube): Log.error("not supported yet") if isinstance(data, FlatList): Log.error("not supported yet") if is_data(field_name) and "value" in field_name: field_name = field_name["value"] if is_text(field_name): if len(split_field(field_name)) == 1: return [(d[field_name],) for d in data] else: path = split_field(field_name) output = [] flat_list._tuple1(data, path, 0, output) return output elif is_list(field_name): paths = [_select_a_field(f) for f in field_name] output = FlatList() _tuple((), unwrap(data), paths, 0, output) return output else: paths = [_select_a_field(field_name)] output = FlatList() _tuple((), data, paths, 0, output) return output
RETURN LIST OF TUPLES
### Input: RETURN LIST OF TUPLES ### Response: def tuple(data, field_name): if isinstance(data, Cube): Log.error("not supported yet") if isinstance(data, FlatList): Log.error("not supported yet") if is_data(field_name) and "value" in field_name: field_name = field_name["value"] if is_text(field_name): if len(split_field(field_name)) == 1: return [(d[field_name],) for d in data] else: path = split_field(field_name) output = [] flat_list._tuple1(data, path, 0, output) return output elif is_list(field_name): paths = [_select_a_field(f) for f in field_name] output = FlatList() _tuple((), unwrap(data), paths, 0, output) return output else: paths = [_select_a_field(field_name)] output = FlatList() _tuple((), data, paths, 0, output) return output
def desymbolize(self): self.sort = content = self.binary.fast_memory_load(self.addr, self.size, bytes) self.content = [ content ]
We believe this was a pointer and symbolized it before. Now we want to desymbolize it. The following actions are performed: - Reload content from memory - Mark the sort as 'unknown' :return: None
### Input: We believe this was a pointer and symbolized it before. Now we want to desymbolize it. The following actions are performed: - Reload content from memory - Mark the sort as 'unknown' :return: None ### Response: def desymbolize(self): self.sort = content = self.binary.fast_memory_load(self.addr, self.size, bytes) self.content = [ content ]
def _intersected_edge(self, edges, cut_edge): for edge in edges: if self._edges_intersect(edge, cut_edge): return edge
Given a list of *edges*, return the first that is intersected by *cut_edge*.
### Input: Given a list of *edges*, return the first that is intersected by *cut_edge*. ### Response: def _intersected_edge(self, edges, cut_edge): for edge in edges: if self._edges_intersect(edge, cut_edge): return edge
def print_info(info_mapping): if not info_mapping: return content_format = "{:<16} : {:<}\n" content = "\n==================== Output ====================\n" content += content_format.format("Variable", "Value") content += content_format.format("-" * 16, "-" * 29) for key, value in info_mapping.items(): if isinstance(value, (tuple, collections.deque)): continue elif isinstance(value, (dict, list)): value = json.dumps(value) elif value is None: value = "None" if is_py2: if isinstance(key, unicode): key = key.encode("utf-8") if isinstance(value, unicode): value = value.encode("utf-8") content += content_format.format(key, value) content += "-" * 48 + "\n" logger.log_info(content)
print info in mapping. Args: info_mapping (dict): input(variables) or output mapping. Examples: >>> info_mapping = { "var_a": "hello", "var_b": "world" } >>> info_mapping = { "status_code": 500 } >>> print_info(info_mapping) ==================== Output ==================== Variable : Value ---------------- : ---------------------------- var_a : hello var_b : world ------------------------------------------------
### Input: print info in mapping. Args: info_mapping (dict): input(variables) or output mapping. Examples: >>> info_mapping = { "var_a": "hello", "var_b": "world" } >>> info_mapping = { "status_code": 500 } >>> print_info(info_mapping) ==================== Output ==================== Key : Value ---------------- : ---------------------------- var_a : hello var_b : world ------------------------------------------------ ### Response: def print_info(info_mapping): if not info_mapping: return content_format = "{:<16} : {:<}\n" content = "\n==================== Output ====================\n" content += content_format.format("Variable", "Value") content += content_format.format("-" * 16, "-" * 29) for key, value in info_mapping.items(): if isinstance(value, (tuple, collections.deque)): continue elif isinstance(value, (dict, list)): value = json.dumps(value) elif value is None: value = "None" if is_py2: if isinstance(key, unicode): key = key.encode("utf-8") if isinstance(value, unicode): value = value.encode("utf-8") content += content_format.format(key, value) content += "-" * 48 + "\n" logger.log_info(content)
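The alignment in `content_format` does the column work here: `{:<16}` left-justifies each key in a 16-character field, so every colon in the output lines up. A quick check of that format string:

```python
content_format = "{:<16} : {:<}"
line = content_format.format("var_a", "hello")

# "var_a" is padded with spaces out to 16 characters, so the colon
# always sits at index 17 regardless of the key's length
assert line == "var_a" + " " * 11 + " : hello"
assert line.index(":") == 17
```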
def mv_videos(path): count = 0 for f in os.listdir(path): f = os.path.join(path, f) if os.path.isdir(f): for sf in os.listdir(f): sf = os.path.join(f, sf) if os.path.isfile(sf): new_name = os.path.join(path, os.path.basename(sf)) try: os.rename(sf, new_name) except (WindowsError, OSError) as e: print(.format(sf, e)) else: count += 1 print(.format(sf, new_name)) return count
move videos in sub-directory of path to path.
### Input: move videos in sub-directory of path to path. ### Response: def mv_videos(path): count = 0 for f in os.listdir(path): f = os.path.join(path, f) if os.path.isdir(f): for sf in os.listdir(f): sf = os.path.join(f, sf) if os.path.isfile(sf): new_name = os.path.join(path, os.path.basename(sf)) try: os.rename(sf, new_name) except (WindowsError, OSError) as e: print(.format(sf, e)) else: count += 1 print(.format(sf, new_name)) return count
def get_preset_by_key(self, package_keyname, preset_keyname, mask=None): preset_operation = '_= %s' % preset_keyname _filter = { 'activePresets': { 'keyName': { 'operation': preset_operation } }, 'accountRestrictedActivePresets': { 'keyName': { 'operation': preset_operation } } } presets = self.list_presets(package_keyname, mask=mask, filter=_filter) if len(presets) == 0: raise exceptions.SoftLayerError( "Preset {} does not exist in package {}".format(preset_keyname, package_keyname)) return presets[0]
Gets a single preset with the given key.
### Input: Gets a single preset with the given key. ### Response: def get_preset_by_key(self, package_keyname, preset_keyname, mask=None): preset_operation = '_= %s' % preset_keyname _filter = { 'activePresets': { 'keyName': { 'operation': preset_operation } }, 'accountRestrictedActivePresets': { 'keyName': { 'operation': preset_operation } } } presets = self.list_presets(package_keyname, mask=mask, filter=_filter) if len(presets) == 0: raise exceptions.SoftLayerError( "Preset {} does not exist in package {}".format(preset_keyname, package_keyname)) return presets[0]
def round_to_multiple(number, multiple): multiple = int(multiple) if multiple == 0: multiple = 1 ceil_mod_number = number - number % (-multiple) return int(ceil_mod_number)
Rounding up to the nearest multiple of any positive integer Parameters ---------- number : int, float Input number. multiple : int Round up to multiple of multiple. Will be converted to int. Must not be equal zero. Returns ------- ceil_mod_number : int Rounded up number. Example ------- round_to_multiple(maximum, math.floor(math.log10(maximum)))
### Input: Rounding up to the nearest multiple of any positive integer Parameters ---------- number : int, float Input number. multiple : int Round up to multiple of multiple. Will be converted to int. Must not be equal zero. Returns ------- ceil_mod_number : int Rounded up number. Example ------- round_to_multiple(maximum, math.floor(math.log10(maximum))) ### Response: def round_to_multiple(number, multiple): multiple = int(multiple) if multiple == 0: multiple = 1 ceil_mod_number = number - number % (-multiple) return int(ceil_mod_number)
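The `number - number % (-multiple)` expression is the whole trick: with a negative divisor, Python's `%` returns a value in `(-multiple, 0]`, so subtracting it bumps the number up to the next multiple. A standalone sketch of the same idea:

```python
def round_up(number, multiple):
    # Mirrors round_to_multiple above: ceiling to the nearest multiple
    multiple = int(multiple) or 1            # guard against multiple == 0
    return int(number - number % (-multiple))

assert round_up(5, 3) == 6    # 5 % -3 == -1, so 5 - (-1) == 6
assert round_up(6, 3) == 6    # already a multiple: unchanged
assert round_up(7.5, 2) == 8  # works for floats too
```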
def get_previous_price_list(self, currency, start_date, end_date): start = start_date.strftime('%Y-%m-%d') end = end_date.strftime('%Y-%m-%d') url = ( 'https://api.coindesk.com/v1/bpi/historical/close.json' '?start={}&end={}&currency={}'.format( start, end, currency ) ) response = requests.get(url) if response.status_code == 200: data = self._decode_rates(response) price_dict = data.get('bpi', {}) return price_dict return {}
Get List of prices between two dates
### Input: Get List of prices between two dates ### Response: def get_previous_price_list(self, currency, start_date, end_date): start = start_date.strftime('%Y-%m-%d') end = end_date.strftime('%Y-%m-%d') url = ( 'https://api.coindesk.com/v1/bpi/historical/close.json' '?start={}&end={}&currency={}'.format( start, end, currency ) ) response = requests.get(url) if response.status_code == 200: data = self._decode_rates(response) price_dict = data.get('bpi', {}) return price_dict return {}
def getContactItems(self, person): return person.store.query( EmailAddress, EmailAddress.person == person)
Return all L{EmailAddress} instances associated with the given person. @type person: L{Person}
### Input: Return all L{EmailAddress} instances associated with the given person. @type person: L{Person} ### Response: def getContactItems(self, person): return person.store.query( EmailAddress, EmailAddress.person == person)
def iterline(x1, y1, x2, y2): xdiff = abs(x2-x1) ydiff = abs(y2-y1) xdir = 1 if x1 <= x2 else -1 ydir = 1 if y1 <= y2 else -1 r = math.ceil(max(xdiff, ydiff)) if r == 0: yield x1, y1 else: x, y = math.floor(x1), math.floor(y1) i = 0 while i < r: x += xdir * xdiff / r y += ydir * ydiff / r yield x, y i += 1
Yields (x, y) coords of line from (x1, y1) to (x2, y2)
### Input: Yields (x, y) coords of line from (x1, y1) to (x2, y2) ### Response: def iterline(x1, y1, x2, y2): xdiff = abs(x2-x1) ydiff = abs(y2-y1) xdir = 1 if x1 <= x2 else -1 ydir = 1 if y1 <= y2 else -1 r = math.ceil(max(xdiff, ydiff)) if r == 0: yield x1, y1 else: x, y = math.floor(x1), math.floor(y1) i = 0 while i < r: x += xdir * xdiff / r y += ydir * ydiff / r yield x, y i += 1
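A lightly restructured, self-contained copy of the generator above makes its endpoint behavior easy to check: it yields `r = ceil(max(|dx|, |dy|))` points, ends exactly at `(x2, y2)`, and does not yield the starting point itself.

```python
import math

def iterline(x1, y1, x2, y2):
    # Walk from (x1, y1) toward (x2, y2) in r equal steps
    xdiff, ydiff = abs(x2 - x1), abs(y2 - y1)
    xdir = 1 if x1 <= x2 else -1
    ydir = 1 if y1 <= y2 else -1
    r = math.ceil(max(xdiff, ydiff))
    if r == 0:
        yield x1, y1
        return
    x, y = math.floor(x1), math.floor(y1)
    for _ in range(r):
        x += xdir * xdiff / r
        y += ydir * ydiff / r
        yield x, y

# The first yielded point is one step past the floored start, not (x1, y1)
assert list(iterline(0, 0, 3, 3)) == [(1, 1), (2, 2), (3, 3)]
```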
def send(self, envelope): if not self.is_connected: self._connect() msg = envelope.to_mime_message() to_addrs = [envelope._addrs_to_header([addr]) for addr in envelope._to + envelope._cc + envelope._bcc] return self._conn.sendmail(msg['From'], to_addrs, msg.as_string())
Sends an *envelope*.
### Input: Sends an *envelope*. ### Response: def send(self, envelope): if not self.is_connected: self._connect() msg = envelope.to_mime_message() to_addrs = [envelope._addrs_to_header([addr]) for addr in envelope._to + envelope._cc + envelope._bcc] return self._conn.sendmail(msg['From'], to_addrs, msg.as_string())
def post(self, request): if constants.ENFORCE_SECURE and not request.is_secure(): return self.error_response({ 'error': 'invalid_request', 'error_description': _("A secure connection is required.")}) if not 'grant_type' in request.POST: return self.error_response({ 'error': 'invalid_request', 'error_description': _("No 'grant_type' included in the " "request.")}) grant_type = request.POST['grant_type'] if grant_type not in self.grant_types: return self.error_response({'error': 'unsupported_grant_type'}) client = self.authenticate(request) if client is None: return self.error_response({'error': 'invalid_client'}) handler = self.get_handler(grant_type) try: return handler(request, request.POST, client) except OAuthError, e: return self.error_response(e.args[0])
As per :rfc:`3.2` the token endpoint *only* supports POST requests.
### Input: As per :rfc:`3.2` the token endpoint *only* supports POST requests. ### Response: def post(self, request): if constants.ENFORCE_SECURE and not request.is_secure(): return self.error_response({ 'error': 'invalid_request', 'error_description': _("A secure connection is required.")}) if not 'grant_type' in request.POST: return self.error_response({ 'error': 'invalid_request', 'error_description': _("No 'grant_type' included in the " "request.")}) grant_type = request.POST['grant_type'] if grant_type not in self.grant_types: return self.error_response({'error': 'unsupported_grant_type'}) client = self.authenticate(request) if client is None: return self.error_response({'error': 'invalid_client'}) handler = self.get_handler(grant_type) try: return handler(request, request.POST, client) except OAuthError, e: return self.error_response(e.args[0])
def get_comments(self, card_id): params = {: , : } comments = self.api_request( "/1/cards/{card_id}/actions".format(card_id=card_id), **params) for comment in comments: assert comment['type'] == 'commentCard' yield comment
Returns an iterator for the comments on a certain card.
### Input: Returns an iterator for the comments on a certain card. ### Response: def get_comments(self, card_id): params = {: , : } comments = self.api_request( "/1/cards/{card_id}/actions".format(card_id=card_id), **params) for comment in comments: assert comment['type'] == 'commentCard' yield comment
def decode_jwt(encoded_token): secret = config.decode_key algorithm = config.algorithm audience = config.audience return jwt.decode(encoded_token, secret, algorithms=[algorithm], audience=audience)
Returns the decoded token from an encoded one. This does all the checks to insure that the decoded token is valid before returning it.
### Input: Returns the decoded token from an encoded one. This does all the checks to insure that the decoded token is valid before returning it. ### Response: def decode_jwt(encoded_token): secret = config.decode_key algorithm = config.algorithm audience = config.audience return jwt.decode(encoded_token, secret, algorithms=[algorithm], audience=audience)
def heldout_log_likelihood(self, test_mask=None): if test_mask is None: if self.mask is None: return 0 else: test_mask = ~self.mask xs = np.hstack((self.gaussian_states, self.inputs)) if self.single_emission: return self.emission_distns[0].\ log_likelihood((xs, self.data), mask=test_mask).sum() else: hll = 0 z = self.stateseq for idx, ed in enumerate(self.emission_distns): hll += ed.log_likelihood((xs[z == idx], self.data[z == idx]), mask=test_mask[z == idx]).sum() return hll
Compute the log likelihood of the masked data given the latent discrete and continuous states.
### Input: Compute the log likelihood of the masked data given the latent discrete and continuous states. ### Response: def heldout_log_likelihood(self, test_mask=None): if test_mask is None: if self.mask is None: return 0 else: test_mask = ~self.mask xs = np.hstack((self.gaussian_states, self.inputs)) if self.single_emission: return self.emission_distns[0].\ log_likelihood((xs, self.data), mask=test_mask).sum() else: hll = 0 z = self.stateseq for idx, ed in enumerate(self.emission_distns): hll += ed.log_likelihood((xs[z == idx], self.data[z == idx]), mask=test_mask[z == idx]).sum() return hll
def set_chat_description( self, chat_id: Union[int, str], description: str ) -> bool: peer = self.resolve_peer(chat_id) if isinstance(peer, (types.InputPeerChannel, types.InputPeerChat)): self.send( functions.messages.EditChatAbout( peer=peer, about=description ) ) else: raise ValueError("The chat_id \"{}\" belongs to a user".format(chat_id)) return True
Use this method to change the description of a supergroup or a channel. You must be an administrator in the chat for this to work and must have the appropriate admin rights. Args: chat_id (``int`` | ``str``): Unique identifier (int) or username (str) of the target chat. description (``str``): New chat description, 0-255 characters. Returns: True on success. Raises: :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error. ``ValueError`` if a chat_id doesn't belong to a supergroup or a channel.
### Input: Use this method to change the description of a supergroup or a channel. You must be an administrator in the chat for this to work and must have the appropriate admin rights. Args: chat_id (``int`` | ``str``): Unique identifier (int) or username (str) of the target chat. description (``str``): New chat description, 0-255 characters. Returns: True on success. Raises: :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error. ``ValueError`` if a chat_id doesn't belong to a supergroup or a channel. ### Response: def set_chat_description( self, chat_id: Union[int, str], description: str ) -> bool: peer = self.resolve_peer(chat_id) if isinstance(peer, (types.InputPeerChannel, types.InputPeerChat)): self.send( functions.messages.EditChatAbout( peer=peer, about=description ) ) else: raise ValueError("The chat_id \"{}\" belongs to a user".format(chat_id)) return True
def disconnect_devices(self, service_uuids=[]): service_uuids = set(service_uuids) for device in self.list_devices(): if not device.is_connected: continue device_uuids = set(map(lambda x: x.uuid, device.list_services())) if device_uuids >= service_uuids: device.disconnect()
Disconnect any connected devices that have the specified list of service UUIDs. The default is an empty list which means all devices are disconnected.
### Input: Disconnect any connected devices that have the specified list of service UUIDs. The default is an empty list which means all devices are disconnected. ### Response: def disconnect_devices(self, service_uuids=[]): service_uuids = set(service_uuids) for device in self.list_devices(): if not device.is_connected: continue device_uuids = set(map(lambda x: x.uuid, device.list_services())) if device_uuids >= service_uuids: device.disconnect()
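The filtering hinges on `device_uuids >= service_uuids`, which is Python's set superset test: a device qualifies for disconnection only when it offers every requested service. The UUID strings below are made up for illustration:

```python
requested = {"180f"}                 # hypothetical service UUIDs
offered = {"180f", "180a"}

# >= on sets means "is a superset of": every requested UUID is offered
assert offered >= requested
# A device missing one requested service does not match
assert not ({"180a"} >= {"180f", "180a"})
# The default empty request set matches every device (disconnect all)
assert offered >= set()
```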
def lookstr(table, limit=0, **kwargs): kwargs['vrepr'] = str return look(table, limit=limit, **kwargs)
Like :func:`petl.util.vis.look` but use str() rather than repr() for data values.
### Input: Like :func:`petl.util.vis.look` but use str() rather than repr() for data values. ### Response: def lookstr(table, limit=0, **kwargs): kwargs['vrepr'] = str return look(table, limit=limit, **kwargs)
def calc_qt_v1(self): con = self.parameters.control.fastaccess flu = self.sequences.fluxes.fastaccess flu.qt = max(flu.outuh-con.abstr, 0.)
Calculate the total discharge after possible abstractions. Required control parameter: |Abstr| Required flux sequence: |OutUH| Calculated flux sequence: |QT| Basic equation: :math:`QT = max(OutUH - Abstr, 0)` Examples: Trying to abstract less then available, as much as available and less then available results in: >>> from hydpy.models.hland import * >>> parameterstep('1d') >>> simulationstep('12h') >>> abstr(2.0) >>> fluxes.outuh = 2.0 >>> model.calc_qt_v1() >>> fluxes.qt qt(1.0) >>> fluxes.outuh = 1.0 >>> model.calc_qt_v1() >>> fluxes.qt qt(0.0) >>> fluxes.outuh = 0.5 >>> model.calc_qt_v1() >>> fluxes.qt qt(0.0) Note that "negative abstractions" are allowed: >>> abstr(-2.0) >>> fluxes.outuh = 1.0 >>> model.calc_qt_v1() >>> fluxes.qt qt(2.0)
### Input: Calculate the total discharge after possible abstractions. Required control parameter: |Abstr| Required flux sequence: |OutUH| Calculated flux sequence: |QT| Basic equation: :math:`QT = max(OutUH - Abstr, 0)` Examples: Trying to abstract less then available, as much as available and less then available results in: >>> from hydpy.models.hland import * >>> parameterstep('1d') >>> simulationstep('12h') >>> abstr(2.0) >>> fluxes.outuh = 2.0 >>> model.calc_qt_v1() >>> fluxes.qt qt(1.0) >>> fluxes.outuh = 1.0 >>> model.calc_qt_v1() >>> fluxes.qt qt(0.0) >>> fluxes.outuh = 0.5 >>> model.calc_qt_v1() >>> fluxes.qt qt(0.0) Note that "negative abstractions" are allowed: >>> abstr(-2.0) >>> fluxes.outuh = 1.0 >>> model.calc_qt_v1() >>> fluxes.qt qt(2.0) ### Response: def calc_qt_v1(self): con = self.parameters.control.fastaccess flu = self.sequences.fluxes.fastaccess flu.qt = max(flu.outuh-con.abstr, 0.)
def str_rel_short(self, goobj): if not goobj.relationship: return "" rel_cur = goobj.relationship return "".join([self.rel2chr.get(r, '?') for r in self.rels if r in rel_cur])
Get a string representing the presence of absence of relationships. Ex: P
### Input: Get a string representing the presence of absence of relationships. Ex: P ### Response: def str_rel_short(self, goobj): if not goobj.relationship: return "" rel_cur = goobj.relationship return "".join([self.rel2chr.get(r, '?') for r in self.rels if r in rel_cur])
def add_tag(context, id, name): result = job.add_tag(context, id=id, name=name) utils.format_output(result, context.format)
add_tag(context, id, name) Attach a tag to a job. >>> dcictl job-add-tag [OPTIONS] :param string id: ID of the job to attach the tag on [required] :param string tag_name: name of the tag to be attached [required]
### Input: add_tag(context, id, name) Attach a tag to a job. >>> dcictl job-add-tag [OPTIONS] :param string id: ID of the job to attach the tag on [required] :param string tag_name: name of the tag to be attached [required] ### Response: def add_tag(context, id, name): result = job.add_tag(context, id=id, name=name) utils.format_output(result, context.format)
def _base_query(self, session): return session.query(ORMTargetMarker) \ .filter(ORMTargetMarker.name == self.name) \ .filter(ORMTargetMarker.params == self.params)
Base query for a target. Args: session: database session to query in
### Input: Base query for a target. Args: session: database session to query in ### Response: def _base_query(self, session): return session.query(ORMTargetMarker) \ .filter(ORMTargetMarker.name == self.name) \ .filter(ORMTargetMarker.params == self.params)
def ss2zpk(a,b,c,d, input=0): import scipy.signal z, p, k = scipy.signal.ss2zpk(a, b, c, d, input=input) return z, p, k
State-space representation to zero-pole-gain representation. :param A: ndarray State-space representation of linear system. :param B: ndarray State-space representation of linear system. :param C: ndarray State-space representation of linear system. :param D: ndarray State-space representation of linear system. :param int input: optional For multiple-input systems, the input to use. :return: * z, p : sequence Zeros and poles. * k : float System gain. .. note:: wrapper of scipy function ss2zpk
### Input: State-space representation to zero-pole-gain representation. :param A: ndarray State-space representation of linear system. :param B: ndarray State-space representation of linear system. :param C: ndarray State-space representation of linear system. :param D: ndarray State-space representation of linear system. :param int input: optional For multiple-input systems, the input to use. :return: * z, p : sequence Zeros and poles. * k : float System gain. .. note:: wrapper of scipy function ss2zpk ### Response: def ss2zpk(a,b,c,d, input=0): import scipy.signal z, p, k = scipy.signal.ss2zpk(a, b, c, d, input=input) return z, p, k
def eventFilter( self, object, event ): if ( object == self._filepathEdit and \ self._filepathEdit.isReadOnly() and \ event.type() == event.MouseButtonPress and \ event.button() == Qt.LeftButton ): self.pickFilepath() return False
Overloads the eventFilter to look for click events on the line edit. :param object | <QObject> event | <QEvent>
### Input: Overloads the eventFilter to look for click events on the line edit. :param object | <QObject> event | <QEvent> ### Response: def eventFilter( self, object, event ): if ( object == self._filepathEdit and \ self._filepathEdit.isReadOnly() and \ event.type() == event.MouseButtonPress and \ event.button() == Qt.LeftButton ): self.pickFilepath() return False
def infer_format(filename:str) -> str: _, ext = os.path.splitext(filename) return ext
Return extension identifying format of given filename
### Input: Return extension identifying format of given filename ### Response: def infer_format(filename:str) -> str: _, ext = os.path.splitext(filename) return ext
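`os.path.splitext` keeps only the final suffix and includes the leading dot, which is worth remembering when using the returned extension as a format tag:

```python
import os

assert os.path.splitext("data/table.csv") == ("data/table", ".csv")
assert os.path.splitext("archive.tar.gz")[1] == ".gz"   # only the last suffix
assert os.path.splitext("Makefile")[1] == ""            # no extension -> empty string
```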
def check_HDF5_arrays(hdf5_file, N, convergence_iter): Worker.hdf5_lock.acquire() with tables.open_file(hdf5_file, 'r+') as fileh: if not hasattr(fileh.root, 'aff_prop_group'): fileh.create_group(fileh.root, "aff_prop_group") atom = tables.Float32Atom() filters = None for feature in (, , , ): if not hasattr(fileh.root.aff_prop_group, feature): fileh.create_carray(fileh.root.aff_prop_group, feature, atom, (N, N), "Matrix of {0} for affinity " "propagation clustering".format(feature), filters = filters) if not hasattr(fileh.root.aff_prop_group, ): fileh.create_carray(fileh.root.aff_prop_group, , atom, (N, convergence_iter), "Matrix of parallel updates for affinity propagation " "clustering", filters = filters) Worker.hdf5_lock.release()
Check that the HDF5 data structure of file handle 'hdf5_file' has all the required nodes organizing the various two-dimensional arrays required for Affinity Propagation clustering ('Responsibility' matrix, 'Availability', etc.). Parameters ---------- hdf5_file : string or file handle Name of the Hierarchical Data Format under consideration. N : int The number of samples in the data-set that will undergo Affinity Propagation clustering. convergence_iter : int Number of iterations with no change in the number of estimated clusters that stops the convergence.
### Input: Check that the HDF5 data structure of file handle 'hdf5_file' has all the required nodes organizing the various two-dimensional arrays required for Affinity Propagation clustering ('Responsibility' matrix, 'Availability', etc.). Parameters ---------- hdf5_file : string or file handle Name of the Hierarchical Data Format under consideration. N : int The number of samples in the data-set that will undergo Affinity Propagation clustering. convergence_iter : int Number of iterations with no change in the number of estimated clusters that stops the convergence. ### Response: def check_HDF5_arrays(hdf5_file, N, convergence_iter): Worker.hdf5_lock.acquire() with tables.open_file(hdf5_file, 'r+') as fileh: if not hasattr(fileh.root, 'aff_prop_group'): fileh.create_group(fileh.root, "aff_prop_group") atom = tables.Float32Atom() filters = None for feature in (, , , ): if not hasattr(fileh.root.aff_prop_group, feature): fileh.create_carray(fileh.root.aff_prop_group, feature, atom, (N, N), "Matrix of {0} for affinity " "propagation clustering".format(feature), filters = filters) if not hasattr(fileh.root.aff_prop_group, ): fileh.create_carray(fileh.root.aff_prop_group, , atom, (N, convergence_iter), "Matrix of parallel updates for affinity propagation " "clustering", filters = filters) Worker.hdf5_lock.release()
def formatTime (self, record, datefmt=None): if self.bsd: lt_ts = datetime.datetime.fromtimestamp(record.created) ts = lt_ts.strftime(self.BSD_DATEFMT) if ts[4] == '0': ts = ts[0:4] + ' ' + ts[5:] else: utc_ts = datetime.datetime.utcfromtimestamp(record.created) ts = utc_ts.strftime(self.SYS_DATEFMT) return ts
Returns the creation time of the given LogRecord as formatted text. NOTE: The datefmt parameter and self.converter (the time conversion method) are ignored. BSD Syslog Protocol messages always use local time, and by our convention, Syslog Protocol messages use UTC.
### Input: Returns the creation time of the given LogRecord as formatted text. NOTE: The datefmt parameter and self.converter (the time conversion method) are ignored. BSD Syslog Protocol messages always use local time, and by our convention, Syslog Protocol messages use UTC. ### Response: def formatTime (self, record, datefmt=None): if self.bsd: lt_ts = datetime.datetime.fromtimestamp(record.created) ts = lt_ts.strftime(self.BSD_DATEFMT) if ts[4] == '0': ts = ts[0:4] + ' ' + ts[5:] else: utc_ts = datetime.datetime.utcfromtimestamp(record.created) ts = utc_ts.strftime(self.SYS_DATEFMT) return ts
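The `ts[4]` fix-up implements RFC 3164's space-padded day of month. Assuming `BSD_DATEFMT` is `'%b %d %H:%M:%S'` (an assumption; the constant itself is not shown here), index 4 is the tens digit of the day, and the protocol wants `Mar  5`, not `Mar 05`:

```python
import datetime

ts = datetime.datetime(2024, 3, 5, 9, 8, 7).strftime('%b %d %H:%M:%S')
assert ts[4] == '0'          # strftime zero-pads the day
if ts[4] == '0':
    ts = ts[0:4] + ' ' + ts[5:]
assert ts[3:6] == '  5'      # leading zero replaced by a space
```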
def handle_pre_call(self, message, connection): req = None try: req = connection.request_message_factory.build(message) if req: self.handle_call(req, connection) except TChannelError as e: log.warn(, exc_info=True) if req: e.tracing = req.tracing connection.send_error(e)
Handle incoming request message including CallRequestMessage and CallRequestContinueMessage This method will build the User friendly request object based on the incoming messages. It passes all the messages into the message_factory to build the init request object. Only when it get a CallRequestMessage and a completed arg_1=argstream[0], the message_factory will return a request object. Then it will trigger the async handle_call method. :param message: CallRequestMessage or CallRequestContinueMessage :param connection: tornado connection
### Input: Handle incoming request message including CallRequestMessage and CallRequestContinueMessage This method will build the User friendly request object based on the incoming messages. It passes all the messages into the message_factory to build the init request object. Only when it get a CallRequestMessage and a completed arg_1=argstream[0], the message_factory will return a request object. Then it will trigger the async handle_call method. :param message: CallRequestMessage or CallRequestContinueMessage :param connection: tornado connection ### Response: def handle_pre_call(self, message, connection): req = None try: req = connection.request_message_factory.build(message) if req: self.handle_call(req, connection) except TChannelError as e: log.warn(, exc_info=True) if req: e.tracing = req.tracing connection.send_error(e)
def open_files(self, idx=0): self.elements.open_files(idx=idx) self.nodes.open_files(idx=idx)
Call method |Devices.open_files| of the |Nodes| and |Elements| objects currently handled by the |HydPy| object.
### Input: Call method |Devices.open_files| of the |Nodes| and |Elements| objects currently handled by the |HydPy| object. ### Response: def open_files(self, idx=0): self.elements.open_files(idx=idx) self.nodes.open_files(idx=idx)
def _identify(self, dataframe): idx = ~idx return idx
Returns a list of indexes containing only the points that pass the filter. Parameters ---------- dataframe : DataFrame
### Input: Returns a list of indexes containing only the points that pass the filter. Parameters ---------- dataframe : DataFrame ### Response: def _identify(self, dataframe): idx = ~idx return idx
def map_property_instances(original_part, new_part): get_mapping_dictionary()[original_part.id] = new_part for prop_original in original_part.properties: get_mapping_dictionary()[prop_original.id] = [prop_new for prop_new in new_part.properties if get_mapping_dictionary()[prop_original._json_data[]].id == prop_new._json_data[]][0]
Map the id of the original part with the `Part` object of the newly created one. Updated the singleton `mapping dictionary` with the new mapping table values. :param original_part: `Part` object to be copied/moved :type original_part: :class:`Part` :param new_part: `Part` object copied/moved :type new_part: :class:`Part` :return: None
### Input: Map the id of the original part with the `Part` object of the newly created one. Updated the singleton `mapping dictionary` with the new mapping table values. :param original_part: `Part` object to be copied/moved :type original_part: :class:`Part` :param new_part: `Part` object copied/moved :type new_part: :class:`Part` :return: None ### Response: def map_property_instances(original_part, new_part): get_mapping_dictionary()[original_part.id] = new_part for prop_original in original_part.properties: get_mapping_dictionary()[prop_original.id] = [prop_new for prop_new in new_part.properties if get_mapping_dictionary()[prop_original._json_data[]].id == prop_new._json_data[]][0]
def track(self, message, name="Message"): if self.chatbase_token: asyncio.ensure_future(self._track(message, name))
Track message using http://chatbase.com Set chatbase_token to make it work
### Input: Track message using http://chatbase.com Set chatbase_token to make it work ### Response: def track(self, message, name="Message"): if self.chatbase_token: asyncio.ensure_future(self._track(message, name))
def resource_row_set(package, resource): tables = list(table_set.tables) if not len(tables): log.error("No tables were found in the source file.") return row_set = tables[0] offset, headers = headers_guess(row_set.sample) row_set.register_processor(headers_processor(headers)) row_set.register_processor(offset_processor(offset + 1)) types = type_guess(row_set.sample, strict=True) row_set.register_processor(types_processor(types)) return row_set
Generate an iterator over all the rows in this resource's source data.
### Input: Generate an iterator over all the rows in this resource's source data. ### Response: def resource_row_set(package, resource): tables = list(table_set.tables) if not len(tables): log.error("No tables were found in the source file.") return row_set = tables[0] offset, headers = headers_guess(row_set.sample) row_set.register_processor(headers_processor(headers)) row_set.register_processor(offset_processor(offset + 1)) types = type_guess(row_set.sample, strict=True) row_set.register_processor(types_processor(types)) return row_set
def save_supy( df_output: pandas.DataFrame, df_state_final: pandas.DataFrame, freq_s: int = 3600, site: str = '', path_dir_save: str = Path('.'), path_runcontrol: str = None,)->list: if path_runcontrol is not None: freq_s, path_dir_save, site = get_save_info(path_runcontrol) list_path_save = save_df_output(df_output, freq_s, site, path_dir_save) path_state_save = save_df_state(df_state_final, site, path_dir_save) list_path_save.append(path_state_save) return list_path_save
Save SuPy run results to files Parameters ---------- df_output : pandas.DataFrame DataFrame of output df_state_final : pandas.DataFrame DataFrame of final model states freq_s : int, optional Output frequency in seconds (the default is 3600, which indicates hourly output) site : str, optional Site identifier (the default is '', which indicates site identifier will be left empty) path_dir_save : str, optional Path to directory to saving the files (the default is Path('.'), which indicates the current working directory) path_runcontrol : str, optional Path to SUEWS :ref:`RunControl.nml <suews:RunControl.nml>`, which, if set, will be preferably used to derive `freq_s`, `site` and `path_dir_save`. (the default is None, which is unset) Returns ------- list a list of paths of saved files Examples -------- 1. save results of a supy run to the current working directory with default settings >>> list_path_save = supy.save_supy(df_output, df_state_final) 2. save results according to settings in :ref:`RunControl.nml <suews:RunControl.nml>` >>> list_path_save = supy.save_supy(df_output, df_state_final, path_runcontrol='path/to/RunControl.nml') 3. save results of a supy run at resampling frequency of 1800 s (i.e., half-hourly results) under the site code ``Test`` to a customised location 'path/to/some/dir' >>> list_path_save = supy.save_supy(df_output, df_state_final, freq_s=1800, site='Test', path_dir_save='path/to/some/dir')
### Input: Save SuPy run results to files Parameters ---------- df_output : pandas.DataFrame DataFrame of output df_state_final : pandas.DataFrame DataFrame of final model states freq_s : int, optional Output frequency in seconds (the default is 3600, which indicates hourly output) site : str, optional Site identifier (the default is '', which indicates site identifier will be left empty) path_dir_save : str, optional Path to directory to saving the files (the default is Path('.'), which indicates the current working directory) path_runcontrol : str, optional Path to SUEWS :ref:`RunControl.nml <suews:RunControl.nml>`, which, if set, will be preferably used to derive `freq_s`, `site` and `path_dir_save`. (the default is None, which is unset) Returns ------- list a list of paths of saved files Examples -------- 1. save results of a supy run to the current working directory with default settings >>> list_path_save = supy.save_supy(df_output, df_state_final) 2. save results according to settings in :ref:`RunControl.nml <suews:RunControl.nml>` >>> list_path_save = supy.save_supy(df_output, df_state_final, path_runcontrol='path/to/RunControl.nml') 3. save results of a supy run at resampling frequency of 1800 s (i.e., half-hourly results) under the site code ``Test`` to a customised location 'path/to/some/dir' >>> list_path_save = supy.save_supy(df_output, df_state_final, freq_s=1800, site='Test', path_dir_save='path/to/some/dir') ### Response: def save_supy( df_output: pandas.DataFrame, df_state_final: pandas.DataFrame, freq_s: int = 3600, site: str = '', path_dir_save: str = Path('.'), path_runcontrol: str = None,)->list: if path_runcontrol is not None: freq_s, path_dir_save, site = get_save_info(path_runcontrol) list_path_save = save_df_output(df_output, freq_s, site, path_dir_save) path_state_save = save_df_state(df_state_final, site, path_dir_save) list_path_save.append(path_state_save) return list_path_save
def install_packages(self): installs, upgraded = [], [] for inst in (self.dep_install + self.install): package = (self.tmp_path + inst).split() pkg_ver = "{0}-{1}".format(split_package(inst)[0], split_package(inst)[1]) self.checksums(inst) if GetFromInstalled(split_package(inst)[0]).name(): print("[ {0}upgrading{1} ] --> {2}".format( self.meta.color["YELLOW"], self.meta.color["ENDC"], inst)) upgraded.append(pkg_ver) if "--reinstall" in self.flag: PackageManager(package).upgrade("--reinstall") else: PackageManager(package).upgrade("--install-new") else: print("[ {0}installing{1} ] --> {2}".format( self.meta.color["GREEN"], self.meta.color["ENDC"], inst)) installs.append(pkg_ver) PackageManager(package).upgrade("--install-new") return [installs, upgraded]
Install or upgrade packages
### Input: Install or upgrade packages ### Response: def install_packages(self): installs, upgraded = [], [] for inst in (self.dep_install + self.install): package = (self.tmp_path + inst).split() pkg_ver = "{0}-{1}".format(split_package(inst)[0], split_package(inst)[1]) self.checksums(inst) if GetFromInstalled(split_package(inst)[0]).name(): print("[ {0}upgrading{1} ] --> {2}".format( self.meta.color["YELLOW"], self.meta.color["ENDC"], inst)) upgraded.append(pkg_ver) if "--reinstall" in self.flag: PackageManager(package).upgrade("--reinstall") else: PackageManager(package).upgrade("--install-new") else: print("[ {0}installing{1} ] --> {2}".format( self.meta.color["GREEN"], self.meta.color["ENDC"], inst)) installs.append(pkg_ver) PackageManager(package).upgrade("--install-new") return [installs, upgraded]
def parse_shebang_from_file(path): if not os.path.lexists(path): raise ValueError(.format(path)) if not os.access(path, os.X_OK): return () with open(path, ) as f: return parse_shebang(f)
Parse the shebang given a file path.
### Input: Parse the shebang given a file path. ### Response: def parse_shebang_from_file(path): if not os.path.lexists(path): raise ValueError(.format(path)) if not os.access(path, os.X_OK): return () with open(path, ) as f: return parse_shebang(f)
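The shebang parser in the record above lost its string literals during extraction (the file mode and error message are gone) and depends on an undefined `parse_shebang` helper. A minimal self-contained sketch of the same idea, where the `'rb'` mode, the error message, and the `parse_shebang` body are all assumptions and not the original library code:

```python
import os
import tempfile


def parse_shebang(f):
    # Read the first line of an open binary file; return the shebang
    # argv as a tuple, or an empty tuple if there is no '#!' line.
    first_line = f.readline()
    if not first_line.startswith(b'#!'):
        return ()
    return tuple(first_line[2:].decode('UTF-8').strip().split())


def parse_shebang_from_file(path):
    # Parse the shebang given a file path; non-executable files yield ().
    if not os.path.lexists(path):
        raise ValueError('{} does not exist.'.format(path))
    if not os.access(path, os.X_OK):
        return ()
    with open(path, 'rb') as f:
        return parse_shebang(f)
```

For example, a script beginning with `#!/usr/bin/env python` parses to `('/usr/bin/env', 'python')` once the file is marked executable.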
def sort_vid_split(vs): match = var_re.match(vs) if match is None: raise ValueError(.format(str(vs))) else: return match.groups()
Split a valid variable string into its variable sort and id. Examples: >>> sort_vid_split('h3') ('h', '3') >>> sort_vid_split('ref-ind12') ('ref-ind', '12')
### Input: Split a valid variable string into its variable sort and id. Examples: >>> sort_vid_split('h3') ('h', '3') >>> sort_vid_split('ref-ind12') ('ref-ind', '12') ### Response: def sort_vid_split(vs): match = var_re.match(vs) if match is None: raise ValueError(.format(str(vs))) else: return match.groups()
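The splitter in the record above relies on a module-level `var_re` pattern that the snippet does not define. A self-contained sketch whose regex is an assumption inferred from the doctest examples (`'h3'` and `'ref-ind12'`), not the library's actual pattern:

```python
import re

# Hypothetical pattern: a sort name made of word characters and hyphens
# ending in a non-digit, followed by a purely numeric id.
var_re = re.compile(r'^([-\w]*\D)(\d+)$')


def sort_vid_split(vs):
    # Split a variable string into its (sort, id) pair, e.g. 'h3' -> ('h', '3').
    match = var_re.match(vs)
    if match is None:
        raise ValueError('invalid variable string: {}'.format(vs))
    return match.groups()
```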
def combine_HSPs(a): m = a[0] if len(a) == 1: return m for b in a[1:]: assert m.query == b.query assert m.subject == b.subject m.hitlen += b.hitlen m.nmismatch += b.nmismatch m.ngaps += b.ngaps m.qstart = min(m.qstart, b.qstart) m.qstop = max(m.qstop, b.qstop) m.sstart = min(m.sstart, b.sstart) m.sstop = max(m.sstop, b.sstop) if m.has_score: m.score += b.score m.pctid = 100 - (m.nmismatch + m.ngaps) * 100. / m.hitlen return m
Combine HSPs into a single BlastLine.
### Input: Combine HSPs into a single BlastLine. ### Response: def combine_HSPs(a): m = a[0] if len(a) == 1: return m for b in a[1:]: assert m.query == b.query assert m.subject == b.subject m.hitlen += b.hitlen m.nmismatch += b.nmismatch m.ngaps += b.ngaps m.qstart = min(m.qstart, b.qstart) m.qstop = max(m.qstop, b.qstop) m.sstart = min(m.sstart, b.sstart) m.sstop = max(m.sstop, b.sstop) if m.has_score: m.score += b.score m.pctid = 100 - (m.nmismatch + m.ngaps) * 100. / m.hitlen return m
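The merge logic in the record above operates on BlastLine objects that the snippet does not define. A runnable sketch with a minimal stand-in class, whose field list is an assumption reconstructed from the attributes the function touches:

```python
class HSP:
    # Minimal stand-in for a BlastLine-style hit (illustrative only);
    # the fields mirror exactly what combine_HSPs reads and writes.
    def __init__(self, query, subject, hitlen, nmismatch, ngaps,
                 qstart, qstop, sstart, sstop, score):
        self.query, self.subject = query, subject
        self.hitlen, self.nmismatch, self.ngaps = hitlen, nmismatch, ngaps
        self.qstart, self.qstop = qstart, qstop
        self.sstart, self.sstop = sstart, sstop
        self.score, self.has_score = score, True
        self.pctid = 100 - (nmismatch + ngaps) * 100. / hitlen


def combine_HSPs(a):
    # Fold a list of HSPs for the same query/subject pair into one:
    # lengths, mismatches, gaps and scores sum; coordinates take the hull.
    m = a[0]
    for b in a[1:]:
        assert m.query == b.query and m.subject == b.subject
        m.hitlen += b.hitlen
        m.nmismatch += b.nmismatch
        m.ngaps += b.ngaps
        m.qstart = min(m.qstart, b.qstart)
        m.qstop = max(m.qstop, b.qstop)
        m.sstart = min(m.sstart, b.sstart)
        m.sstop = max(m.sstop, b.sstop)
        if m.has_score:
            m.score += b.score
        m.pctid = 100 - (m.nmismatch + m.ngaps) * 100. / m.hitlen
    return m
```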
def remember_identity(self, subject, authc_token, account_id): try: identifiers = self.get_identity_to_remember(subject, account_id) except AttributeError: msg = "Neither account_id nor identifier arguments passed" raise AttributeError(msg) encrypted = self.convert_identifiers_to_bytes(identifiers) self.remember_encrypted_identity(subject, encrypted)
Yosai consolidates rememberIdentity, an overloaded method in java, to a method that will use an identifier-else-account logic. Remembers a subject-unique identity for retrieval later. This implementation first resolves the exact identifying attributes to remember. It then remembers these identifying attributes by calling remember_identity(Subject, IdentifierCollection) :param subject: the subject for which the identifying attributes are being remembered :param authc_token: ignored in the AbstractRememberMeManager :param account_id: the account id of authenticated account
### Input: Yosai consolidates rememberIdentity, an overloaded method in java, to a method that will use an identifier-else-account logic. Remembers a subject-unique identity for retrieval later. This implementation first resolves the exact identifying attributes to remember. It then remembers these identifying attributes by calling remember_identity(Subject, IdentifierCollection) :param subject: the subject for which the identifying attributes are being remembered :param authc_token: ignored in the AbstractRememberMeManager :param account_id: the account id of authenticated account ### Response: def remember_identity(self, subject, authc_token, account_id): try: identifiers = self.get_identity_to_remember(subject, account_id) except AttributeError: msg = "Neither account_id nor identifier arguments passed" raise AttributeError(msg) encrypted = self.convert_identifiers_to_bytes(identifiers) self.remember_encrypted_identity(subject, encrypted)
def set_mlimits(self, row, column, min=None, max=None): subplot = self.get_subplot_at(row, column) subplot.set_mlimits(min, max)
Set limits for the point meta (colormap). Point meta values outside this range will be clipped. :param min: value for start of the colormap. :param max: value for end of the colormap.
### Input: Set limits for the point meta (colormap). Point meta values outside this range will be clipped. :param min: value for start of the colormap. :param max: value for end of the colormap. ### Response: def set_mlimits(self, row, column, min=None, max=None): subplot = self.get_subplot_at(row, column) subplot.set_mlimits(min, max)
def check_future_import(node): savenode = node if not (node.type == syms.simple_stmt and node.children): return set() node = node.children[0] if not (node.type == syms.import_from and hasattr(node.children[1], 'value') and node.children[1].value == u'__future__'): return set() if node.children[3].type == token.LPAR: node = node.children[4] else: node = node.children[3] if node.type == syms.import_as_names: result = set() for n in node.children: if n.type == token.NAME: result.add(n.value) elif n.type == syms.import_as_name: n = n.children[0] assert n.type == token.NAME result.add(n.value) return result elif node.type == syms.import_as_name: node = node.children[0] assert node.type == token.NAME return set([node.value]) elif node.type == token.NAME: return set([node.value]) else: assert False, "strange import: %s" % savenode
If this is a future import, return set of symbols that are imported, else return None.
### Input: If this is a future import, return set of symbols that are imported, else return None. ### Response: def check_future_import(node): savenode = node if not (node.type == syms.simple_stmt and node.children): return set() node = node.children[0] if not (node.type == syms.import_from and hasattr(node.children[1], 'value') and node.children[1].value == u'__future__'): return set() if node.children[3].type == token.LPAR: node = node.children[4] else: node = node.children[3] if node.type == syms.import_as_names: result = set() for n in node.children: if n.type == token.NAME: result.add(n.value) elif n.type == syms.import_as_name: n = n.children[0] assert n.type == token.NAME result.add(n.value) return result elif node.type == syms.import_as_name: node = node.children[0] assert node.type == token.NAME return set([node.value]) elif node.type == token.NAME: return set([node.value]) else: assert False, "strange import: %s" % savenode
def _generate_custom_annotation_processors(self, ns, data_type, extra_annotations=()): dt, _, _ = unwrap(data_type) if is_struct_type(dt) or is_union_type(dt): annotation_types_seen = set() for annotation in get_custom_annotations_recursive(dt): if annotation.annotation_type not in annotation_types_seen: yield (annotation.annotation_type, generate_func_call( , args=[class_name_for_annotation_type(annotation.annotation_type, ns), ] )) annotation_types_seen.add(annotation.annotation_type) elif is_list_type(dt): for annotation_type, recursive_processor in self._generate_custom_annotation_processors( ns, dt.data_type): yield (annotation_type, generate_func_call( , args=[recursive_processor] )) elif is_map_type(dt): for annotation_type, recursive_processor in self._generate_custom_annotation_processors( ns, dt.value_data_type): yield (annotation_type, generate_func_call( , args=[recursive_processor] )) for annotation in itertools.chain(get_custom_annotations_for_alias(data_type), extra_annotations): yield (annotation.annotation_type, generate_func_call( , args=[, self._generate_custom_annotation_instance(ns, annotation)] ))
Generates code that will run a custom function 'processor' on every field with a custom annotation, no matter how deep (recursively) it might be located in data_type (incl. in elements of lists or maps). If extra_annotations is passed, it's assumed to be a list of custom annotation applied directly onto data_type (e.g. because it's a field in a struct). Yields pairs of (annotation_type, code) where code is code that evaluates to a function that should be executed with an instance of data_type as the only parameter, and whose return value should replace that instance.
### Input: Generates code that will run a custom function 'processor' on every field with a custom annotation, no matter how deep (recursively) it might be located in data_type (incl. in elements of lists or maps). If extra_annotations is passed, it's assumed to be a list of custom annotation applied directly onto data_type (e.g. because it's a field in a struct). Yields pairs of (annotation_type, code) where code is code that evaluates to a function that should be executed with an instance of data_type as the only parameter, and whose return value should replace that instance. ### Response: def _generate_custom_annotation_processors(self, ns, data_type, extra_annotations=()): dt, _, _ = unwrap(data_type) if is_struct_type(dt) or is_union_type(dt): annotation_types_seen = set() for annotation in get_custom_annotations_recursive(dt): if annotation.annotation_type not in annotation_types_seen: yield (annotation.annotation_type, generate_func_call( , args=[class_name_for_annotation_type(annotation.annotation_type, ns), ] )) annotation_types_seen.add(annotation.annotation_type) elif is_list_type(dt): for annotation_type, recursive_processor in self._generate_custom_annotation_processors( ns, dt.data_type): yield (annotation_type, generate_func_call( , args=[recursive_processor] )) elif is_map_type(dt): for annotation_type, recursive_processor in self._generate_custom_annotation_processors( ns, dt.value_data_type): yield (annotation_type, generate_func_call( , args=[recursive_processor] )) for annotation in itertools.chain(get_custom_annotations_for_alias(data_type), extra_annotations): yield (annotation.annotation_type, generate_func_call( , args=[, self._generate_custom_annotation_instance(ns, annotation)] ))
def get_field_SQL(self, field_name, field): field_type = "" is_pk = field.options.get(, False) if issubclass(field.type, bool): field_type = elif issubclass(field.type, long): if is_pk: field_type = else: field_type = elif issubclass(field.type, int): field_type = if is_pk: field_type += elif issubclass(field.type, basestring): fo = field.options if field.is_ref():
returns the SQL for a given field with full type information http://www.sqlite.org/datatype3.html field_name -- string -- the field's name field -- Field() -- the set options for the field return -- string -- the field type (eg, foo BOOL NOT NULL)
### Input: returns the SQL for a given field with full type information http://www.sqlite.org/datatype3.html field_name -- string -- the field's name field -- Field() -- the set options for the field return -- string -- the field type (eg, foo BOOL NOT NULL) ### Response: def get_field_SQL(self, field_name, field): field_type = "" is_pk = field.options.get(, False) if issubclass(field.type, bool): field_type = elif issubclass(field.type, long): if is_pk: field_type = else: field_type = elif issubclass(field.type, int): field_type = if is_pk: field_type += elif issubclass(field.type, basestring): fo = field.options if field.is_ref():
def _download_from_s3(bucket, key, version=None): s3 = boto3.client('s3') extra_args = {} if version: extra_args["VersionId"] = version with tempfile.TemporaryFile() as fp: try: s3.download_fileobj( bucket, key, fp, ExtraArgs=extra_args) fp.seek(0) return fp.read() except botocore.exceptions.ClientError: LOG.error("Unable to download Swagger document from S3 Bucket=%s Key=%s Version=%s", bucket, key, version) raise
Download a file from given S3 location, if available. Parameters ---------- bucket : str S3 Bucket name key : str S3 Bucket Key aka file path version : str Optional Version ID of the file Returns ------- str Contents of the file that was downloaded Raises ------ botocore.exceptions.ClientError if we were unable to download the file from S3
### Input: Download a file from given S3 location, if available. Parameters ---------- bucket : str S3 Bucket name key : str S3 Bucket Key aka file path version : str Optional Version ID of the file Returns ------- str Contents of the file that was downloaded Raises ------ botocore.exceptions.ClientError if we were unable to download the file from S3 ### Response: def _download_from_s3(bucket, key, version=None): s3 = boto3.client('s3') extra_args = {} if version: extra_args["VersionId"] = version with tempfile.TemporaryFile() as fp: try: s3.download_fileobj( bucket, key, fp, ExtraArgs=extra_args) fp.seek(0) return fp.read() except botocore.exceptions.ClientError: LOG.error("Unable to download Swagger document from S3 Bucket=%s Key=%s Version=%s", bucket, key, version) raise
def join(self, ToMerge, keycols=None, nullvals=None, renamer=None, returnrenaming=False, selfname=None, Names=None): if isinstance(ToMerge,np.ndarray): ToMerge = [ToMerge] if isinstance(ToMerge,dict): assert selfname not in ToMerge.keys(), \ ('Can\'t use "' + selfname + '" for name of one of the things to merge, since it is the same name as the self object.') ToMerge.update({selfname:self}) else: ToMerge = [self] + ToMerge return tab_join(ToMerge, keycols=keycols, nullvals=nullvals, renamer=renamer, returnrenaming=returnrenaming, Names=Names)
Wrapper for spreadsheet.join, but handles coloring attributes. The `selfname` argument allows naming of `self` to be used if `ToMerge` is a dictionary. **See also:** :func:`tabular.spreadsheet.join`, :func:`tab_join`
### Input: Wrapper for spreadsheet.join, but handles coloring attributes. The `selfname` argument allows naming of `self` to be used if `ToMerge` is a dictionary. **See also:** :func:`tabular.spreadsheet.join`, :func:`tab_join` ### Response: def join(self, ToMerge, keycols=None, nullvals=None, renamer=None, returnrenaming=False, selfname=None, Names=None): if isinstance(ToMerge,np.ndarray): ToMerge = [ToMerge] if isinstance(ToMerge,dict): assert selfname not in ToMerge.keys(), \ ('Can\'t use "' + selfname + '" for name of one of the things to merge, since it is the same name as the self object.') ToMerge.update({selfname:self}) else: ToMerge = [self] + ToMerge return tab_join(ToMerge, keycols=keycols, nullvals=nullvals, renamer=renamer, returnrenaming=returnrenaming, Names=Names)
def create(tournament, name, **params): params.update({"name": name}) return api.fetch_and_parse( "POST", "tournaments/%s/participants" % tournament, "participant", **params)
Add a participant to a tournament.
### Input: Add a participant to a tournament. ### Response: def create(tournament, name, **params): params.update({"name": name}) return api.fetch_and_parse( "POST", "tournaments/%s/participants" % tournament, "participant", **params)
def delete(workflow_id: str = None, workflow_version: str = None): if workflow_id is None and workflow_version is None: keys = DB.get_keys("workflow_definitions:*") DB.delete(*keys) elif workflow_id is not None and workflow_version is None: keys = DB.get_keys("workflow_definitions:{}:*".format(workflow_id)) DB.delete(*keys) elif workflow_id is None and workflow_version is not None: keys = DB.get_keys("workflow_definitions:*:{}" .format(workflow_version)) DB.delete(*keys) else: name = "workflow_definitions:{}:{}".format(workflow_id, workflow_version) DB.delete(name)
Delete workflow definitions. Args: workflow_id (str, optional): Optional workflow identifier workflow_version (str, optional): Optional workflow identifier version If workflow_id and workflow_version are None, delete all workflow definitions.
### Input: Delete workflow definitions. Args: workflow_id (str, optional): Optional workflow identifier workflow_version (str, optional): Optional workflow identifier version If workflow_id and workflow_version are None, delete all workflow definitions. ### Response: def delete(workflow_id: str = None, workflow_version: str = None): if workflow_id is None and workflow_version is None: keys = DB.get_keys("workflow_definitions:*") DB.delete(*keys) elif workflow_id is not None and workflow_version is None: keys = DB.get_keys("workflow_definitions:{}:*".format(workflow_id)) DB.delete(*keys) elif workflow_id is None and workflow_version is not None: keys = DB.get_keys("workflow_definitions:*:{}" .format(workflow_version)) DB.delete(*keys) else: name = "workflow_definitions:{}:{}".format(workflow_id, workflow_version) DB.delete(name)
def make_table(grid): cell_width = 2 + max( reduce( lambda x, y: x+y, [[len(item) for item in row] for row in grid], [] ) ) num_cols = len(grid[0]) rst = table_div(num_cols, cell_width, 0) header_flag = 1 for row in grid: rst = rst + + .join( [normalize_cell(x, cell_width-1) for x in row] ) + rst = rst + table_div(num_cols, cell_width, header_flag) header_flag = 0 return rst
Make a RST-compatible table From http://stackoverflow.com/a/12539081
### Input: Make a RST-compatible table From http://stackoverflow.com/a/12539081 ### Response: def make_table(grid): cell_width = 2 + max( reduce( lambda x, y: x+y, [[len(item) for item in row] for row in grid], [] ) ) num_cols = len(grid[0]) rst = table_div(num_cols, cell_width, 0) header_flag = 1 for row in grid: rst = rst + + .join( [normalize_cell(x, cell_width-1) for x in row] ) + rst = rst + table_div(num_cols, cell_width, header_flag) header_flag = 0 return rst
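The table builder in the record above lost its string literals and calls two undefined helpers (`table_div`, `normalize_cell`). A self-contained sketch in the same spirit, where the helper bodies and the exact separators are assumptions modelled on RST grid-table conventions rather than the original code:

```python
def table_div(num_cols, cell_width, header_flag):
    # '=' underlines a header row, '-' separates body rows (RST grid tables).
    sep = '=' if header_flag else '-'
    return '+' + '+'.join([sep * cell_width] * num_cols) + '+\n'


def normalize_cell(text, width):
    # Right-pad a cell to a fixed width so the grid columns line up.
    return text + ' ' * (width - len(text))


def make_table(grid):
    # Render a list of equal-length rows as an RST grid table;
    # the first row is treated as the header.
    cell_width = 2 + max(len(item) for row in grid for item in row)
    num_cols = len(grid[0])
    rst = table_div(num_cols, cell_width, 0)
    header_flag = 1
    for row in grid:
        rst += '| ' + '| '.join(
            normalize_cell(x, cell_width - 1) for x in row) + '|\n'
        rst += table_div(num_cols, cell_width, header_flag)
        header_flag = 0
    return rst
```

Every emitted line has the same width, and the divider after the first row uses `=` so RST parsers recognise it as a header.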
def _validate_machines(machines, add_error): if not machines: return for machine_id, machine in machines.items(): if machine_id < 0: add_error( .format(machine_id)) if machine is None: continue elif not isdict(machine): add_error( .format(machine_id)) continue label = .format(machine_id) _validate_constraints(machine.get(), label, add_error) _validate_series(machine.get(), label, add_error) _validate_annotations(machine.get(), label, add_error)
Validate the given machines section. Validation includes machines constraints, series and annotations. Use the given add_error callable to register validation error.
### Input: Validate the given machines section. Validation includes machines constraints, series and annotations. Use the given add_error callable to register validation error. ### Response: def _validate_machines(machines, add_error): if not machines: return for machine_id, machine in machines.items(): if machine_id < 0: add_error( .format(machine_id)) if machine is None: continue elif not isdict(machine): add_error( .format(machine_id)) continue label = .format(machine_id) _validate_constraints(machine.get(), label, add_error) _validate_series(machine.get(), label, add_error) _validate_annotations(machine.get(), label, add_error)
def set_logging_level(args): "Computes and sets the logging level from the parsed arguments." root_logger = logging.getLogger() level = logging.INFO logging.getLogger().setLevel(logging.WARNING) if "verbose" in args and args.verbose is not None: logging.getLogger().setLevel(0) if args.verbose > 1: level = 5 elif args.verbose > 0: level = logging.DEBUG else: logging.critical("verbose is an unexpected value. (%s) exiting.", args.verbose) sys.exit(2) elif "quiet" in args and args.quiet is not None: if args.quiet > 1: level = logging.ERROR elif args.quiet > 0: level = logging.WARNING else: logging.critical("quiet is an unexpected value. (%s) exiting.", args.quiet) if level is not None: root_logger.setLevel(level) if args.silence_urllib3: requests.packages.urllib3.disable_warnings()
Computes and sets the logging level from the parsed arguments.
### Input: Computes and sets the logging level from the parsed arguments. ### Response: def set_logging_level(args): "Computes and sets the logging level from the parsed arguments." root_logger = logging.getLogger() level = logging.INFO logging.getLogger().setLevel(logging.WARNING) if "verbose" in args and args.verbose is not None: logging.getLogger().setLevel(0) if args.verbose > 1: level = 5 elif args.verbose > 0: level = logging.DEBUG else: logging.critical("verbose is an unexpected value. (%s) exiting.", args.verbose) sys.exit(2) elif "quiet" in args and args.quiet is not None: if args.quiet > 1: level = logging.ERROR elif args.quiet > 0: level = logging.WARNING else: logging.critical("quiet is an unexpected value. (%s) exiting.", args.quiet) if level is not None: root_logger.setLevel(level) if args.silence_urllib3: requests.packages.urllib3.disable_warnings()
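The verbosity thresholds in the record above can be isolated as a small pure function; this sketch (the function name `compute_level` is hypothetical) mirrors the same mapping from `-v`/`-q` counts to logging levels:

```python
import logging


def compute_level(verbose=0, quiet=0):
    # Map repeated -v / -q flags to a logging level, using the same
    # thresholds as the record above: -vv -> 5 (below DEBUG), -v -> DEBUG,
    # -qq -> ERROR, -q -> WARNING, neither -> INFO.
    if verbose > 1:
        return 5
    if verbose > 0:
        return logging.DEBUG
    if quiet > 1:
        return logging.ERROR
    if quiet > 0:
        return logging.WARNING
    return logging.INFO
```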
def sendImage( self, image_id, message=None, thread_id=None, thread_type=ThreadType.USER, is_gif=False, ): if is_gif: mimetype = "image/gif" else: mimetype = "image/png" return self._sendFiles( files=[(image_id, mimetype)], message=message, thread_id=thread_id, thread_type=thread_type, )
Deprecated. Use :func:`fbchat.Client._sendFiles` instead
### Input: Deprecated. Use :func:`fbchat.Client._sendFiles` instead ### Response: def sendImage( self, image_id, message=None, thread_id=None, thread_type=ThreadType.USER, is_gif=False, ): if is_gif: mimetype = "image/gif" else: mimetype = "image/png" return self._sendFiles( files=[(image_id, mimetype)], message=message, thread_id=thread_id, thread_type=thread_type, )
def create_observe_operations(self, terminal, reward, index): num_episodes = tf.count_nonzero(input_tensor=terminal, dtype=util.tf_dtype()) increment_episode = tf.assign_add(ref=self.episode, value=tf.to_int64(x=num_episodes)) increment_global_episode = tf.assign_add(ref=self.global_episode, value=tf.to_int64(x=num_episodes)) with tf.control_dependencies(control_inputs=(increment_episode, increment_global_episode)): fn = (lambda x: tf.stop_gradient(input=x[:self.list_buffer_index[index]])) states = util.map_tensors(fn=fn, tensors=self.list_states_buffer, index=index) internals = util.map_tensors(fn=fn, tensors=self.list_internals_buffer, index=index) actions = util.map_tensors(fn=fn, tensors=self.list_actions_buffer, index=index) terminal = tf.stop_gradient(input=terminal) reward = tf.stop_gradient(input=reward) observation = self.fn_observe_timestep( states=states, internals=internals, actions=actions, terminal=terminal, reward=reward ) with tf.control_dependencies(control_inputs=(observation,)): reset_index = tf.assign(ref=self.list_buffer_index[index], value=0) with tf.control_dependencies(control_inputs=(reset_index,)): self.episode_output = self.global_episode + 0 self.list_buffer_index_reset_op = tf.group( *(tf.assign(ref=self.list_buffer_index[n], value=0) for n in range(self.num_parallel)) )
Returns the tf op to fetch when an observation batch is passed in (e.g. an episode's rewards and terminals). Uses the filled tf buffers for states, actions and internals to run the tf_observe_timestep (model-dependent), resets buffer index and increases counters (episodes, timesteps). Args: terminal: The 1D tensor (bool) of terminal signals to process (more than one True within that list is ok). reward: The 1D tensor (float) of rewards to process. Returns: Tf op to fetch when `observe()` is called.
### Input: Returns the tf op to fetch when an observation batch is passed in (e.g. an episode's rewards and terminals). Uses the filled tf buffers for states, actions and internals to run the tf_observe_timestep (model-dependent), resets buffer index and increases counters (episodes, timesteps). Args: terminal: The 1D tensor (bool) of terminal signals to process (more than one True within that list is ok). reward: The 1D tensor (float) of rewards to process. Returns: Tf op to fetch when `observe()` is called. ### Response: def create_observe_operations(self, terminal, reward, index): num_episodes = tf.count_nonzero(input_tensor=terminal, dtype=util.tf_dtype()) increment_episode = tf.assign_add(ref=self.episode, value=tf.to_int64(x=num_episodes)) increment_global_episode = tf.assign_add(ref=self.global_episode, value=tf.to_int64(x=num_episodes)) with tf.control_dependencies(control_inputs=(increment_episode, increment_global_episode)): fn = (lambda x: tf.stop_gradient(input=x[:self.list_buffer_index[index]])) states = util.map_tensors(fn=fn, tensors=self.list_states_buffer, index=index) internals = util.map_tensors(fn=fn, tensors=self.list_internals_buffer, index=index) actions = util.map_tensors(fn=fn, tensors=self.list_actions_buffer, index=index) terminal = tf.stop_gradient(input=terminal) reward = tf.stop_gradient(input=reward) observation = self.fn_observe_timestep( states=states, internals=internals, actions=actions, terminal=terminal, reward=reward ) with tf.control_dependencies(control_inputs=(observation,)): reset_index = tf.assign(ref=self.list_buffer_index[index], value=0) with tf.control_dependencies(control_inputs=(reset_index,)): self.episode_output = self.global_episode + 0 self.list_buffer_index_reset_op = tf.group( *(tf.assign(ref=self.list_buffer_index[n], value=0) for n in range(self.num_parallel)) )
def _process_json_data(person_data): person = SwsPerson() if person_data["BirthDate"]: person.birth_date = parse(person_data["BirthDate"]).date() person.directory_release = person_data["DirectoryRelease"] person.email = person_data["Email"] person.employee_id = person_data["EmployeeID"] person.first_name = person_data["FirstName"] person.gender = person_data["Gender"] person.last_name = person_data["LastName"] person.student_name = person_data["StudentName"] if person_data["LastEnrolled"] is not None: last_enrolled = LastEnrolled() last_enrolled.href = person_data["LastEnrolled"]["Href"] last_enrolled.quarter = person_data["LastEnrolled"]["Quarter"] last_enrolled.year = person_data["LastEnrolled"]["Year"] person.last_enrolled = last_enrolled if person_data["LocalAddress"] is not None: address_data = person_data["LocalAddress"] local_address = StudentAddress() local_address.city = address_data["City"] local_address.country = address_data["Country"] local_address.street_line1 = address_data["Line1"] local_address.street_line2 = address_data["Line2"] local_address.postal_code = address_data["PostalCode"] local_address.state = address_data["State"] local_address.zip_code = address_data["Zip"] person.local_address = local_address person.local_phone = person_data["LocalPhone"] if person_data["PermanentAddress"] is not None: perm_address_data = person_data["PermanentAddress"] permanent_address = StudentAddress() permanent_address.city = perm_address_data["City"] permanent_address.country = perm_address_data["Country"] permanent_address.street_line1 = perm_address_data["Line1"] permanent_address.street_line2 = perm_address_data["Line2"] permanent_address.postal_code = perm_address_data["PostalCode"] permanent_address.state = perm_address_data["State"] permanent_address.zip_code = perm_address_data["Zip"] person.permanent_address = permanent_address person.permanent_phone = person_data["PermanentPhone"] person.uwregid = person_data["RegID"] person.student_number = person_data["StudentNumber"] person.student_system_key = person_data["StudentSystemKey"] person.uwnetid = person_data["UWNetID"] person.visa_type = person_data["VisaType"] return person
Returns a uw_sws.models.SwsPerson object
### Input: Returns a uw_sws.models.SwsPerson object ### Response: def _process_json_data(person_data): person = SwsPerson() if person_data["BirthDate"]: person.birth_date = parse(person_data["BirthDate"]).date() person.directory_release = person_data["DirectoryRelease"] person.email = person_data["Email"] person.employee_id = person_data["EmployeeID"] person.first_name = person_data["FirstName"] person.gender = person_data["Gender"] person.last_name = person_data["LastName"] person.student_name = person_data["StudentName"] if person_data["LastEnrolled"] is not None: last_enrolled = LastEnrolled() last_enrolled.href = person_data["LastEnrolled"]["Href"] last_enrolled.quarter = person_data["LastEnrolled"]["Quarter"] last_enrolled.year = person_data["LastEnrolled"]["Year"] person.last_enrolled = last_enrolled if person_data["LocalAddress"] is not None: address_data = person_data["LocalAddress"] local_address = StudentAddress() local_address.city = address_data["City"] local_address.country = address_data["Country"] local_address.street_line1 = address_data["Line1"] local_address.street_line2 = address_data["Line2"] local_address.postal_code = address_data["PostalCode"] local_address.state = address_data["State"] local_address.zip_code = address_data["Zip"] person.local_address = local_address person.local_phone = person_data["LocalPhone"] if person_data["PermanentAddress"] is not None: perm_address_data = person_data["PermanentAddress"] permanent_address = StudentAddress() permanent_address.city = perm_address_data["City"] permanent_address.country = perm_address_data["Country"] permanent_address.street_line1 = perm_address_data["Line1"] permanent_address.street_line2 = perm_address_data["Line2"] permanent_address.postal_code = perm_address_data["PostalCode"] permanent_address.state = perm_address_data["State"] permanent_address.zip_code = perm_address_data["Zip"] person.permanent_address = permanent_address person.permanent_phone = person_data["PermanentPhone"] person.uwregid = person_data["RegID"] person.student_number = person_data["StudentNumber"] person.student_system_key = person_data["StudentSystemKey"] person.uwnetid = person_data["UWNetID"] person.visa_type = person_data["VisaType"] return person
async def volume(self, ctx, volume: int): if ctx.voice_client is None: return await ctx.send("Not connected to a voice channel.") ctx.voice_client.source.volume = volume / 100 await ctx.send("Changed volume to {}%".format(volume))
Changes the player's volume
### Input: Changes the player's volume ### Response: async def volume(self, ctx, volume: int): if ctx.voice_client is None: return await ctx.send("Not connected to a voice channel.") ctx.voice_client.source.volume = volume / 100 await ctx.send("Changed volume to {}%".format(volume))
def _ProgressMeterUpdate(bar, value, text_elem, *args): global _my_windows if bar == None: return False if bar.BarExpired: return False message, w, h = ConvertArgsToSingleString(*args) text_elem.Update(message) bar.CurrentValue = value rc = bar.UpdateBar(value) if value >= bar.MaxValue or not rc: bar.BarExpired = True bar.ParentForm._Close() if rc: _my_windows.Decrement() if bar.ParentForm.RootNeedsDestroying: try: bar.ParentForm.TKroot.destroy() except: pass bar.ParentForm.RootNeedsDestroying = False bar.ParentForm.__del__() return False return rc
Update the progress meter for a form :param form: class ProgressBar :param value: int :return: True if not cancelled, OK....False if Error
### Input: Update the progress meter for a form :param form: class ProgressBar :param value: int :return: True if not cancelled, OK....False if Error ### Response: def _ProgressMeterUpdate(bar, value, text_elem, *args): global _my_windows if bar == None: return False if bar.BarExpired: return False message, w, h = ConvertArgsToSingleString(*args) text_elem.Update(message) bar.CurrentValue = value rc = bar.UpdateBar(value) if value >= bar.MaxValue or not rc: bar.BarExpired = True bar.ParentForm._Close() if rc: _my_windows.Decrement() if bar.ParentForm.RootNeedsDestroying: try: bar.ParentForm.TKroot.destroy() except: pass bar.ParentForm.RootNeedsDestroying = False bar.ParentForm.__del__() return False return rc
def _unparse_entry_record(self, entry): for attr_type in sorted(entry.keys()): for attr_value in entry[attr_type]: self._unparse_attr(attr_type, attr_value)
:type entry: Dict[string, List[string]] :param entry: Dictionary holding an entry
### Input: :type entry: Dict[string, List[string]] :param entry: Dictionary holding an entry ### Response: def _unparse_entry_record(self, entry): for attr_type in sorted(entry.keys()): for attr_value in entry[attr_type]: self._unparse_attr(attr_type, attr_value)
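A self-contained sketch of what an entry-unparsing pass like `_unparse_entry_record` produces, assuming a minimal LDIF-style `attr: value` rendering (a real LDIF writer also handles base64 encoding and line folding; the standalone `unparse_entry_record` name and its exact output format here are illustrative):

```python
def unparse_entry_record(entry):
    """Render {attr_type: [values]} as sorted, LDIF-style 'attr: value' lines."""
    lines = []
    for attr_type in sorted(entry.keys()):
        for attr_value in entry[attr_type]:
            lines.append("%s: %s" % (attr_type, attr_value))
    return "\n".join(lines)

record = {"mail": ["alice@example.org"], "cn": ["Alice"]}
print(unparse_entry_record(record))
```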
def parse_headers(self, req, name, field): return get_value(req.headers, name, field)
Pull a value from the header data.
### Input: Pull a value from the header data. ### Response: def parse_headers(self, req, name, field): return get_value(req.headers, name, field)
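The `get_value` helper that `parse_headers` delegates to is not shown; a minimal sketch of the contract it presumably follows — a lookup that returns a `missing` sentinel instead of raising, as webargs does (the sentinel name and the pass-through `field` parameter are assumptions):

```python
missing = object()  # sentinel meaning "the value was not provided"

def get_value(data, name, field=None):
    """Pull `name` from a dict-like container of request data.

    `field` is accepted for signature compatibility but unused in this sketch.
    """
    return data.get(name, missing)

headers = {"X-Api-Key": "abc123"}
assert get_value(headers, "X-Api-Key", None) == "abc123"
assert get_value(headers, "X-Absent", None) is missing
```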
def send_order( self, code=None, amount=None, time=None, towards=None, price=None, money=None, order_model=None, amount_model=None, *args, **kwargs ): wrong_reason = None assert code is not None and time is not None and towards is not None and order_model is not None and amount_model is not None date = str(time)[0:10] if len(str(time)) == 19 else str(time) time = str(time) if len(str(time)) == 19 else .format( str(time)[0:10] ) if self.allow_margin: amount = amount if amount_model is AMOUNT_MODEL.BY_AMOUNT else int( money / ( self.market_preset.get_unit(code) * self.market_preset.get_frozen(code) * price * (1 + self.commission_coeff) ) / 100 ) * 100 else: amount = amount if amount_model is AMOUNT_MODEL.BY_AMOUNT else int( money / (price * (1 + self.commission_coeff)) / 100 ) * 100 if self.allow_margin: money = amount * price * self.market_preset.get_unit(code)*self.market_preset.get_frozen(code) * \ (1+self.commission_coeff) if amount_model is AMOUNT_MODEL.BY_AMOUNT else money else: money = amount * price * \ (1+self.commission_coeff) if amount_model is AMOUNT_MODEL.BY_AMOUNT else money flag = False assert (int(towards) != 0) if int(towards) in [1, 2, 3]: if self.cash_available >= money: if self.market_type == MARKET_TYPE.STOCK_CN: amount = int(amount / 100) * 100 self.cash_available -= money flag = True if self.running_environment == RUNNING_ENVIRONMENT.TZERO: if abs(self.buy_available.get(code, 0)) >= amount: flag = True self.cash_available -= money self.buy_available[code] -= amount else: flag = False wrong_reason = if self.market_type == MARKET_TYPE.FUTURE_CN: if towards == 3: _hold = self.sell_available.get(code, 0) _money = abs( float(amount * price * (1 + self.commission_coeff)) ) print(_hold) if self.cash_available >= _money: if _hold < 0: self.cash_available -= _money flag = True else: wrong_reason = else: wrong_reason = if towards == 2: self.cash_available -= money flag = True else: wrong_reason = .format( self.cash_available, code, time, amount, towards ) elif int(towards) in [-1, -2, -3]: _hold = self.sell_available.get(code, 0) if _hold >= amount: self.sell_available[code] -= amount flag = True else: if self.allow_sellopen and towards == -2: if self.cash_available >= money: flag = True else: print(, _hold) print(, amount) print(, money) print(, self.cash_available) wrong_reason = "卖空资金不足/不允许裸卖空" else: wrong_reason = "卖出仓位不足" if flag and (amount > 0): _order = QA_Order( user_cookie=self.user_cookie, strategy=self.strategy_name, frequence=self.frequence, account_cookie=self.account_cookie, code=code, market_type=self.market_type, date=date, datetime=time, sending_time=time, callback=self.receive_deal, amount=amount, price=price, order_model=order_model, towards=towards, money=money, broker=self.broker, amount_model=amount_model, commission_coeff=self.commission_coeff, tax_coeff=self.tax_coeff, *args, **kwargs ) self.datetime = time self.orders.insert_order(_order) return _order else: print( .format( code, time, amount, towards ) ) print(wrong_reason) return False
ATTENTION CHANGELOG 1.0.28 修改了Account的send_order方法, 区分按数量下单和按金额下单两种方式 - AMOUNT_MODEL.BY_PRICE ==> AMOUNT_MODEL.BY_MONEY # 按金额下单 - AMOUNT_MODEL.BY_AMOUNT # 按数量下单 在按金额下单的时候,应给予 money参数 在按数量下单的时候,应给予 amount参数 python code: Account=QA.QA_Account() Order_bymoney=Account.send_order(code='000001', price=11, money=0.3*Account.cash_available, time='2018-05-09', towards=QA.ORDER_DIRECTION.BUY, order_model=QA.ORDER_MODEL.MARKET, amount_model=QA.AMOUNT_MODEL.BY_MONEY ) Order_byamount=Account.send_order(code='000001', price=11, amount=100, time='2018-05-09', towards=QA.ORDER_DIRECTION.BUY, order_model=QA.ORDER_MODEL.MARKET, amount_model=QA.AMOUNT_MODEL.BY_AMOUNT ) :param code: 证券代码 :param amount: 买卖 数量多数股 :param time: Timestamp 对象 下单时间 :param towards: int , towards>0 买入 towards<0 卖出 :param price: 买入,卖出 标的证券的价格 :param money: 买卖 价格 :param order_model: 类型 QA.ORDER_MODE :param amount_model:类型 QA.AMOUNT_MODEL :return: QA_Order | False @2018/12/23 send_order 是QA的标准返回, 如需对接其他接口, 只需要对于QA_Order做适配即可 @2018/12/27 在判断账户为期货账户(及 允许双向交易) @2018/12/30 保证金账户的修改 1. 保证金账户冻结的金额 2. 保证金账户的结算 3. 保证金账户的判断
### Input: ATTENTION CHANGELOG 1.0.28 修改了Account的send_order方法, 区分按数量下单和按金额下单两种方式 - AMOUNT_MODEL.BY_PRICE ==> AMOUNT_MODEL.BY_MONEY # 按金额下单 - AMOUNT_MODEL.BY_AMOUNT # 按数量下单 在按金额下单的时候,应给予 money参数 在按数量下单的时候,应给予 amount参数 python code: Account=QA.QA_Account() Order_bymoney=Account.send_order(code='000001', price=11, money=0.3*Account.cash_available, time='2018-05-09', towards=QA.ORDER_DIRECTION.BUY, order_model=QA.ORDER_MODEL.MARKET, amount_model=QA.AMOUNT_MODEL.BY_MONEY ) Order_byamount=Account.send_order(code='000001', price=11, amount=100, time='2018-05-09', towards=QA.ORDER_DIRECTION.BUY, order_model=QA.ORDER_MODEL.MARKET, amount_model=QA.AMOUNT_MODEL.BY_AMOUNT ) :param code: 证券代码 :param amount: 买卖 数量多数股 :param time: Timestamp 对象 下单时间 :param towards: int , towards>0 买入 towards<0 卖出 :param price: 买入,卖出 标的证券的价格 :param money: 买卖 价格 :param order_model: 类型 QA.ORDER_MODE :param amount_model:类型 QA.AMOUNT_MODEL :return: QA_Order | False @2018/12/23 send_order 是QA的标准返回, 如需对接其他接口, 只需要对于QA_Order做适配即可 @2018/12/27 在判断账户为期货账户(及 允许双向交易) @2018/12/30 保证金账户的修改 1. 保证金账户冻结的金额 2. 保证金账户的结算 3. 保证金账户的判断 ### Response: def send_order( self, code=None, amount=None, time=None, towards=None, price=None, money=None, order_model=None, amount_model=None, *args, **kwargs ): wrong_reason = None assert code is not None and time is not None and towards is not None and order_model is not None and amount_model is not None date = str(time)[0:10] if len(str(time)) == 19 else str(time) time = str(time) if len(str(time)) == 19 else .format( str(time)[0:10] ) if self.allow_margin: amount = amount if amount_model is AMOUNT_MODEL.BY_AMOUNT else int( money / ( self.market_preset.get_unit(code) * self.market_preset.get_frozen(code) * price * (1 + self.commission_coeff) ) / 100 ) * 100 else: amount = amount if amount_model is AMOUNT_MODEL.BY_AMOUNT else int( money / (price * (1 + self.commission_coeff)) / 100 ) * 100 if self.allow_margin: money = amount * price * self.market_preset.get_unit(code)*self.market_preset.get_frozen(code) * \ (1+self.commission_coeff) if amount_model is AMOUNT_MODEL.BY_AMOUNT else money else: money = amount * price * \ (1+self.commission_coeff) if amount_model is AMOUNT_MODEL.BY_AMOUNT else money flag = False assert (int(towards) != 0) if int(towards) in [1, 2, 3]: if self.cash_available >= money: if self.market_type == MARKET_TYPE.STOCK_CN: amount = int(amount / 100) * 100 self.cash_available -= money flag = True if self.running_environment == RUNNING_ENVIRONMENT.TZERO: if abs(self.buy_available.get(code, 0)) >= amount: flag = True self.cash_available -= money self.buy_available[code] -= amount else: flag = False wrong_reason = if self.market_type == MARKET_TYPE.FUTURE_CN: if towards == 3: _hold = self.sell_available.get(code, 0) _money = abs( float(amount * price * (1 + self.commission_coeff)) ) print(_hold) if self.cash_available >= _money: if _hold < 0: self.cash_available -= _money flag = True else: wrong_reason = else: wrong_reason = if towards == 2: self.cash_available -= money flag = True else: wrong_reason = .format( self.cash_available, code, time, amount, towards ) elif int(towards) in [-1, -2, -3]: _hold = self.sell_available.get(code, 0) if _hold >= amount: self.sell_available[code] -= amount flag = True else: if self.allow_sellopen and towards == -2: if self.cash_available >= money: flag = True else: print(, _hold) print(, amount) print(, money) print(, self.cash_available) wrong_reason = "卖空资金不足/不允许裸卖空" else: wrong_reason = "卖出仓位不足" if flag and (amount > 0): _order = QA_Order( user_cookie=self.user_cookie, strategy=self.strategy_name, frequence=self.frequence, account_cookie=self.account_cookie, code=code, market_type=self.market_type, date=date, datetime=time, sending_time=time, callback=self.receive_deal, amount=amount, price=price, order_model=order_model, towards=towards, money=money, broker=self.broker, amount_model=amount_model, commission_coeff=self.commission_coeff, tax_coeff=self.tax_coeff, *args, **kwargs ) self.datetime = time self.orders.insert_order(_order) return _order else: print( .format( code, time, amount, towards ) ) print(wrong_reason) return False
def build_reaction_from_string(self, reaction_str, verbose=True, fwd_arrow=None, rev_arrow=None, reversible_arrow=None, term_split="+"): forward_arrow_finder = _forward_arrow_finder if fwd_arrow is None \ else re.compile(re.escape(fwd_arrow)) reverse_arrow_finder = _reverse_arrow_finder if rev_arrow is None \ else re.compile(re.escape(rev_arrow)) reversible_arrow_finder = _reversible_arrow_finder \ if reversible_arrow is None \ else re.compile(re.escape(reversible_arrow)) if self._model is None: warn("no model found") model = None else: model = self._model found_compartments = compartment_finder.findall(reaction_str) if len(found_compartments) == 1: compartment = found_compartments[0] reaction_str = compartment_finder.sub("", reaction_str) else: compartment = "" arrow_match = reversible_arrow_finder.search(reaction_str) if arrow_match is not None: self.lower_bound = -1000 self.upper_bound = 1000 else: arrow_match = forward_arrow_finder.search(reaction_str) if arrow_match is not None: self.upper_bound = 1000 self.lower_bound = 0 else: arrow_match = reverse_arrow_finder.search(reaction_str) if arrow_match is None: raise ValueError("no suitable arrow found in %s" % reaction_str) else: self.upper_bound = 0 self.lower_bound = -1000 reactant_str = reaction_str[:arrow_match.start()].strip() product_str = reaction_str[arrow_match.end():].strip() self.subtract_metabolites(self.metabolites, combine=True) for substr, factor in ((reactant_str, -1), (product_str, 1)): if len(substr) == 0: continue for term in substr.split(term_split): term = term.strip() if term.lower() == "nothing": continue if " " in term: num_str, met_id = term.split() num = float(num_str.lstrip("(").rstrip(")")) * factor else: met_id = term num = factor met_id += compartment try: met = model.metabolites.get_by_id(met_id) except KeyError: if verbose: print("unknown metabolite %s created" % met_id) met = Metabolite(met_id) self.add_metabolites({met: num})
Builds reaction from reaction equation reaction_str using parser Takes a string and using the specifications supplied in the optional arguments infers a set of metabolites, metabolite compartments and stoichiometries for the reaction. It also infers the reversibility of the reaction from the reaction arrow. Changes to the associated model are reverted upon exit when using the model as a context. Parameters ---------- reaction_str : string a string containing a reaction formula (equation) verbose: bool setting verbosity of function fwd_arrow : re.compile for forward irreversible reaction arrows rev_arrow : re.compile for backward irreversible reaction arrows reversible_arrow : re.compile for reversible reaction arrows term_split : string dividing individual metabolite entries
### Input: Builds reaction from reaction equation reaction_str using parser Takes a string and using the specifications supplied in the optional arguments infers a set of metabolites, metabolite compartments and stoichiometries for the reaction. It also infers the reversibility of the reaction from the reaction arrow. Changes to the associated model are reverted upon exit when using the model as a context. Parameters ---------- reaction_str : string a string containing a reaction formula (equation) verbose: bool setting verbosity of function fwd_arrow : re.compile for forward irreversible reaction arrows rev_arrow : re.compile for backward irreversible reaction arrows reversible_arrow : re.compile for reversible reaction arrows term_split : string dividing individual metabolite entries ### Response: def build_reaction_from_string(self, reaction_str, verbose=True, fwd_arrow=None, rev_arrow=None, reversible_arrow=None, term_split="+"): forward_arrow_finder = _forward_arrow_finder if fwd_arrow is None \ else re.compile(re.escape(fwd_arrow)) reverse_arrow_finder = _reverse_arrow_finder if rev_arrow is None \ else re.compile(re.escape(rev_arrow)) reversible_arrow_finder = _reversible_arrow_finder \ if reversible_arrow is None \ else re.compile(re.escape(reversible_arrow)) if self._model is None: warn("no model found") model = None else: model = self._model found_compartments = compartment_finder.findall(reaction_str) if len(found_compartments) == 1: compartment = found_compartments[0] reaction_str = compartment_finder.sub("", reaction_str) else: compartment = "" arrow_match = reversible_arrow_finder.search(reaction_str) if arrow_match is not None: self.lower_bound = -1000 self.upper_bound = 1000 else: arrow_match = forward_arrow_finder.search(reaction_str) if arrow_match is not None: self.upper_bound = 1000 self.lower_bound = 0 else: arrow_match = reverse_arrow_finder.search(reaction_str) if arrow_match is None: raise ValueError("no suitable arrow found in %s" % reaction_str) else: self.upper_bound = 0 self.lower_bound = -1000 reactant_str = reaction_str[:arrow_match.start()].strip() product_str = reaction_str[arrow_match.end():].strip() self.subtract_metabolites(self.metabolites, combine=True) for substr, factor in ((reactant_str, -1), (product_str, 1)): if len(substr) == 0: continue for term in substr.split(term_split): term = term.strip() if term.lower() == "nothing": continue if " " in term: num_str, met_id = term.split() num = float(num_str.lstrip("(").rstrip(")")) * factor else: met_id = term num = factor met_id += compartment try: met = model.metabolites.get_by_id(met_id) except KeyError: if verbose: print("unknown metabolite %s created" % met_id) met = Metabolite(met_id) self.add_metabolites({met: num})
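The arrow-detection and term-splitting steps of the reaction parser above can be sketched standalone. The arrow patterns, the ±1000 default bounds, and the `parse_reaction` name are illustrative assumptions, not cobra's exact implementation:

```python
import re

def parse_reaction(reaction_str, term_split="+"):
    """Infer (lower, upper) bounds from the arrow and a stoichiometry dict
    from the terms on each side. Negative coefficients are reactants."""
    arrows = [("<=>", (-1000, 1000)), ("-->", (0, 1000)), ("<--", (-1000, 0))]
    for arrow, bounds in arrows:
        m = re.search(re.escape(arrow), reaction_str)
        if m:
            break
    else:
        raise ValueError("no suitable arrow found in %s" % reaction_str)
    stoich = {}
    for substr, factor in ((reaction_str[:m.start()], -1),
                           (reaction_str[m.end():], 1)):
        for term in substr.split(term_split):
            term = term.strip()
            if not term or term.lower() == "nothing":
                continue
            if " " in term:  # explicit coefficient, e.g. "2 A"
                num_str, met_id = term.split()
                num = float(num_str) * factor
            else:            # implicit coefficient of 1
                met_id, num = term, factor
            stoich[met_id] = stoich.get(met_id, 0) + num
    return bounds, stoich
```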
def _prep_snippet_for_pandoc(self, latex_text): replace_cite = CitationLinker(self.bib_db) latex_text = replace_cite(latex_text) return latex_text
Process a LaTeX snippet of content for better transformation with pandoc. Currently runs the CitationLinker to convert BibTeX citations to href links.
### Input: Process a LaTeX snippet of content for better transformation with pandoc. Currently runs the CitationLinker to convert BibTeX citations to href links. ### Response: def _prep_snippet_for_pandoc(self, latex_text): replace_cite = CitationLinker(self.bib_db) latex_text = replace_cite(latex_text) return latex_text
def run_breiman2(): x, y = build_sample_ace_problem_breiman2(500) ace_solver = ace.ACESolver() ace_solver.specify_data_set(x, y) ace_solver.solve() try: plt = ace.plot_transforms(ace_solver, None) except ImportError: pass plt.subplot(1, 2, 1) phi = numpy.sin(2.0 * numpy.pi * x[0]) plt.plot(x[0], phi, label=) plt.legend() plt.subplot(1, 2, 2) y = numpy.exp(phi) plt.plot(y, phi, label=) plt.legend(loc=) plt.savefig() return ace_solver
Run Breiman's other sample problem.
### Input: Run Breiman's other sample problem. ### Response: def run_breiman2(): x, y = build_sample_ace_problem_breiman2(500) ace_solver = ace.ACESolver() ace_solver.specify_data_set(x, y) ace_solver.solve() try: plt = ace.plot_transforms(ace_solver, None) except ImportError: pass plt.subplot(1, 2, 1) phi = numpy.sin(2.0 * numpy.pi * x[0]) plt.plot(x[0], phi, label=) plt.legend() plt.subplot(1, 2, 2) y = numpy.exp(phi) plt.plot(y, phi, label=) plt.legend(loc=) plt.savefig() return ace_solver
def _cast_output_to_type(value, typ): if typ == : return bool(value) if typ == : return int(value) return value
cast the value depending on the terraform type
### Input: cast the value depending on the terraform type ### Response: def _cast_output_to_type(value, typ): if typ == : return bool(value) if typ == : return int(value) return value
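The type-name string literals in `_cast_output_to_type` were lost in extraction; a runnable sketch of the same dispatch, where the "bool" and "int" type names are assumptions inferred from the casts:

```python
def cast_output_to_type(value, typ):
    """Cast a Terraform output value based on its declared type string.

    The "bool" and "int" type names are illustrative assumptions;
    anything else is passed through unchanged.
    """
    if typ == "bool":
        return bool(value)
    if typ == "int":
        return int(value)
    return value

assert cast_output_to_type("3", "int") == 3
assert cast_output_to_type(1, "bool") is True
assert cast_output_to_type("keep", "string") == "keep"
```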
def decodeMessage(self, data): message = proto_pb2.Msg() message.ParseFromString(data) return message
Decode a protobuf message into a list of Tensor events
### Input: Decode a protobuf message into a list of Tensor events ### Response: def decodeMessage(self, data): message = proto_pb2.Msg() message.ParseFromString(data) return message
def template_global(self, arg: Optional[Callable] = None, *, name: Optional[str] = None, pass_context: bool = False, inject: Optional[Union[bool, Iterable[str]]] = None, safe: bool = False, ) -> Callable: def wrapper(fn): fn = _inject(fn, inject) if safe: fn = _make_safe(fn) if pass_context: fn = jinja2.contextfunction(fn) self._defer(lambda app: app.add_template_global(fn, name=name)) return fn if callable(arg): return wrapper(arg) return wrapper
Decorator to mark a function as a Jinja template global (tag). :param name: The name of the tag, if different from the function name. :param pass_context: Whether or not to pass the template context into the tag. If ``True``, the first argument must be the context. :param inject: Whether or not this tag needs any dependencies injected. :param safe: Whether or not to mark the output of this tag as html-safe.
### Input: Decorator to mark a function as a Jinja template global (tag). :param name: The name of the tag, if different from the function name. :param pass_context: Whether or not to pass the template context into the tag. If ``True``, the first argument must be the context. :param inject: Whether or not this tag needs any dependencies injected. :param safe: Whether or not to mark the output of this tag as html-safe. ### Response: def template_global(self, arg: Optional[Callable] = None, *, name: Optional[str] = None, pass_context: bool = False, inject: Optional[Union[bool, Iterable[str]]] = None, safe: bool = False, ) -> Callable: def wrapper(fn): fn = _inject(fn, inject) if safe: fn = _make_safe(fn) if pass_context: fn = jinja2.contextfunction(fn) self._defer(lambda app: app.add_template_global(fn, name=name)) return fn if callable(arg): return wrapper(arg) return wrapper
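The `_defer` call in `template_global` postpones registration until an app exists. A toy, self-contained sketch of that deferred-registration pattern — `Jinja2Extension`, `FakeApp`, and `init_app` here are illustrative stand-ins, not the real extension's API:

```python
from typing import Callable, List, Optional

class Jinja2Extension:
    """Decorating only records a callback; init_app replays them."""

    def __init__(self) -> None:
        self._deferred: List[Callable] = []

    def _defer(self, cb: Callable) -> None:
        self._deferred.append(cb)

    def template_global(self, arg: Optional[Callable] = None, *,
                        name: Optional[str] = None) -> Callable:
        def wrapper(fn):
            # defer until an app is available, as in the method above
            self._defer(lambda app: app.add_template_global(fn, name=name))
            return fn
        return wrapper(arg) if callable(arg) else wrapper

    def init_app(self, app) -> None:
        for cb in self._deferred:
            cb(app)

class FakeApp:
    def __init__(self) -> None:
        self.globals = {}
    def add_template_global(self, fn, name=None):
        self.globals[name or fn.__name__] = fn

ext = Jinja2Extension()

@ext.template_global            # bare-decorator form
def shout(s):
    return s.upper()

@ext.template_global(name="lower")  # keyword form
def quiet(s):
    return s.lower()

app = FakeApp()
ext.init_app(app)
assert app.globals["shout"]("hi") == "HI"
assert app.globals["lower"]("HI") == "hi"
```

Supporting both `@ext.template_global` and `@ext.template_global(...)` is why the method checks `callable(arg)` before deciding whether to decorate immediately or return the wrapper.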
def gen_states(self, monomer_data, parent): states = {} for atoms in monomer_data: for atom in atoms: state = if not atom[3] else atom[3] if state not in states: states[state] = OrderedDict() states[state][atom[2]] = Atom( tuple(atom[8:11]), atom[13], atom_id=atom[1], res_label=atom[2], occupancy=atom[11], bfactor=atom[12], charge=atom[14], state=state, parent=parent) states_len = [(k, len(x)) for k, x in states.items()] if (len(states) > 1) and (len(set([x[1] for x in states_len])) > 1): for t_state, t_state_d in states.items(): new_s_dict = OrderedDict() for k, v in states[sorted(states_len, key=lambda x: x[0])[0][0]].items(): if k not in t_state_d: c_atom = Atom( v._vector, v.element, atom_id=v.id, res_label=v.res_label, occupancy=v.tags[], bfactor=v.tags[], charge=v.tags[], state=t_state[0], parent=v.parent) new_s_dict[k] = c_atom else: new_s_dict[k] = t_state_d[k] states[t_state] = new_s_dict return states
Generates the `states` dictionary for a `Monomer`. monomer_data : list A list of atom data parsed from the input PDB. parent : ampal.Monomer `Monomer` used to assign `parent` on created `Atoms`.
### Input: Generates the `states` dictionary for a `Monomer`. monomer_data : list A list of atom data parsed from the input PDB. parent : ampal.Monomer `Monomer` used to assign `parent` on created `Atoms`. ### Response: def gen_states(self, monomer_data, parent): states = {} for atoms in monomer_data: for atom in atoms: state = if not atom[3] else atom[3] if state not in states: states[state] = OrderedDict() states[state][atom[2]] = Atom( tuple(atom[8:11]), atom[13], atom_id=atom[1], res_label=atom[2], occupancy=atom[11], bfactor=atom[12], charge=atom[14], state=state, parent=parent) states_len = [(k, len(x)) for k, x in states.items()] if (len(states) > 1) and (len(set([x[1] for x in states_len])) > 1): for t_state, t_state_d in states.items(): new_s_dict = OrderedDict() for k, v in states[sorted(states_len, key=lambda x: x[0])[0][0]].items(): if k not in t_state_d: c_atom = Atom( v._vector, v.element, atom_id=v.id, res_label=v.res_label, occupancy=v.tags[], bfactor=v.tags[], charge=v.tags[], state=t_state[0], parent=v.parent) new_s_dict[k] = c_atom else: new_s_dict[k] = t_state_d[k] states[t_state] = new_s_dict return states
def compare_nouns(self, word1, word2): return self._plequal(word1, word2, self.plural_noun)
compare word1 and word2 for equality regardless of plurality word1 and word2 are to be treated as nouns return values: eq - the strings are equal p:s - word1 is the plural of word2 s:p - word2 is the plural of word1 p:p - word1 and word2 are two different plural forms of the one word False - otherwise
### Input: compare word1 and word2 for equality regardless of plurality word1 and word2 are to be treated as nouns return values: eq - the strings are equal p:s - word1 is the plural of word2 s:p - word2 is the plural of word1 p:p - word1 and word2 are two different plural forms of the one word False - otherwise ### Response: def compare_nouns(self, word1, word2): return self._plequal(word1, word2, self.plural_noun)
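A toy sketch of the `_plequal`-style comparison contract documented above, using a deliberately naive pluralizer — the real inflect library handles irregular forms and the `p:p` case, which this sketch does not:

```python
def plural_noun(word):
    """Naive pluralizer for illustration only."""
    return word + "es" if word.endswith(("s", "x")) else word + "s"

def compare_nouns(word1, word2):
    """Return 'eq', 'p:s', 's:p', or False, mirroring the contract above."""
    if word1 == word2:
        return "eq"
    if word1 == plural_noun(word2):
        return "p:s"   # word1 is the plural of word2
    if word2 == plural_noun(word1):
        return "s:p"   # word2 is the plural of word1
    return False       # the real _plequal also detects 'p:p'

assert compare_nouns("cat", "cat") == "eq"
assert compare_nouns("cats", "cat") == "p:s"
assert compare_nouns("box", "boxes") == "s:p"
assert compare_nouns("cat", "dog") is False
```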
def export_png(obj, filename=None, height=None, width=None, webdriver=None, timeout=5): image = get_screenshot_as_png(obj, height=height, width=width, driver=webdriver, timeout=timeout) if filename is None: filename = default_filename("png") if image.width == 0 or image.height == 0: raise ValueError("unable to save an empty image") image.save(filename) return abspath(filename)
Export the ``LayoutDOM`` object or document as a PNG. If the filename is not given, it is derived from the script name (e.g. ``/foo/myplot.py`` will create ``/foo/myplot.png``) Args: obj (LayoutDOM or Document) : a Layout (Row/Column), Plot or Widget object or Document to export. filename (str, optional) : filename to save document under (default: None) If None, infer from the filename. height (int) : the desired height of the exported layout obj only if it's a Plot instance. Otherwise the height kwarg is ignored. width (int) : the desired width of the exported layout obj only if it's a Plot instance. Otherwise the width kwarg is ignored. webdriver (selenium.webdriver) : a selenium webdriver instance to use to export the image. timeout (int) : the maximum amount of time (in seconds) to wait for Bokeh to initialize (default: 5) (Added in 1.1.1). Returns: filename (str) : the filename where the static file is saved. If you would like to access an Image object directly, rather than save a file to disk, use the lower-level :func:`~bokeh.io.export.get_screenshot_as_png` function. .. warning:: Responsive sizing_modes may generate layouts with unexpected size and aspect ratios. It is recommended to use the default ``fixed`` sizing mode.
### Input: Export the ``LayoutDOM`` object or document as a PNG. If the filename is not given, it is derived from the script name (e.g. ``/foo/myplot.py`` will create ``/foo/myplot.png``) Args: obj (LayoutDOM or Document) : a Layout (Row/Column), Plot or Widget object or Document to export. filename (str, optional) : filename to save document under (default: None) If None, infer from the filename. height (int) : the desired height of the exported layout obj only if it's a Plot instance. Otherwise the height kwarg is ignored. width (int) : the desired width of the exported layout obj only if it's a Plot instance. Otherwise the width kwarg is ignored. webdriver (selenium.webdriver) : a selenium webdriver instance to use to export the image. timeout (int) : the maximum amount of time (in seconds) to wait for Bokeh to initialize (default: 5) (Added in 1.1.1). Returns: filename (str) : the filename where the static file is saved. If you would like to access an Image object directly, rather than save a file to disk, use the lower-level :func:`~bokeh.io.export.get_screenshot_as_png` function. .. warning:: Responsive sizing_modes may generate layouts with unexpected size and aspect ratios. It is recommended to use the default ``fixed`` sizing mode. ### Response: def export_png(obj, filename=None, height=None, width=None, webdriver=None, timeout=5): image = get_screenshot_as_png(obj, height=height, width=width, driver=webdriver, timeout=timeout) if filename is None: filename = default_filename("png") if image.width == 0 or image.height == 0: raise ValueError("unable to save an empty image") image.save(filename) return abspath(filename)