def count(self, object_class=None, params=None, **kwargs):  # pragma: no cover
    """Retrieve a count of directory entries.

    Retrieve a count of all directory entries that belong to the
    identified objectClass. The count is limited to a single domain.

    Args:
        object_class (str): Directory object class.
        params (dict): Payload/request dictionary.
        **kwargs: Supported :meth:`~pancloud.httpclient.HTTPClient.request` parameters.

    Returns:
        requests.Response: Requests Response() object.

    Examples:
        See the usage sketch following this function.

    """
    path = "/directory-sync-service/v1/{}/count".format(object_class)
    r = self._httpclient.request(
        method="GET",
        path=path,
        url=self.url,
        params=params,
        **kwargs
    )
    return r

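A minimal usage sketch. The DirectorySyncService client class, its constructor argument, and the endpoint URL are assumptions about the pancloud SDK this method appears to come from, not facts taken from the code above:

    from pancloud import DirectorySyncService  # assumed import path

    ds = DirectorySyncService(url="https://api.us.paloaltonetworks.com")  # hypothetical endpoint
    r = ds.count(object_class="computers")  # GET /directory-sync-service/v1/computers/count
    print(r.status_code, r.text)
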
def add_fields(self, field_dict):
    """Add a mapping of field names to PayloadField instances.

    :API: public
    """
    for key, field in field_dict.items():
        self.add_field(key, field)

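A brief usage sketch; Payload and PrimitiveField are assumed here to be the pants build-system payload API this method belongs to, and both import paths are assumptions:

    from pants.base.payload import Payload            # assumed import path
    from pants.base.payload_field import PrimitiveField  # assumed import path

    payload = Payload()
    payload.add_fields({
        'sources': PrimitiveField(('app.py', 'util.py')),
        'strict_deps': PrimitiveField(True),
    })
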
def get_info(handle):
    """Get information about this current console window (for Microsoft Windows only).

    Raises IOError if attempt to get information fails (if there is no console window).

    Don't forget to call _WindowsCSBI.initialize() once in your application before
    calling this method.

    Positional arguments:
    handle -- either _WindowsCSBI.HANDLE_STDERR or _WindowsCSBI.HANDLE_STDOUT.

    Returns:
    Dictionary with different integer values. Keys are:
        buffer_width -- width of the buffer (Screen Buffer Size in cmd.exe layout tab).
        buffer_height -- height of the buffer (Screen Buffer Size in cmd.exe layout tab).
        terminal_width -- width of the terminal window.
        terminal_height -- height of the terminal window.
        bg_color -- current background color (http://msdn.microsoft.com/en-us/library/windows/desktop/ms682088).
        fg_color -- current text color code.
    """
    # Query Win32 API.
    csbi = _WindowsCSBI.CSBI()
    try:
        if not _WindowsCSBI.WINDLL.kernel32.GetConsoleScreenBufferInfo(handle, ctypes.byref(csbi)):
            raise IOError('Unable to get console screen buffer info from win32 API.')
    except ctypes.ArgumentError:
        raise IOError('Unable to get console screen buffer info from win32 API.')

    # Parse data.
    result = dict(
        buffer_width=int(csbi.dwSize.X - 1),
        buffer_height=int(csbi.dwSize.Y),
        terminal_width=int(csbi.srWindow.Right - csbi.srWindow.Left),
        terminal_height=int(csbi.srWindow.Bottom - csbi.srWindow.Top),
        bg_color=int(csbi.wAttributes & 240),
        fg_color=int(csbi.wAttributes % 16),
    )
    return result

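A usage sketch, assuming the surrounding _WindowsCSBI helper class is in scope and the process is running on Windows with a console attached:

    _WindowsCSBI.initialize()  # one-time setup, as the docstring requires
    info = get_info(_WindowsCSBI.HANDLE_STDOUT)
    print('terminal: {terminal_width}x{terminal_height}'.format(**info))
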
def usernames(urls):
    '''Take a mapping `urls` of normalized URLs or file paths to occurrence
    counts and attempt to extract usernames. Returns a StringCounter mapping
    each username to its count.
    '''
    usernames = StringCounter()
    for url, count in urls.items():
        uparse = urlparse(url)
        path = uparse.path
        hostname = uparse.hostname
        m = username_re.match(path)
        if m:
            usernames[m.group('username')] += count
        elif hostname in ['twitter.com', 'www.facebook.com']:
            usernames[path.lstrip('/')] += count
    return usernames

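A usage sketch, assuming StringCounter is the Counter-like class the module already uses (exact import path omitted); whether a given URL is handled by the regex branch or the hostname branch depends on the module's username_re pattern:

    urls = StringCounter()
    urls['https://twitter.com/example_user'] += 3
    urls['https://www.facebook.com/example.page'] += 1

    counts = usernames(urls)
    print(dict(counts))  # e.g. {'example_user': 3, 'example.page': 1}
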
def _load_properties(self):
    """Load User properties from Flickr."""
    method = 'flickr.people.getInfo'
    data = _doget(method, user_id=self.__id)

    self.__loaded = True

    person = data.rsp.person

    self.__isadmin = person.isadmin
    self.__ispro = person.ispro
    self.__icon_server = person.iconserver
    if int(person.iconserver) > 0:
        self.__icon_url = 'http://photos%s.flickr.com/buddyicons/%s.jpg' \
            % (person.iconserver, self.__id)
    else:
        self.__icon_url = 'http://www.flickr.com/images/buddyicon.jpg'

    self.__username = person.username.text
    self.__realname = person.realname.text
    self.__location = person.location.text
    self.__photos_firstdate = person.photos.firstdate.text
    self.__photos_firstdatetaken = person.photos.firstdatetaken.text
    self.__photos_count = person.photos.count.text

def build(self, words):
    """Construct dictionary DAWG from tokenized words."""
    words = [self._normalize(tokens) for tokens in words]
    self._dawg = dawg.CompletionDAWG(words)
    self._loaded_model = True

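For context, a standalone sketch of the dawg.CompletionDAWG API used above (from the DAWG package on PyPI); plain strings stand in for whatever self._normalize returns:

    import dawg

    d = dawg.CompletionDAWG([u'new york', u'newark', u'san francisco'])
    print(d.keys(u'new'))   # [u'new york', u'newark']
    print(u'newark' in d)   # True
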
def make_processor(func, arg=None):
    """
    A pre-called processor that wraps the execution of the target callable ``func``.

    This is useful for when ``func`` is a third party mapping function that can take
    your column's value and return an expected result, but doesn't understand all of
    the extra kwargs that get sent to processor callbacks. Because this helper proxies
    access to ``func``, it can hold back the extra kwargs for a successful call.

    ``func`` will be called once per object record, a single positional argument being
    the column data retrieved via the column's
    :py:attr:`~datatableview.columns.Column.sources`

    An optional ``arg`` may be given, which will be forwarded as a second positional
    argument to ``func``. This was originally intended to simplify using Django
    template filter functions as ``func``. If you need to send more arguments,
    consider wrapping your ``func`` in a ``functools.partial``, and use that as
    ``func`` instead.
    """
    def helper(instance, *args, **kwargs):
        value = kwargs.get('default_value')
        if value is None:
            value = instance
        if arg is not None:
            extra_arg = [arg]
        else:
            extra_arg = []
        return func(value, *extra_arg)
    return helper

def get_api(profile=None, config_file=None, requirements=None):
    '''
    Generate a datafs.DataAPI object from a config profile

    ``get_api`` generates a DataAPI object based on a pre-configured datafs
    profile specified in your datafs config file.

    To create a datafs config file, use the command line tool
    ``datafs configure --helper`` or export an existing DataAPI object with
    :py:meth:`datafs.ConfigFile.write_config_from_api`

    Parameters
    ----------
    profile : str
        (optional) name of a profile in your datafs config file. If profile
        is not provided, the default profile specified in the file will be
        used.

    config_file : str or file
        (optional) path to your datafs configuration file. By default,
        get_api uses your OS's default datafs application directory.

    Examples
    --------

    The following specifies a simple API with a MongoDB manager and a
    temporary storage service:

    .. code-block:: python

        >>> try:
        ...     from StringIO import StringIO
        ... except ImportError:
        ...     from io import StringIO
        ...
        >>> import tempfile
        >>> tempdir = tempfile.mkdtemp()
        >>>
        >>> config_file = StringIO("""
        ... default-profile: my-data
        ... profiles:
        ...     my-data:
        ...         manager:
        ...             class: MongoDBManager
        ...             kwargs:
        ...                 database_name: 'MyDatabase'
        ...                 table_name: 'DataFiles'
        ...
        ...         authorities:
        ...             local:
        ...                 service: OSFS
        ...                 args: ['{}']
        ... """.format(tempdir))
        >>>
        >>> # This file can be read in using the datafs.get_api helper function
        ...
        >>> api = get_api(profile='my-data', config_file=config_file)
        >>> api.manager.create_archive_table(
        ...     'DataFiles',
        ...     raise_on_err=False)
        >>>
        >>> archive = api.create(
        ...     'my_first_archive',
        ...     metadata=dict(description='My test data archive'),
        ...     raise_on_err=False)
        >>>
        >>> with archive.open('w+') as f:
        ...     res = f.write(u'hello!')
        ...
        >>> with archive.open('r') as f:
        ...     print(f.read())
        ...
        hello!
        >>>
        >>> # clean up
        ...
        >>> archive.delete()
        >>> import shutil
        >>> shutil.rmtree(tempdir)
    '''

    config = ConfigFile(config_file=config_file)
    config.read_config()

    if profile is None:
        profile = config.config['default-profile']

    profile_config = config.get_profile_config(profile)

    default_versions = {}

    if requirements is None:
        requirements = config.config.get('requirements', None)

    if requirements is not None and not os.path.isfile(requirements):
        for reqline in re.split(r'[\r\n;]+', requirements):
            if re.search(r'^\s*$', reqline):
                continue

            archive, version = _parse_requirement(reqline)
            default_versions[archive] = version

    else:
        if requirements is None:
            requirements = 'requirements_data.txt'

        if os.path.isfile(requirements):
            with open_filelike(requirements, 'r') as reqfile:
                for reqline in reqfile.readlines():
                    if re.search(r'^\s*$', reqline):
                        continue

                    archive, version = _parse_requirement(reqline)
                    default_versions[archive] = version

    api = APIConstructor.generate_api_from_config(profile_config)
    api.default_versions = default_versions

    APIConstructor.attach_manager_from_config(api, profile_config)
    APIConstructor.attach_services_from_config(api, profile_config)
    APIConstructor.attach_cache_from_config(api, profile_config)

    return api

def scale_and_crop(im, crop_spec):
    """ Scale and Crop. """
    im = im.crop((crop_spec.x, crop_spec.y, crop_spec.x2, crop_spec.y2))
    if crop_spec.width and crop_spec.height:
        im = im.resize((crop_spec.width, crop_spec.height), resample=Image.ANTIALIAS)
    return im

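A self-contained sketch, assuming crop_spec can be any object with x, y, x2, y2, width, and height attributes (a namedtuple works). Note that Image.ANTIALIAS was removed in Pillow 10 in favor of the equivalent Image.LANCZOS, so the function above requires an older Pillow or a one-line edit:

    from collections import namedtuple
    from PIL import Image

    CropSpec = namedtuple('CropSpec', 'x y x2 y2 width height')

    im = Image.new('RGB', (800, 600))
    thumb = scale_and_crop(im, CropSpec(0, 0, 400, 300, 200, 150))
    print(thumb.size)  # (200, 150)
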
def listFigures(self, walkTrace=tuple(), case=None, element=None):
    """List section figures."""
    if case == 'sectionmain':
        print(walkTrace, self.title)
    if case == 'figure':
        caption, fig = element
        try:
            print(walkTrace, fig._leopardref, caption)
        except AttributeError:
            fig._leopardref = next(self._reportSection._fignr)
            print(walkTrace, fig._leopardref, caption)

def pformat(self):
    """ Pretty string format """
    lines = []
    lines.append(("%s (%s)" % (self.name, self.status)).center(50, "-"))
    lines.append("items: {0:,} ({1:,} bytes)".format(self.item_count, self.size))
    cap = self.consumed_capacity.get("__table__", {})
    read = "Read: " + format_throughput(self.read_throughput, cap.get("read"))
    write = "Write: " + format_throughput(self.write_throughput, cap.get("write"))
    lines.append(read + " " + write)
    if self.decreases_today > 0:
        lines.append("decreases today: %d" % self.decreases_today)

    if self.range_key is None:
        lines.append(str(self.hash_key))
    else:
        lines.append("%s, %s" % (self.hash_key, self.range_key))
    for field in itervalues(self.attrs):
        if field.key_type == "INDEX":
            lines.append(str(field))

    for index_name, gindex in iteritems(self.global_indexes):
        cap = self.consumed_capacity.get(index_name)
        lines.append(gindex.pformat(cap))
    return "\n".join(lines)

def validate_scopes(self, request):
    """
    :param request: OAuthlib request.
    :type request: oauthlib.common.Request
    """
    if not request.scopes:
        request.scopes = utils.scope_to_list(request.scope) or utils.scope_to_list(
            self.request_validator.get_default_scopes(request.client_id, request))
    log.debug('Validating access to scopes %r for client %r (%r).',
              request.scopes, request.client_id, request.client)
    if not self.request_validator.validate_scopes(
            request.client_id, request.scopes, request.client, request):
        raise errors.InvalidScopeError(request=request)

def transform(self, X, **kwargs):
    """
    The transform method is the primary drawing hook for ranking classes.

    Parameters
    ----------
    X : ndarray or DataFrame of shape n x m
        A matrix of n instances with m features

    kwargs : dict
        Pass generic arguments to the drawing method

    Returns
    -------
    X : ndarray
        Typically a transformed matrix, X' is returned. However, this method
        performs no transformation on the original data, instead simply
        ranking the features that are in the input data and returns the
        original data, unmodified.
    """
    self.ranks_ = self.rank(X)
    self.draw(**kwargs)

    # Return the X matrix, unchanged
    return X

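A hedged usage sketch with yellowbrick's Rank2D, one of the ranking visualizers that follows this transform pattern (assuming yellowbrick and numpy are installed):

    import numpy as np
    from yellowbrick.features import Rank2D

    X = np.random.RandomState(0).rand(100, 5)
    viz = Rank2D(algorithm='pearson')
    viz.fit(X)
    X_out = viz.transform(X)  # draws the ranking; the data comes back unchanged
    assert X_out is X
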
def spell_checker(
        self, text, accept_language=None, pragma=None, user_agent=None,
        client_id=None, client_ip=None, location=None, action_type=None,
        app_name=None, country_code=None, client_machine_name=None, doc_id=None,
        market=None, session_id=None, set_lang=None, user_id=None, mode=None,
        pre_context_text=None, post_context_text=None, custom_headers=None,
        raw=False, **operation_config):
    """The Bing Spell Check API lets you perform contextual grammar and spell
    checking. Bing has developed a web-based spell-checker that leverages machine
    learning and statistical machine translation to dynamically train a constantly
    evolving and highly contextual algorithm. The spell-checker is based on a
    massive corpus of web searches and documents.

    :param text: The text string to check for spelling and grammar errors. The
     combined length of the text string, preContextText string, and postContextText
     string may not exceed 10,000 characters. You may specify this parameter in the
     query string of a GET request or in the body of a POST request. Because of the
     query string length limit, you'll typically use a POST request unless you're
     checking only short strings.
    :type text: str
    :param accept_language: A comma-delimited list of one or more languages to use
     for user interface strings. The list is in decreasing order of preference. For
     additional information, including expected format, see
     [RFC2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). This header
     and the setLang query parameter are mutually exclusive; do not specify both.
     If you set this header, you must also specify the cc query parameter. Bing
     will use the first supported language it finds from the list, and combine that
     language with the cc parameter value to determine the market to return results
     for. If the list does not include a supported language, Bing will find the
     closest language and market that supports the request, and may use an
     aggregated or default market for the results instead of a specified one. You
     should use this header and the cc query parameter only if you specify multiple
     languages; otherwise, you should use the mkt and setLang query parameters. A
     user interface string is a string that's used as a label in a user interface.
     There are very few user interface strings in the JSON response objects. Any
     links in the response objects to Bing.com properties will apply the specified
     language.
    :type accept_language: str
    :param pragma: By default, Bing returns cached content, if available. To
     prevent Bing from returning cached content, set the Pragma header to no-cache
     (for example, Pragma: no-cache).
    :type pragma: str
    :param user_agent: The user agent originating the request. Bing uses the user
     agent to provide mobile users with an optimized experience. Although optional,
     you are strongly encouraged to always specify this header. The user-agent
     should be the same string that any commonly used browser would send. For
     information about user agents, see
     [RFC 2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html).
    :type user_agent: str
    :param client_id: Bing uses this header to provide users with consistent
     behavior across Bing API calls. Bing often flights new features and
     improvements, and it uses the client ID as a key for assigning traffic on
     different flights. If you do not use the same client ID for a user across
     multiple requests, then Bing may assign the user to multiple conflicting
     flights. Being assigned to multiple conflicting flights can lead to an
     inconsistent user experience. For example, if the second request has a
     different flight assignment than the first, the experience may be unexpected.
     Also, Bing can use the client ID to tailor web results to that client ID’s
     search history, providing a richer experience for the user. Bing also uses
     this header to help improve result rankings by analyzing the activity
     generated by a client ID. The relevance improvements help with better quality
     of results delivered by Bing APIs and in turn enables higher click-through
     rates for the API consumer. IMPORTANT: Although optional, you should consider
     this header required. Persisting the client ID across multiple requests for
     the same end user and device combination enables 1) the API consumer to
     receive a consistent user experience, and 2) higher click-through rates via
     better quality of results from the Bing APIs. Each user that uses your
     application on the device must have a unique, Bing generated client ID. If you
     do not include this header in the request, Bing generates an ID and returns it
     in the X-MSEdge-ClientID response header. The only time that you should NOT
     include this header in a request is the first time the user uses your app on
     that device. Use the client ID for each Bing API request that your app makes
     for this user on the device. Persist the client ID. To persist the ID in a
     browser app, use a persistent HTTP cookie to ensure the ID is used across all
     sessions. Do not use a session cookie. For other apps such as mobile apps, use
     the device's persistent storage to persist the ID. The next time the user uses
     your app on that device, get the client ID that you persisted. Bing responses
     may or may not include this header. If the response includes this header,
     capture the client ID and use it for all subsequent Bing requests for the user
     on that device. If you include the X-MSEdge-ClientID, you must not include
     cookies in the request.
    :type client_id: str
    :param client_ip: The IPv4 or IPv6 address of the client device. The IP address
     is used to discover the user's location. Bing uses the location information to
     determine safe search behavior. Although optional, you are encouraged to
     always specify this header and the X-Search-Location header. Do not obfuscate
     the address (for example, by changing the last octet to 0). Obfuscating the
     address results in the location not being anywhere near the device's actual
     location, which may result in Bing serving erroneous results.
    :type client_ip: str
    :param location: A semicolon-delimited list of key/value pairs that describe
     the client's geographical location. Bing uses the location information to
     determine safe search behavior and to return relevant local content. Specify
     the key/value pair as <key>:<value>. The following are the keys that you use
     to specify the user's location. lat (required): The latitude of the client's
     location, in degrees. The latitude must be greater than or equal to -90.0 and
     less than or equal to +90.0. Negative values indicate southern latitudes and
     positive values indicate northern latitudes. long (required): The longitude of
     the client's location, in degrees. The longitude must be greater than or equal
     to -180.0 and less than or equal to +180.0. Negative values indicate western
     longitudes and positive values indicate eastern longitudes. re (required): The
     radius, in meters, which specifies the horizontal accuracy of the coordinates.
     Pass the value returned by the device's location service. Typical values might
     be 22m for GPS/Wi-Fi, 380m for cell tower triangulation, and 18,000m for
     reverse IP lookup. ts (optional): The UTC UNIX timestamp of when the client
     was at the location. (The UNIX timestamp is the number of seconds since
     January 1, 1970.) head (optional): The client's relative heading or direction
     of travel. Specify the direction of travel as degrees from 0 through 360,
     counting clockwise relative to true north. Specify this key only if the sp key
     is nonzero. sp (optional): The horizontal velocity (speed), in meters per
     second, that the client device is traveling. alt (optional): The altitude of
     the client device, in meters. are (optional): The radius, in meters, that
     specifies the vertical accuracy of the coordinates. Specify this key only if
     you specify the alt key. Although many of the keys are optional, the more
     information that you provide, the more accurate the location results are.
     Although optional, you are encouraged to always specify the user's
     geographical location. Providing the location is especially important if the
     client's IP address does not accurately reflect the user's physical location
     (for example, if the client uses VPN). For optimal results, you should include
     this header and the X-Search-ClientIP header, but at a minimum, you should
     include this header.
    :type location: str
    :param action_type: A string that's used by logging to determine whether the
     request is coming from an interactive session or a page load. The following
     are the possible values. 1) Edit—The request is from an interactive session
     2) Load—The request is from a page load. Possible values include: 'Edit',
     'Load'
    :type action_type: str or
     ~azure.cognitiveservices.language.spellcheck.models.ActionType
    :param app_name: The unique name of your app. The name must be known by Bing.
     Do not include this parameter unless you have previously contacted Bing to get
     a unique app name. To get a unique name, contact your Bing Business
     Development manager.
    :type app_name: str
    :param country_code: A 2-character country code of the country where the
     results come from. This API supports only the United States market. If you
     specify this query parameter, it must be set to us. If you set this parameter,
     you must also specify the Accept-Language header. Bing uses the first
     supported language it finds from the languages list, and combine that language
     with the country code that you specify to determine the market to return
     results for. If the languages list does not include a supported language, Bing
     finds the closest language and market that supports the request, or it may use
     an aggregated or default market for the results instead of a specified one.
     You should use this query parameter and the Accept-Language query parameter
     only if you specify multiple languages; otherwise, you should use the mkt and
     setLang query parameters. This parameter and the mkt query parameter are
     mutually exclusive—do not specify both.
    :type country_code: str
    :param client_machine_name: A unique name of the device that the request is
     being made from. Generate a unique value for each device (the value is
     unimportant). The service uses the ID to help debug issues and improve the
     quality of corrections.
    :type client_machine_name: str
    :param doc_id: A unique ID that identifies the document that the text belongs
     to. Generate a unique value for each document (the value is unimportant). The
     service uses the ID to help debug issues and improve the quality of
     corrections.
    :type doc_id: str
    :param market: The market where the results come from. You are strongly
     encouraged to always specify the market, if known. Specifying the market helps
     Bing route the request and return an appropriate and optimal response. This
     parameter and the cc query parameter are mutually exclusive—do not specify
     both.
    :type market: str
    :param session_id: A unique ID that identifies this user session. Generate a
     unique value for each user session (the value is unimportant). The service
     uses the ID to help debug issues and improve the quality of corrections
    :type session_id: str
    :param set_lang: The language to use for user interface strings. Specify the
     language using the ISO 639-1 2-letter language code. For example, the language
     code for English is EN. The default is EN (English). Although optional, you
     should always specify the language. Typically, you set setLang to the same
     language specified by mkt unless the user wants the user interface strings
     displayed in a different language. This parameter and the Accept-Language
     header are mutually exclusive—do not specify both. A user interface string is
     a string that's used as a label in a user interface. There are few user
     interface strings in the JSON response objects. Also, any links to Bing.com
     properties in the response objects apply the specified language.
    :type set_lang: str
    :param user_id: A unique ID that identifies the user. Generate a unique value
     for each user (the value is unimportant). The service uses the ID to help
     debug issues and improve the quality of corrections.
    :type user_id: str
    :param mode: The type of spelling and grammar checks to perform. The following
     are the possible values (the values are case insensitive). The default is
     Proof. 1) Proof—Finds most spelling and grammar mistakes. 2) Spell—Finds most
     spelling mistakes but does not find some of the grammar errors that Proof
     catches (for example, capitalization and repeated words). Possible values
     include: 'proof', 'spell'
    :type mode: str
    :param pre_context_text: A string that gives context to the text string. For
     example, the text string petal is valid. However, if you set preContextText to
     bike, the context changes and the text string becomes not valid. In this case,
     the API suggests that you change petal to pedal (as in bike pedal). This text
     is not checked for grammar or spelling errors. The combined length of the text
     string, preContextText string, and postContextText string may not exceed
     10,000 characters. You may specify this parameter in the query string of a GET
     request or in the body of a POST request.
    :type pre_context_text: str
    :param post_context_text: A string that gives context to the text string. For
     example, the text string read is valid. However, if you set postContextText to
     carpet, the context changes and the text string becomes not valid. In this
     case, the API suggests that you change read to red (as in red carpet). This
     text is not checked for grammar or spelling errors. The combined length of the
     text string, preContextText string, and postContextText string may not exceed
     10,000 characters. You may specify this parameter in the query string of a GET
     request or in the body of a POST request.
    :type post_context_text: str
    :param dict custom_headers: headers that will be added to the request
    :param bool raw: returns the direct response alongside the deserialized
     response
    :param operation_config: :ref:`Operation configuration
     overrides<msrest:optionsforoperations>`.
    :return: SpellCheck or ClientRawResponse if raw=true
    :rtype: ~azure.cognitiveservices.language.spellcheck.models.SpellCheck or
     ~msrest.pipeline.ClientRawResponse
    :raises:
     :class:`ErrorResponseException<azure.cognitiveservices.language.spellcheck.models.ErrorResponseException>`
    """
    x_bing_apis_sdk = "true"

    # Construct URL
    url = self.spell_checker.metadata['url']

    # Construct parameters
    query_parameters = {}
    if action_type is not None:
        query_parameters['ActionType'] = self._serialize.query("action_type", action_type, 'str')
    if app_name is not None:
        query_parameters['AppName'] = self._serialize.query("app_name", app_name, 'str')
    if country_code is not None:
        query_parameters['cc'] = self._serialize.query("country_code", country_code, 'str')
    if client_machine_name is not None:
        query_parameters['ClientMachineName'] = self._serialize.query("client_machine_name", client_machine_name, 'str')
    if doc_id is not None:
        query_parameters['DocId'] = self._serialize.query("doc_id", doc_id, 'str')
    if market is not None:
        query_parameters['mkt'] = self._serialize.query("market", market, 'str')
    if session_id is not None:
        query_parameters['SessionId'] = self._serialize.query("session_id", session_id, 'str')
    if set_lang is not None:
        query_parameters['SetLang'] = self._serialize.query("set_lang", set_lang, 'str')
    if user_id is not None:
        query_parameters['UserId'] = self._serialize.query("user_id", user_id, 'str')

    # Construct headers
    header_parameters = {}
    header_parameters['Content-Type'] = 'application/x-www-form-urlencoded'
    if custom_headers:
        header_parameters.update(custom_headers)
    header_parameters['X-BingApis-SDK'] = self._serialize.header("x_bing_apis_sdk", x_bing_apis_sdk, 'str')
    if accept_language is not None:
        header_parameters['Accept-Language'] = self._serialize.header("accept_language", accept_language, 'str')
    if pragma is not None:
        header_parameters['Pragma'] = self._serialize.header("pragma", pragma, 'str')
    if user_agent is not None:
        header_parameters['User-Agent'] = self._serialize.header("user_agent", user_agent, 'str')
    if client_id is not None:
        header_parameters['X-MSEdge-ClientID'] = self._serialize.header("client_id", client_id, 'str')
    if client_ip is not None:
        header_parameters['X-MSEdge-ClientIP'] = self._serialize.header("client_ip", client_ip, 'str')
    if location is not None:
        header_parameters['X-Search-Location'] = self._serialize.header("location", location, 'str')

    # Construct form data
    form_data_content = {
        'Text': text,
        'Mode': mode,
        'PreContextText': pre_context_text,
        'PostContextText': post_context_text,
    }

    # Construct and send request
    request = self._client.post(url, query_parameters)
    response = self._client.send_formdata(
        request, header_parameters, form_data_content, stream=False, **operation_config)

    if response.status_code not in [200]:
        raise models.ErrorResponseException(self._deserialize, response)

    deserialized = None

    if response.status_code == 200:
        deserialized = self._deserialize('SpellCheck', response)

    if raw:
        client_raw_response = ClientRawResponse(deserialized, response)
        return client_raw_response

    return deserialized

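A usage sketch for the generated client. The SpellCheckAPI class name and the CognitiveServicesCredentials helper are assumptions based on the azure-cognitiveservices-language-spellcheck SDK this method appears to belong to; substitute a real subscription key:

    from azure.cognitiveservices.language.spellcheck import SpellCheckAPI  # class name is an assumption
    from msrest.authentication import CognitiveServicesCredentials

    client = SpellCheckAPI(CognitiveServicesCredentials('<subscription-key>'))
    result = client.spell_checker('Bill Gatas', mode='proof')
    for token in result.flagged_tokens:
        print(token.token, [s.suggestion for s in token.suggestions])
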
The Bing Spell Check API lets you perform contextual grammar and spell checking. Bing has developed a web-based spell-checker that leverages machine learning and statistical machine translation to dynamically train a constantly evolving and highly contextual algorithm. The spell-checker is based on a massive corpus of web searches and documents. :param text: The text string to check for spelling and grammar errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. Because of the query string length limit, you'll typically use a POST request unless you're checking only short strings. :type text: str :param accept_language: A comma-delimited list of one or more languages to use for user interface strings. The list is in decreasing order of preference. For additional information, including expected format, see [RFC2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). This header and the setLang query parameter are mutually exclusive; do not specify both. If you set this header, you must also specify the cc query parameter. Bing will use the first supported language it finds from the list, and combine that language with the cc parameter value to determine the market to return results for. If the list does not include a supported language, Bing will find the closest language and market that supports the request, and may use an aggregated or default market for the results instead of a specified one. You should use this header and the cc query parameter only if you specify multiple languages; otherwise, you should use the mkt and setLang query parameters. A user interface string is a string that's used as a label in a user interface. There are very few user interface strings in the JSON response objects. Any links in the response objects to Bing.com properties will apply the specified language. :type accept_language: str :param pragma: By default, Bing returns cached content, if available. To prevent Bing from returning cached content, set the Pragma header to no-cache (for example, Pragma: no-cache). :type pragma: str :param user_agent: The user agent originating the request. Bing uses the user agent to provide mobile users with an optimized experience. Although optional, you are strongly encouraged to always specify this header. The user-agent should be the same string that any commonly used browser would send. For information about user agents, see [RFC 2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). :type user_agent: str :param client_id: Bing uses this header to provide users with consistent behavior across Bing API calls. Bing often flights new features and improvements, and it uses the client ID as a key for assigning traffic on different flights. If you do not use the same client ID for a user across multiple requests, then Bing may assign the user to multiple conflicting flights. Being assigned to multiple conflicting flights can lead to an inconsistent user experience. For example, if the second request has a different flight assignment than the first, the experience may be unexpected. Also, Bing can use the client ID to tailor web results to that client ID’s search history, providing a richer experience for the user. Bing also uses this header to help improve result rankings by analyzing the activity generated by a client ID. 
The relevance improvements help with better quality of results delivered by Bing APIs and in turn enables higher click-through rates for the API consumer. IMPORTANT: Although optional, you should consider this header required. Persisting the client ID across multiple requests for the same end user and device combination enables 1) the API consumer to receive a consistent user experience, and 2) higher click-through rates via better quality of results from the Bing APIs. Each user that uses your application on the device must have a unique, Bing generated client ID. If you do not include this header in the request, Bing generates an ID and returns it in the X-MSEdge-ClientID response header. The only time that you should NOT include this header in a request is the first time the user uses your app on that device. Use the client ID for each Bing API request that your app makes for this user on the device. Persist the client ID. To persist the ID in a browser app, use a persistent HTTP cookie to ensure the ID is used across all sessions. Do not use a session cookie. For other apps such as mobile apps, use the device's persistent storage to persist the ID. The next time the user uses your app on that device, get the client ID that you persisted. Bing responses may or may not include this header. If the response includes this header, capture the client ID and use it for all subsequent Bing requests for the user on that device. If you include the X-MSEdge-ClientID, you must not include cookies in the request. :type client_id: str :param client_ip: The IPv4 or IPv6 address of the client device. The IP address is used to discover the user's location. Bing uses the location information to determine safe search behavior. Although optional, you are encouraged to always specify this header and the X-Search-Location header. Do not obfuscate the address (for example, by changing the last octet to 0). Obfuscating the address results in the location not being anywhere near the device's actual location, which may result in Bing serving erroneous results. :type client_ip: str :param location: A semicolon-delimited list of key/value pairs that describe the client's geographical location. Bing uses the location information to determine safe search behavior and to return relevant local content. Specify the key/value pair as <key>:<value>. The following are the keys that you use to specify the user's location. lat (required): The latitude of the client's location, in degrees. The latitude must be greater than or equal to -90.0 and less than or equal to +90.0. Negative values indicate southern latitudes and positive values indicate northern latitudes. long (required): The longitude of the client's location, in degrees. The longitude must be greater than or equal to -180.0 and less than or equal to +180.0. Negative values indicate western longitudes and positive values indicate eastern longitudes. re (required): The radius, in meters, which specifies the horizontal accuracy of the coordinates. Pass the value returned by the device's location service. Typical values might be 22m for GPS/Wi-Fi, 380m for cell tower triangulation, and 18,000m for reverse IP lookup. ts (optional): The UTC UNIX timestamp of when the client was at the location. (The UNIX timestamp is the number of seconds since January 1, 1970.) head (optional): The client's relative heading or direction of travel. Specify the direction of travel as degrees from 0 through 360, counting clockwise relative to true north. 
Specify this key only if the sp key is nonzero. sp (optional): The horizontal velocity (speed), in meters per second, that the client device is traveling. alt (optional): The altitude of the client device, in meters. are (optional): The radius, in meters, that specifies the vertical accuracy of the coordinates. Specify this key only if you specify the alt key. Although many of the keys are optional, the more information that you provide, the more accurate the location results are. Although optional, you are encouraged to always specify the user's geographical location. Providing the location is especially important if the client's IP address does not accurately reflect the user's physical location (for example, if the client uses VPN). For optimal results, you should include this header and the X-Search-ClientIP header, but at a minimum, you should include this header. :type location: str :param action_type: A string that's used by logging to determine whether the request is coming from an interactive session or a page load. The following are the possible values. 1) Edit—The request is from an interactive session 2) Load—The request is from a page load. Possible values include: 'Edit', 'Load' :type action_type: str or ~azure.cognitiveservices.language.spellcheck.models.ActionType :param app_name: The unique name of your app. The name must be known by Bing. Do not include this parameter unless you have previously contacted Bing to get a unique app name. To get a unique name, contact your Bing Business Development manager. :type app_name: str :param country_code: A 2-character country code of the country where the results come from. This API supports only the United States market. If you specify this query parameter, it must be set to us. If you set this parameter, you must also specify the Accept-Language header. Bing uses the first supported language it finds from the languages list, and combine that language with the country code that you specify to determine the market to return results for. If the languages list does not include a supported language, Bing finds the closest language and market that supports the request, or it may use an aggregated or default market for the results instead of a specified one. You should use this query parameter and the Accept-Language query parameter only if you specify multiple languages; otherwise, you should use the mkt and setLang query parameters. This parameter and the mkt query parameter are mutually exclusive—do not specify both. :type country_code: str :param client_machine_name: A unique name of the device that the request is being made from. Generate a unique value for each device (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type client_machine_name: str :param doc_id: A unique ID that identifies the document that the text belongs to. Generate a unique value for each document (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type doc_id: str :param market: The market where the results come from. You are strongly encouraged to always specify the market, if known. Specifying the market helps Bing route the request and return an appropriate and optimal response. This parameter and the cc query parameter are mutually exclusive—do not specify both. :type market: str :param session_id: A unique ID that identifies this user session. Generate a unique value for each user session (the value is unimportant). 
The service uses the ID to help debug issues and improve the quality of corrections :type session_id: str :param set_lang: The language to use for user interface strings. Specify the language using the ISO 639-1 2-letter language code. For example, the language code for English is EN. The default is EN (English). Although optional, you should always specify the language. Typically, you set setLang to the same language specified by mkt unless the user wants the user interface strings displayed in a different language. This parameter and the Accept-Language header are mutually exclusive—do not specify both. A user interface string is a string that's used as a label in a user interface. There are few user interface strings in the JSON response objects. Also, any links to Bing.com properties in the response objects apply the specified language. :type set_lang: str :param user_id: A unique ID that identifies the user. Generate a unique value for each user (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type user_id: str :param mode: The type of spelling and grammar checks to perform. The following are the possible values (the values are case insensitive). The default is Proof. 1) Proof—Finds most spelling and grammar mistakes. 2) Spell—Finds most spelling mistakes but does not find some of the grammar errors that Proof catches (for example, capitalization and repeated words). Possible values include: 'proof', 'spell' :type mode: str :param pre_context_text: A string that gives context to the text string. For example, the text string petal is valid. However, if you set preContextText to bike, the context changes and the text string becomes not valid. In this case, the API suggests that you change petal to pedal (as in bike pedal). This text is not checked for grammar or spelling errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. :type pre_context_text: str :param post_context_text: A string that gives context to the text string. For example, the text string read is valid. However, if you set postContextText to carpet, the context changes and the text string becomes not valid. In this case, the API suggests that you change read to red (as in red carpet). This text is not checked for grammar or spelling errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. :type post_context_text: str :param dict custom_headers: headers that will be added to the request :param bool raw: returns the direct response alongside the deserialized response :param operation_config: :ref:`Operation configuration overrides<msrest:optionsforoperations>`. :return: SpellCheck or ClientRawResponse if raw=true :rtype: ~azure.cognitiveservices.language.spellcheck.models.SpellCheck or ~msrest.pipeline.ClientRawResponse :raises: :class:`ErrorResponseException<azure.cognitiveservices.language.spellcheck.models.ErrorResponseException>`
Below is the instruction that describes the task:
### Input:
The Bing Spell Check API lets you perform contextual grammar and spell checking. Bing has developed a web-based spell-checker that leverages machine learning and statistical machine translation to dynamically train a constantly evolving and highly contextual algorithm. The spell-checker is based on a massive corpus of web searches and documents. :param text: The text string to check for spelling and grammar errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. Because of the query string length limit, you'll typically use a POST request unless you're checking only short strings. :type text: str :param accept_language: A comma-delimited list of one or more languages to use for user interface strings. The list is in decreasing order of preference. For additional information, including expected format, see [RFC2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). This header and the setLang query parameter are mutually exclusive; do not specify both. If you set this header, you must also specify the cc query parameter. Bing will use the first supported language it finds from the list, and combine that language with the cc parameter value to determine the market to return results for. If the list does not include a supported language, Bing will find the closest language and market that supports the request, and may use an aggregated or default market for the results instead of a specified one. You should use this header and the cc query parameter only if you specify multiple languages; otherwise, you should use the mkt and setLang query parameters. A user interface string is a string that's used as a label in a user interface. There are very few user interface strings in the JSON response objects. Any links in the response objects to Bing.com properties will apply the specified language. :type accept_language: str :param pragma: By default, Bing returns cached content, if available. To prevent Bing from returning cached content, set the Pragma header to no-cache (for example, Pragma: no-cache). :type pragma: str :param user_agent: The user agent originating the request. Bing uses the user agent to provide mobile users with an optimized experience. Although optional, you are strongly encouraged to always specify this header. The user-agent should be the same string that any commonly used browser would send. For information about user agents, see [RFC 2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). :type user_agent: str :param client_id: Bing uses this header to provide users with consistent behavior across Bing API calls. Bing often flights new features and improvements, and it uses the client ID as a key for assigning traffic on different flights. If you do not use the same client ID for a user across multiple requests, then Bing may assign the user to multiple conflicting flights. Being assigned to multiple conflicting flights can lead to an inconsistent user experience. For example, if the second request has a different flight assignment than the first, the experience may be unexpected. Also, Bing can use the client ID to tailor web results to that client ID’s search history, providing a richer experience for the user. Bing also uses this header to help improve result rankings by analyzing the activity generated by a client ID.
The relevance improvements help with better quality of results delivered by Bing APIs and in turn enable higher click-through rates for the API consumer. IMPORTANT: Although optional, you should consider this header required. Persisting the client ID across multiple requests for the same end user and device combination enables 1) the API consumer to receive a consistent user experience, and 2) higher click-through rates via better quality of results from the Bing APIs. Each user that uses your application on the device must have a unique, Bing generated client ID. If you do not include this header in the request, Bing generates an ID and returns it in the X-MSEdge-ClientID response header. The only time that you should NOT include this header in a request is the first time the user uses your app on that device. Use the client ID for each Bing API request that your app makes for this user on the device. Persist the client ID. To persist the ID in a browser app, use a persistent HTTP cookie to ensure the ID is used across all sessions. Do not use a session cookie. For other apps such as mobile apps, use the device's persistent storage to persist the ID. The next time the user uses your app on that device, get the client ID that you persisted. Bing responses may or may not include this header. If the response includes this header, capture the client ID and use it for all subsequent Bing requests for the user on that device. If you include the X-MSEdge-ClientID, you must not include cookies in the request. :type client_id: str :param client_ip: The IPv4 or IPv6 address of the client device. The IP address is used to discover the user's location. Bing uses the location information to determine safe search behavior. Although optional, you are encouraged to always specify this header and the X-Search-Location header. Do not obfuscate the address (for example, by changing the last octet to 0). Obfuscating the address results in the location not being anywhere near the device's actual location, which may result in Bing serving erroneous results. :type client_ip: str :param location: A semicolon-delimited list of key/value pairs that describe the client's geographical location. Bing uses the location information to determine safe search behavior and to return relevant local content. Specify the key/value pair as <key>:<value>. The following are the keys that you use to specify the user's location. lat (required): The latitude of the client's location, in degrees. The latitude must be greater than or equal to -90.0 and less than or equal to +90.0. Negative values indicate southern latitudes and positive values indicate northern latitudes. long (required): The longitude of the client's location, in degrees. The longitude must be greater than or equal to -180.0 and less than or equal to +180.0. Negative values indicate western longitudes and positive values indicate eastern longitudes. re (required): The radius, in meters, which specifies the horizontal accuracy of the coordinates. Pass the value returned by the device's location service. Typical values might be 22m for GPS/Wi-Fi, 380m for cell tower triangulation, and 18,000m for reverse IP lookup. ts (optional): The UTC UNIX timestamp of when the client was at the location. (The UNIX timestamp is the number of seconds since January 1, 1970.) head (optional): The client's relative heading or direction of travel. Specify the direction of travel as degrees from 0 through 360, counting clockwise relative to true north.
Specify this key only if the sp key is nonzero. sp (optional): The horizontal velocity (speed), in meters per second, that the client device is traveling. alt (optional): The altitude of the client device, in meters. are (optional): The radius, in meters, that specifies the vertical accuracy of the coordinates. Specify this key only if you specify the alt key. Although many of the keys are optional, the more information that you provide, the more accurate the location results are. Although optional, you are encouraged to always specify the user's geographical location. Providing the location is especially important if the client's IP address does not accurately reflect the user's physical location (for example, if the client uses VPN). For optimal results, you should include this header and the X-Search-ClientIP header, but at a minimum, you should include this header. :type location: str :param action_type: A string that's used by logging to determine whether the request is coming from an interactive session or a page load. The following are the possible values. 1) Edit—The request is from an interactive session 2) Load—The request is from a page load. Possible values include: 'Edit', 'Load' :type action_type: str or ~azure.cognitiveservices.language.spellcheck.models.ActionType :param app_name: The unique name of your app. The name must be known by Bing. Do not include this parameter unless you have previously contacted Bing to get a unique app name. To get a unique name, contact your Bing Business Development manager. :type app_name: str :param country_code: A 2-character country code of the country where the results come from. This API supports only the United States market. If you specify this query parameter, it must be set to us. If you set this parameter, you must also specify the Accept-Language header. Bing uses the first supported language it finds from the languages list, and combines that language with the country code that you specify to determine the market to return results for. If the languages list does not include a supported language, Bing finds the closest language and market that supports the request, or it may use an aggregated or default market for the results instead of a specified one. You should use this query parameter and the Accept-Language query parameter only if you specify multiple languages; otherwise, you should use the mkt and setLang query parameters. This parameter and the mkt query parameter are mutually exclusive—do not specify both. :type country_code: str :param client_machine_name: A unique name of the device that the request is being made from. Generate a unique value for each device (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type client_machine_name: str :param doc_id: A unique ID that identifies the document that the text belongs to. Generate a unique value for each document (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type doc_id: str :param market: The market where the results come from. You are strongly encouraged to always specify the market, if known. Specifying the market helps Bing route the request and return an appropriate and optimal response. This parameter and the cc query parameter are mutually exclusive—do not specify both. :type market: str :param session_id: A unique ID that identifies this user session. Generate a unique value for each user session (the value is unimportant).
The service uses the ID to help debug issues and improve the quality of corrections :type session_id: str :param set_lang: The language to use for user interface strings. Specify the language using the ISO 639-1 2-letter language code. For example, the language code for English is EN. The default is EN (English). Although optional, you should always specify the language. Typically, you set setLang to the same language specified by mkt unless the user wants the user interface strings displayed in a different language. This parameter and the Accept-Language header are mutually exclusive—do not specify both. A user interface string is a string that's used as a label in a user interface. There are few user interface strings in the JSON response objects. Also, any links to Bing.com properties in the response objects apply the specified language. :type set_lang: str :param user_id: A unique ID that identifies the user. Generate a unique value for each user (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type user_id: str :param mode: The type of spelling and grammar checks to perform. The following are the possible values (the values are case insensitive). The default is Proof. 1) Proof—Finds most spelling and grammar mistakes. 2) Spell—Finds most spelling mistakes but does not find some of the grammar errors that Proof catches (for example, capitalization and repeated words). Possible values include: 'proof', 'spell' :type mode: str :param pre_context_text: A string that gives context to the text string. For example, the text string petal is valid. However, if you set preContextText to bike, the context changes and the text string becomes not valid. In this case, the API suggests that you change petal to pedal (as in bike pedal). This text is not checked for grammar or spelling errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. :type pre_context_text: str :param post_context_text: A string that gives context to the text string. For example, the text string read is valid. However, if you set postContextText to carpet, the context changes and the text string becomes not valid. In this case, the API suggests that you change read to red (as in red carpet). This text is not checked for grammar or spelling errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. :type post_context_text: str :param dict custom_headers: headers that will be added to the request :param bool raw: returns the direct response alongside the deserialized response :param operation_config: :ref:`Operation configuration overrides<msrest:optionsforoperations>`. 
:return: SpellCheck or ClientRawResponse if raw=true :rtype: ~azure.cognitiveservices.language.spellcheck.models.SpellCheck or ~msrest.pipeline.ClientRawResponse :raises: :class:`ErrorResponseException<azure.cognitiveservices.language.spellcheck.models.ErrorResponseException>` ### Response: def spell_checker( self, text, accept_language=None, pragma=None, user_agent=None, client_id=None, client_ip=None, location=None, action_type=None, app_name=None, country_code=None, client_machine_name=None, doc_id=None, market=None, session_id=None, set_lang=None, user_id=None, mode=None, pre_context_text=None, post_context_text=None, custom_headers=None, raw=False, **operation_config): """The Bing Spell Check API lets you perform contextual grammar and spell checking. Bing has developed a web-based spell-checker that leverages machine learning and statistical machine translation to dynamically train a constantly evolving and highly contextual algorithm. The spell-checker is based on a massive corpus of web searches and documents. :param text: The text string to check for spelling and grammar errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. Because of the query string length limit, you'll typically use a POST request unless you're checking only short strings. :type text: str :param accept_language: A comma-delimited list of one or more languages to use for user interface strings. The list is in decreasing order of preference. For additional information, including expected format, see [RFC2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). This header and the setLang query parameter are mutually exclusive; do not specify both. If you set this header, you must also specify the cc query parameter. Bing will use the first supported language it finds from the list, and combine that language with the cc parameter value to determine the market to return results for. If the list does not include a supported language, Bing will find the closest language and market that supports the request, and may use an aggregated or default market for the results instead of a specified one. You should use this header and the cc query parameter only if you specify multiple languages; otherwise, you should use the mkt and setLang query parameters. A user interface string is a string that's used as a label in a user interface. There are very few user interface strings in the JSON response objects. Any links in the response objects to Bing.com properties will apply the specified language. :type accept_language: str :param pragma: By default, Bing returns cached content, if available. To prevent Bing from returning cached content, set the Pragma header to no-cache (for example, Pragma: no-cache). :type pragma: str :param user_agent: The user agent originating the request. Bing uses the user agent to provide mobile users with an optimized experience. Although optional, you are strongly encouraged to always specify this header. The user-agent should be the same string that any commonly used browser would send. For information about user agents, see [RFC 2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). :type user_agent: str :param client_id: Bing uses this header to provide users with consistent behavior across Bing API calls. 
Bing often flights new features and improvements, and it uses the client ID as a key for assigning traffic on different flights. If you do not use the same client ID for a user across multiple requests, then Bing may assign the user to multiple conflicting flights. Being assigned to multiple conflicting flights can lead to an inconsistent user experience. For example, if the second request has a different flight assignment than the first, the experience may be unexpected. Also, Bing can use the client ID to tailor web results to that client ID’s search history, providing a richer experience for the user. Bing also uses this header to help improve result rankings by analyzing the activity generated by a client ID. The relevance improvements help with better quality of results delivered by Bing APIs and in turn enable higher click-through rates for the API consumer. IMPORTANT: Although optional, you should consider this header required. Persisting the client ID across multiple requests for the same end user and device combination enables 1) the API consumer to receive a consistent user experience, and 2) higher click-through rates via better quality of results from the Bing APIs. Each user that uses your application on the device must have a unique, Bing generated client ID. If you do not include this header in the request, Bing generates an ID and returns it in the X-MSEdge-ClientID response header. The only time that you should NOT include this header in a request is the first time the user uses your app on that device. Use the client ID for each Bing API request that your app makes for this user on the device. Persist the client ID. To persist the ID in a browser app, use a persistent HTTP cookie to ensure the ID is used across all sessions. Do not use a session cookie. For other apps such as mobile apps, use the device's persistent storage to persist the ID. The next time the user uses your app on that device, get the client ID that you persisted. Bing responses may or may not include this header. If the response includes this header, capture the client ID and use it for all subsequent Bing requests for the user on that device. If you include the X-MSEdge-ClientID, you must not include cookies in the request. :type client_id: str :param client_ip: The IPv4 or IPv6 address of the client device. The IP address is used to discover the user's location. Bing uses the location information to determine safe search behavior. Although optional, you are encouraged to always specify this header and the X-Search-Location header. Do not obfuscate the address (for example, by changing the last octet to 0). Obfuscating the address results in the location not being anywhere near the device's actual location, which may result in Bing serving erroneous results. :type client_ip: str :param location: A semicolon-delimited list of key/value pairs that describe the client's geographical location. Bing uses the location information to determine safe search behavior and to return relevant local content. Specify the key/value pair as <key>:<value>. The following are the keys that you use to specify the user's location. lat (required): The latitude of the client's location, in degrees. The latitude must be greater than or equal to -90.0 and less than or equal to +90.0. Negative values indicate southern latitudes and positive values indicate northern latitudes. long (required): The longitude of the client's location, in degrees.
The longitude must be greater than or equal to -180.0 and less than or equal to +180.0. Negative values indicate western longitudes and positive values indicate eastern longitudes. re (required): The radius, in meters, which specifies the horizontal accuracy of the coordinates. Pass the value returned by the device's location service. Typical values might be 22m for GPS/Wi-Fi, 380m for cell tower triangulation, and 18,000m for reverse IP lookup. ts (optional): The UTC UNIX timestamp of when the client was at the location. (The UNIX timestamp is the number of seconds since January 1, 1970.) head (optional): The client's relative heading or direction of travel. Specify the direction of travel as degrees from 0 through 360, counting clockwise relative to true north. Specify this key only if the sp key is nonzero. sp (optional): The horizontal velocity (speed), in meters per second, that the client device is traveling. alt (optional): The altitude of the client device, in meters. are (optional): The radius, in meters, that specifies the vertical accuracy of the coordinates. Specify this key only if you specify the alt key. Although many of the keys are optional, the more information that you provide, the more accurate the location results are. Although optional, you are encouraged to always specify the user's geographical location. Providing the location is especially important if the client's IP address does not accurately reflect the user's physical location (for example, if the client uses VPN). For optimal results, you should include this header and the X-Search-ClientIP header, but at a minimum, you should include this header. :type location: str :param action_type: A string that's used by logging to determine whether the request is coming from an interactive session or a page load. The following are the possible values. 1) Edit—The request is from an interactive session 2) Load—The request is from a page load. Possible values include: 'Edit', 'Load' :type action_type: str or ~azure.cognitiveservices.language.spellcheck.models.ActionType :param app_name: The unique name of your app. The name must be known by Bing. Do not include this parameter unless you have previously contacted Bing to get a unique app name. To get a unique name, contact your Bing Business Development manager. :type app_name: str :param country_code: A 2-character country code of the country where the results come from. This API supports only the United States market. If you specify this query parameter, it must be set to us. If you set this parameter, you must also specify the Accept-Language header. Bing uses the first supported language it finds from the languages list, and combines that language with the country code that you specify to determine the market to return results for. If the languages list does not include a supported language, Bing finds the closest language and market that supports the request, or it may use an aggregated or default market for the results instead of a specified one. You should use this query parameter and the Accept-Language query parameter only if you specify multiple languages; otherwise, you should use the mkt and setLang query parameters. This parameter and the mkt query parameter are mutually exclusive—do not specify both. :type country_code: str :param client_machine_name: A unique name of the device that the request is being made from. Generate a unique value for each device (the value is unimportant).
The service uses the ID to help debug issues and improve the quality of corrections. :type client_machine_name: str :param doc_id: A unique ID that identifies the document that the text belongs to. Generate a unique value for each document (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type doc_id: str :param market: The market where the results come from. You are strongly encouraged to always specify the market, if known. Specifying the market helps Bing route the request and return an appropriate and optimal response. This parameter and the cc query parameter are mutually exclusive—do not specify both. :type market: str :param session_id: A unique ID that identifies this user session. Generate a unique value for each user session (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections :type session_id: str :param set_lang: The language to use for user interface strings. Specify the language using the ISO 639-1 2-letter language code. For example, the language code for English is EN. The default is EN (English). Although optional, you should always specify the language. Typically, you set setLang to the same language specified by mkt unless the user wants the user interface strings displayed in a different language. This parameter and the Accept-Language header are mutually exclusive—do not specify both. A user interface string is a string that's used as a label in a user interface. There are few user interface strings in the JSON response objects. Also, any links to Bing.com properties in the response objects apply the specified language. :type set_lang: str :param user_id: A unique ID that identifies the user. Generate a unique value for each user (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type user_id: str :param mode: The type of spelling and grammar checks to perform. The following are the possible values (the values are case insensitive). The default is Proof. 1) Proof—Finds most spelling and grammar mistakes. 2) Spell—Finds most spelling mistakes but does not find some of the grammar errors that Proof catches (for example, capitalization and repeated words). Possible values include: 'proof', 'spell' :type mode: str :param pre_context_text: A string that gives context to the text string. For example, the text string petal is valid. However, if you set preContextText to bike, the context changes and the text string becomes not valid. In this case, the API suggests that you change petal to pedal (as in bike pedal). This text is not checked for grammar or spelling errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. :type pre_context_text: str :param post_context_text: A string that gives context to the text string. For example, the text string read is valid. However, if you set postContextText to carpet, the context changes and the text string becomes not valid. In this case, the API suggests that you change read to red (as in red carpet). This text is not checked for grammar or spelling errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. 
:type post_context_text: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the deserialized response
:param operation_config: :ref:`Operation configuration overrides<msrest:optionsforoperations>`.
:return: SpellCheck or ClientRawResponse if raw=true
:rtype: ~azure.cognitiveservices.language.spellcheck.models.SpellCheck or ~msrest.pipeline.ClientRawResponse
:raises: :class:`ErrorResponseException<azure.cognitiveservices.language.spellcheck.models.ErrorResponseException>`
    """
    x_bing_apis_sdk = "true"

    # Construct URL
    url = self.spell_checker.metadata['url']

    # Construct parameters
    query_parameters = {}
    if action_type is not None:
        query_parameters['ActionType'] = self._serialize.query("action_type", action_type, 'str')
    if app_name is not None:
        query_parameters['AppName'] = self._serialize.query("app_name", app_name, 'str')
    if country_code is not None:
        query_parameters['cc'] = self._serialize.query("country_code", country_code, 'str')
    if client_machine_name is not None:
        query_parameters['ClientMachineName'] = self._serialize.query("client_machine_name", client_machine_name, 'str')
    if doc_id is not None:
        query_parameters['DocId'] = self._serialize.query("doc_id", doc_id, 'str')
    if market is not None:
        query_parameters['mkt'] = self._serialize.query("market", market, 'str')
    if session_id is not None:
        query_parameters['SessionId'] = self._serialize.query("session_id", session_id, 'str')
    if set_lang is not None:
        query_parameters['SetLang'] = self._serialize.query("set_lang", set_lang, 'str')
    if user_id is not None:
        query_parameters['UserId'] = self._serialize.query("user_id", user_id, 'str')

    # Construct headers
    header_parameters = {}
    header_parameters['Content-Type'] = 'application/x-www-form-urlencoded'
    if custom_headers:
        header_parameters.update(custom_headers)
    header_parameters['X-BingApis-SDK'] = self._serialize.header("x_bing_apis_sdk", x_bing_apis_sdk, 'str')
    if accept_language is not None:
        header_parameters['Accept-Language'] = self._serialize.header("accept_language", accept_language, 'str')
    if pragma is not None:
        header_parameters['Pragma'] = self._serialize.header("pragma", pragma, 'str')
    if user_agent is not None:
        header_parameters['User-Agent'] = self._serialize.header("user_agent", user_agent, 'str')
    if client_id is not None:
        header_parameters['X-MSEdge-ClientID'] = self._serialize.header("client_id", client_id, 'str')
    if client_ip is not None:
        header_parameters['X-MSEdge-ClientIP'] = self._serialize.header("client_ip", client_ip, 'str')
    if location is not None:
        header_parameters['X-Search-Location'] = self._serialize.header("location", location, 'str')

    # Construct form data
    form_data_content = {
        'Text': text,
        'Mode': mode,
        'PreContextText': pre_context_text,
        'PostContextText': post_context_text,
    }

    # Construct and send request
    request = self._client.post(url, query_parameters)
    response = self._client.send_formdata(
        request, header_parameters, form_data_content, stream=False, **operation_config)

    if response.status_code not in [200]:
        raise models.ErrorResponseException(self._deserialize, response)

    deserialized = None
    if response.status_code == 200:
        deserialized = self._deserialize('SpellCheck', response)

    if raw:
        client_raw_response = ClientRawResponse(deserialized, response)
        return client_raw_response

    return deserialized
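For orientation, the same operation over raw REST is a short script. This is a hedged sketch, not the SDK's own code: the endpoint path and the Text/Mode form fields follow the Bing Spell Check v7 conventions mirrored by the form data above, and the subscription key is a placeholder.

import requests

endpoint = 'https://api.cognitive.microsoft.com/bing/v7.0/spellcheck'
headers = {
    'Ocp-Apim-Subscription-Key': 'YOUR_KEY_HERE',  # placeholder, not a real key
    'Content-Type': 'application/x-www-form-urlencoded',
}
# Form fields match form_data_content in the generated client above.
resp = requests.post(endpoint, headers=headers,
                     params={'mkt': 'en-US'},
                     data={'Text': 'Bill Gatas', 'Mode': 'proof'})
print(resp.json().get('flaggedTokens'))  # suggested corrections for 'Gatas'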
def _sign_payload(self, payload):
    """Sign the payload with the appkey and return the new request parameters."""
    app_key = self._app_key
    t = int(time.time() * 1000)
    requestStr = {
        'header': self._req_header,
        'model': payload
    }
    data = json.dumps({'requestStr': json.dumps(requestStr)})
    data_str = '{}&{}&{}&{}'.format(self._req_token, t, app_key, data)
    sign = hashlib.md5(data_str.encode('utf-8')).hexdigest()
    params = {
        't': t,
        'appKey': app_key,
        'sign': sign,
        'data': data,
    }
    return params
Sign the payload with the appkey and return the new request parameters.
Below is the instruction that describes the task:
### Input:
Sign the payload with the appkey and return the new request parameters.
### Response:
def _sign_payload(self, payload):
    """Sign the payload with the appkey and return the new request parameters."""
    app_key = self._app_key
    t = int(time.time() * 1000)
    requestStr = {
        'header': self._req_header,
        'model': payload
    }
    data = json.dumps({'requestStr': json.dumps(requestStr)})
    data_str = '{}&{}&{}&{}'.format(self._req_token, t, app_key, data)
    sign = hashlib.md5(data_str.encode('utf-8')).hexdigest()
    params = {
        't': t,
        'appKey': app_key,
        'sign': sign,
        'data': data,
    }
    return params
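To make the signing scheme concrete, here is a minimal standalone sketch of the same MD5 signature computation; the token, app key, and header values are hypothetical placeholders, not real credentials.

import hashlib
import json
import time

def sign_payload(req_token, app_key, req_header, payload):
    # Wrap the payload the same way the client does: a JSON string inside a JSON string.
    request_str = {'header': req_header, 'model': payload}
    data = json.dumps({'requestStr': json.dumps(request_str)})
    t = int(time.time() * 1000)
    # The signature is the MD5 hex digest of token & timestamp & app key & data.
    digest_input = '{}&{}&{}&{}'.format(req_token, t, app_key, data)
    sign = hashlib.md5(digest_input.encode('utf-8')).hexdigest()
    return {'t': t, 'appKey': app_key, 'sign': sign, 'data': data}

params = sign_payload('dummy-token', '12345', {'appVersion': '1.0'}, {'itemId': 42})
print(params['sign'])  # 32-character hex digest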
def from_pycbc(cls, fs, copy=True):
    """Convert a `pycbc.types.frequencyseries.FrequencySeries` into a
    `FrequencySeries`

    Parameters
    ----------
    fs : `pycbc.types.frequencyseries.FrequencySeries`
        the input PyCBC `~pycbc.types.frequencyseries.FrequencySeries` array

    copy : `bool`, optional, default: `True`
        if `True`, copy these data to a new array

    Returns
    -------
    spectrum : `FrequencySeries`
        a GWpy version of the input frequency series
    """
    return cls(fs.data, f0=0, df=fs.delta_f, epoch=fs.epoch, copy=copy)
Convert a `pycbc.types.frequencyseries.FrequencySeries` into a `FrequencySeries`

Parameters
----------
fs : `pycbc.types.frequencyseries.FrequencySeries`
    the input PyCBC `~pycbc.types.frequencyseries.FrequencySeries` array

copy : `bool`, optional, default: `True`
    if `True`, copy these data to a new array

Returns
-------
spectrum : `FrequencySeries`
    a GWpy version of the input frequency series
Below is the instruction that describes the task:
### Input:
Convert a `pycbc.types.frequencyseries.FrequencySeries` into a `FrequencySeries`

Parameters
----------
fs : `pycbc.types.frequencyseries.FrequencySeries`
    the input PyCBC `~pycbc.types.frequencyseries.FrequencySeries` array

copy : `bool`, optional, default: `True`
    if `True`, copy these data to a new array

Returns
-------
spectrum : `FrequencySeries`
    a GWpy version of the input frequency series
### Response:
def from_pycbc(cls, fs, copy=True):
    """Convert a `pycbc.types.frequencyseries.FrequencySeries` into a
    `FrequencySeries`

    Parameters
    ----------
    fs : `pycbc.types.frequencyseries.FrequencySeries`
        the input PyCBC `~pycbc.types.frequencyseries.FrequencySeries` array

    copy : `bool`, optional, default: `True`
        if `True`, copy these data to a new array

    Returns
    -------
    spectrum : `FrequencySeries`
        a GWpy version of the input frequency series
    """
    return cls(fs.data, f0=0, df=fs.delta_f, epoch=fs.epoch, copy=copy)
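A quick usage sketch, assuming both gwpy and pycbc are installed; the bin width and array values are arbitrary illustration, not values from the source.

import numpy as np
from pycbc.types import FrequencySeries as PyCBCFrequencySeries
from gwpy.frequencyseries import FrequencySeries

# Build a toy PyCBC frequency series with a 0.25 Hz bin width.
pycbc_fs = PyCBCFrequencySeries(np.arange(100, dtype=float), delta_f=0.25)
gwpy_fs = FrequencySeries.from_pycbc(pycbc_fs)
print(gwpy_fs.df)  # 0.25 Hz, carried over from delta_f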
def attachviewers(self, profiles):
    """Attach viewers *and converters* to the file, automatically scanning all profiles for the matching outputtemplate or inputtemplate"""
    if self.metadata:
        template = None
        for profile in profiles:
            if isinstance(self, CLAMInputFile):
                for t in profile.input:
                    if self.metadata.inputtemplate == t.id:
                        template = t
                        break
            elif isinstance(self, CLAMOutputFile) and self.metadata and self.metadata.provenance:
                for t in profile.outputtemplates():
                    if self.metadata.provenance.outputtemplate_id == t.id:
                        template = t
                        break
            else:
                raise NotImplementedError  # Is ok, nothing to implement for now
            if template:
                break
        if template and template.viewers:
            for viewer in template.viewers:
                self.viewers.append(viewer)
        if template and template.converters:
            for converter in template.converters:
                self.converters.append(converter)
Attach viewers *and converters* to the file, automatically scanning all profiles for the matching outputtemplate or inputtemplate
Below is the instruction that describes the task:
### Input:
Attach viewers *and converters* to the file, automatically scanning all profiles for the matching outputtemplate or inputtemplate
### Response:
def attachviewers(self, profiles):
    """Attach viewers *and converters* to the file, automatically scanning all profiles for the matching outputtemplate or inputtemplate"""
    if self.metadata:
        template = None
        for profile in profiles:
            if isinstance(self, CLAMInputFile):
                for t in profile.input:
                    if self.metadata.inputtemplate == t.id:
                        template = t
                        break
            elif isinstance(self, CLAMOutputFile) and self.metadata and self.metadata.provenance:
                for t in profile.outputtemplates():
                    if self.metadata.provenance.outputtemplate_id == t.id:
                        template = t
                        break
            else:
                raise NotImplementedError  # Is ok, nothing to implement for now
            if template:
                break
        if template and template.viewers:
            for viewer in template.viewers:
                self.viewers.append(viewer)
        if template and template.converters:
            for converter in template.converters:
                self.converters.append(converter)
def mono_FM(x, fs=2.4e6, file_name='test.wav'):
    """
    Decimate complex baseband input by 10
    Design 1st decimation lowpass filter (f_c = 200 KHz)
    """
    b = signal.firwin(64, 2*200e3/float(fs))
    # Filter and decimate (should be polyphase)
    y = signal.lfilter(b, 1, x)
    z = ss.downsample(y, 10)
    # Apply complex baseband discriminator
    z_bb = discrim(z)
    # Design 2nd decimation lowpass filter (fc = 12 KHz)
    bb = signal.firwin(64, 2*12e3/(float(fs)/10))
    # Filter and decimate
    zz_bb = signal.lfilter(bb, 1, z_bb)
    # Decimate by 5
    z_out = ss.downsample(zz_bb, 5)
    # Save to wave file
    ss.to_wav(file_name, 48000, z_out/2)
    print('Done!')
    return z_bb, z_out
Decimate complex baseband input by 10
Design 1st decimation lowpass filter (f_c = 200 KHz)
Below is the instruction that describes the task:
### Input:
Decimate complex baseband input by 10
Design 1st decimation lowpass filter (f_c = 200 KHz)
### Response:
def mono_FM(x, fs=2.4e6, file_name='test.wav'):
    """
    Decimate complex baseband input by 10
    Design 1st decimation lowpass filter (f_c = 200 KHz)
    """
    b = signal.firwin(64, 2*200e3/float(fs))
    # Filter and decimate (should be polyphase)
    y = signal.lfilter(b, 1, x)
    z = ss.downsample(y, 10)
    # Apply complex baseband discriminator
    z_bb = discrim(z)
    # Design 2nd decimation lowpass filter (fc = 12 KHz)
    bb = signal.firwin(64, 2*12e3/(float(fs)/10))
    # Filter and decimate
    zz_bb = signal.lfilter(bb, 1, z_bb)
    # Decimate by 5
    z_out = ss.downsample(zz_bb, 5)
    # Save to wave file
    ss.to_wav(file_name, 48000, z_out/2)
    print('Done!')
    return z_bb, z_out
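The rate arithmetic behind the two-stage decimation is easy to verify standalone: 2.4 MHz / 10 = 240 kHz after the first stage, and 240 kHz / 5 = 48 kHz, the rate passed to the WAV writer. A minimal sketch using only scipy (the discriminator step is omitted and the 5 kHz test tone is an arbitrary illustration):

import numpy as np
from scipy import signal

fs = 2.4e6
n = np.arange(int(fs * 0.01))  # 10 ms of samples
x = np.exp(2j * np.pi * 5e3 * n / fs)  # complex tone at 5 kHz

# Stage 1: lowpass at 200 kHz, then keep every 10th sample -> 240 kHz
b1 = signal.firwin(64, 2 * 200e3 / fs)
y = signal.lfilter(b1, 1, x)[::10]

# Stage 2: lowpass at 12 kHz relative to 240 kHz, then keep every 5th -> 48 kHz
b2 = signal.firwin(64, 2 * 12e3 / (fs / 10))
z = signal.lfilter(b2, 1, y)[::5]

print(len(x), len(y), len(z))  # counts fall by 10x, then by another 5x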
def _get_deps(self, tree, include_punct, representation, universal):
    """Get a list of dependencies from a Stanford Tree for a specific
    Stanford Dependencies representation."""
    if universal:
        converter = self.universal_converter
        if self.universal_converter == self.converter:
            import warnings
            warnings.warn("This jar doesn't support universal "
                          "dependencies, falling back to Stanford "
                          "Dependencies. To suppress this message, "
                          "call with universal=False")
    else:
        converter = self.converter
    if include_punct:
        egs = converter(tree, self.acceptFilter)
    else:
        egs = converter(tree)
    if representation == 'basic':
        deps = egs.typedDependencies()
    elif representation == 'collapsed':
        deps = egs.typedDependenciesCollapsed(True)
    elif representation == 'CCprocessed':
        deps = egs.typedDependenciesCCprocessed(True)
    else:
        # _raise_on_bad_representation should ensure that this
        # assertion doesn't fail
        assert representation == 'collapsedTree'
        deps = egs.typedDependenciesCollapsedTree()
    return self._listify(deps)
Get a list of dependencies from a Stanford Tree for a specific Stanford Dependencies representation.
Below is the instruction that describes the task:
### Input:
Get a list of dependencies from a Stanford Tree for a specific Stanford Dependencies representation.
### Response:
def _get_deps(self, tree, include_punct, representation, universal):
    """Get a list of dependencies from a Stanford Tree for a specific
    Stanford Dependencies representation."""
    if universal:
        converter = self.universal_converter
        if self.universal_converter == self.converter:
            import warnings
            warnings.warn("This jar doesn't support universal "
                          "dependencies, falling back to Stanford "
                          "Dependencies. To suppress this message, "
                          "call with universal=False")
    else:
        converter = self.converter
    if include_punct:
        egs = converter(tree, self.acceptFilter)
    else:
        egs = converter(tree)
    if representation == 'basic':
        deps = egs.typedDependencies()
    elif representation == 'collapsed':
        deps = egs.typedDependenciesCollapsed(True)
    elif representation == 'CCprocessed':
        deps = egs.typedDependenciesCCprocessed(True)
    else:
        # _raise_on_bad_representation should ensure that this
        # assertion doesn't fail
        assert representation == 'collapsedTree'
        deps = egs.typedDependenciesCollapsedTree()
    return self._listify(deps)
def get_core(self):
    """
    Get an unsatisfiable core if the formula was previously unsatisfied.
    """
    if self.minicard and self.status is False:
        return pysolvers.minicard_core(self.minicard)
Get an unsatisfiable core if the formula was previously unsatisfied.
Below is the instruction that describes the task:
### Input:
Get an unsatisfiable core if the formula was previously unsatisfied.
### Response:
def get_core(self):
    """
    Get an unsatisfiable core if the formula was previously unsatisfied.
    """
    if self.minicard and self.status is False:
        return pysolvers.minicard_core(self.minicard)
def wrap_iterable(obj):
    """
    Returns:
        wrapped_obj, was_scalar
    """
    was_scalar = not isiterable(obj)
    wrapped_obj = [obj] if was_scalar else obj
    return wrapped_obj, was_scalar
Returns: wrapped_obj, was_scalar
Below is the instruction that describes the task:
### Input:
Returns:
    wrapped_obj, was_scalar
### Response:
def wrap_iterable(obj):
    """
    Returns:
        wrapped_obj, was_scalar
    """
    was_scalar = not isiterable(obj)
    wrapped_obj = [obj] if was_scalar else obj
    return wrapped_obj, was_scalar
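A self-contained usage sketch; `isiterable` here is a stand-in for the library's own helper, not its actual implementation.

def isiterable(obj):
    # Stand-in: treat anything exposing __iter__ as iterable.
    return hasattr(obj, '__iter__')

def wrap_iterable(obj):
    was_scalar = not isiterable(obj)
    return ([obj] if was_scalar else obj), was_scalar

print(wrap_iterable(3))       # ([3], True): a scalar gets wrapped in a list
print(wrap_iterable([3, 4]))  # ([3, 4], False): an iterable passes through unchanged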
def send_text(self, sender, receiver_type, receiver_id, content):
    """
    Send a text message

    For details, see
    https://qydev.weixin.qq.com/wiki/index.php?title=企业会话接口说明

    :param sender: the sender
    :param receiver_type: receiver type: single|group, meaning a one-on-one chat or a group chat respectively
    :param receiver_id: receiver value, userid|chatid, meaning a member id or a chat id respectively
    :param content: the message content
    :return: the returned JSON data packet
    """
    data = {
        'receiver': {
            'type': receiver_type,
            'id': receiver_id,
        },
        'sender': sender,
        'msgtype': 'text',
        'text': {
            'content': content,
        }
    }
    return self._post('chat/send', data=data)
Send a text message

For details, see https://qydev.weixin.qq.com/wiki/index.php?title=企业会话接口说明

:param sender: the sender
:param receiver_type: receiver type: single|group, meaning a one-on-one chat or a group chat respectively
:param receiver_id: receiver value, userid|chatid, meaning a member id or a chat id respectively
:param content: the message content
:return: the returned JSON data packet
Below is the instruction that describes the task:
### Input:
Send a text message

For details, see https://qydev.weixin.qq.com/wiki/index.php?title=企业会话接口说明

:param sender: the sender
:param receiver_type: receiver type: single|group, meaning a one-on-one chat or a group chat respectively
:param receiver_id: receiver value, userid|chatid, meaning a member id or a chat id respectively
:param content: the message content
:return: the returned JSON data packet
### Response:
def send_text(self, sender, receiver_type, receiver_id, content):
    """
    Send a text message

    For details, see
    https://qydev.weixin.qq.com/wiki/index.php?title=企业会话接口说明

    :param sender: the sender
    :param receiver_type: receiver type: single|group, meaning a one-on-one chat or a group chat respectively
    :param receiver_id: receiver value, userid|chatid, meaning a member id or a chat id respectively
    :param content: the message content
    :return: the returned JSON data packet
    """
    data = {
        'receiver': {
            'type': receiver_type,
            'id': receiver_id,
        },
        'sender': sender,
        'msgtype': 'text',
        'text': {
            'content': content,
        }
    }
    return self._post('chat/send', data=data)
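The request body is plain JSON; a standalone sketch of what ends up posted to chat/send, with made-up sender and receiver ids:

import json

payload = {
    'receiver': {'type': 'single', 'id': 'zhangsan'},  # or type 'group' with a chatid
    'sender': 'lisi',
    'msgtype': 'text',
    'text': {'content': 'hello'},
}
print(json.dumps(payload, ensure_ascii=False))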
def user_token(scopes, client_id=None, client_secret=None, redirect_uri=None):
    """
    Generate a user access token

    :param List[str] scopes: Scopes to get
    :param str client_id: Spotify Client ID
    :param str client_secret: Spotify Client secret
    :param str redirect_uri: Spotify redirect URI
    :return: Generated access token
    :rtype: User
    """
    webbrowser.open_new(authorize_url(client_id=client_id, redirect_uri=redirect_uri, scopes=scopes))
    code = parse_code(raw_input('Enter the URL that you were redirected to: '))
    return User(code, client_id=client_id, client_secret=client_secret, redirect_uri=redirect_uri)
Generate a user access token

:param List[str] scopes: Scopes to get
:param str client_id: Spotify Client ID
:param str client_secret: Spotify Client secret
:param str redirect_uri: Spotify redirect URI
:return: Generated access token
:rtype: User
Below is the instruction that describes the task:
### Input:
Generate a user access token

:param List[str] scopes: Scopes to get
:param str client_id: Spotify Client ID
:param str client_secret: Spotify Client secret
:param str redirect_uri: Spotify redirect URI
:return: Generated access token
:rtype: User
### Response:
def user_token(scopes, client_id=None, client_secret=None, redirect_uri=None):
    """
    Generate a user access token

    :param List[str] scopes: Scopes to get
    :param str client_id: Spotify Client ID
    :param str client_secret: Spotify Client secret
    :param str redirect_uri: Spotify redirect URI
    :return: Generated access token
    :rtype: User
    """
    webbrowser.open_new(authorize_url(client_id=client_id, redirect_uri=redirect_uri, scopes=scopes))
    code = parse_code(raw_input('Enter the URL that you were redirected to: '))
    return User(code, client_id=client_id, client_secret=client_secret, redirect_uri=redirect_uri)
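`parse_code` is not shown in this record, but extracting the authorization code from the pasted redirect URL is a standard-library one-liner. A hedged sketch of what such a helper might look like (the example URL is made up):

from urllib.parse import urlparse, parse_qs

def parse_code(redirect_url):
    # Pull the 'code' query parameter out of the redirect URL.
    query = parse_qs(urlparse(redirect_url).query)
    return query['code'][0]

print(parse_code('https://example.com/callback?code=AQB123&state=xyz'))  # AQB123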
def probably_wkt(text):
    '''Quick check to determine if the provided text looks like WKT'''
    valid = False
    valid_types = set([
        'POINT', 'LINESTRING', 'POLYGON',
        'MULTIPOINT', 'MULTILINESTRING', 'MULTIPOLYGON',
        'GEOMETRYCOLLECTION',
    ])
    matched = re.match(r'(\w+)\s*\([^)]+\)', text.strip())
    if matched:
        valid = matched.group(1).upper() in valid_types
    return valid
Quick check to determine if the provided text looks like WKT
Below is the instruction that describes the task:
### Input:
Quick check to determine if the provided text looks like WKT
### Response:
def probably_wkt(text):
    '''Quick check to determine if the provided text looks like WKT'''
    valid = False
    valid_types = set([
        'POINT', 'LINESTRING', 'POLYGON',
        'MULTIPOINT', 'MULTILINESTRING', 'MULTIPOLYGON',
        'GEOMETRYCOLLECTION',
    ])
    matched = re.match(r'(\w+)\s*\([^)]+\)', text.strip())
    if matched:
        valid = matched.group(1).upper() in valid_types
    return valid
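A few illustrative calls, assuming the function above is in scope along with `import re`. Note the regex only needs a plausible TYPE(...) prefix, so nested coordinate lists pass too:

print(probably_wkt('POINT (30 10)'))                  # True
print(probably_wkt('polygon((0 0, 1 0, 1 1, 0 0))'))  # True: the type check is case-insensitive
print(probably_wkt('CIRCLE (30 10)'))                 # False: not a recognized WKT type
print(probably_wkt('just some text'))                 # False: no TYPE(...) shape at all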
def mask_catalog(regionfile, infile, outfile, negate=False, racol='ra', deccol='dec'):
    """
    Apply a region file as a mask to a catalog, removing all the rows with
    ra/dec inside the region.
    If negate=False then remove the rows with ra/dec outside the region.

    Parameters
    ----------
    regionfile : str
        A file which can be loaded as a :class:`AegeanTools.regions.Region`.
        The catalogue will be masked according to this region.

    infile : str
        Input catalogue.

    outfile : str
        Output catalogue.

    negate : bool
        If True then pixels *outside* the region are masked.
        Default = False.

    racol, deccol : str
        The name of the columns in `table` that should be interpreted as ra and dec.
        Default = 'ra', 'dec'

    See Also
    --------
    :func:`AegeanTools.MIMAS.mask_table`
    :func:`AegeanTools.catalogs.load_table`
    """
    logging.info("Loading region from {0}".format(regionfile))
    region = Region.load(regionfile)
    logging.info("Loading catalog from {0}".format(infile))
    table = load_table(infile)
    masked_table = mask_table(region, table, negate=negate, racol=racol, deccol=deccol)
    write_table(masked_table, outfile)
    return
Apply a region file as a mask to a catalog, removing all the rows with ra/dec inside the region.
If negate=False then remove the rows with ra/dec outside the region.

Parameters
----------
regionfile : str
    A file which can be loaded as a :class:`AegeanTools.regions.Region`.
    The catalogue will be masked according to this region.

infile : str
    Input catalogue.

outfile : str
    Output catalogue.

negate : bool
    If True then pixels *outside* the region are masked.
    Default = False.

racol, deccol : str
    The name of the columns in `table` that should be interpreted as ra and dec.
    Default = 'ra', 'dec'

See Also
--------
:func:`AegeanTools.MIMAS.mask_table`
:func:`AegeanTools.catalogs.load_table`
Below is the instruction that describes the task:
### Input:
Apply a region file as a mask to a catalog, removing all the rows with ra/dec inside the region.
If negate=False then remove the rows with ra/dec outside the region.

Parameters
----------
regionfile : str
    A file which can be loaded as a :class:`AegeanTools.regions.Region`.
    The catalogue will be masked according to this region.

infile : str
    Input catalogue.

outfile : str
    Output catalogue.

negate : bool
    If True then pixels *outside* the region are masked.
    Default = False.

racol, deccol : str
    The name of the columns in `table` that should be interpreted as ra and dec.
    Default = 'ra', 'dec'

See Also
--------
:func:`AegeanTools.MIMAS.mask_table`
:func:`AegeanTools.catalogs.load_table`
### Response:
def mask_catalog(regionfile, infile, outfile, negate=False, racol='ra', deccol='dec'):
    """
    Apply a region file as a mask to a catalog, removing all the rows with
    ra/dec inside the region.
    If negate=False then remove the rows with ra/dec outside the region.

    Parameters
    ----------
    regionfile : str
        A file which can be loaded as a :class:`AegeanTools.regions.Region`.
        The catalogue will be masked according to this region.

    infile : str
        Input catalogue.

    outfile : str
        Output catalogue.

    negate : bool
        If True then pixels *outside* the region are masked.
        Default = False.

    racol, deccol : str
        The name of the columns in `table` that should be interpreted as ra and dec.
        Default = 'ra', 'dec'

    See Also
    --------
    :func:`AegeanTools.MIMAS.mask_table`
    :func:`AegeanTools.catalogs.load_table`
    """
    logging.info("Loading region from {0}".format(regionfile))
    region = Region.load(regionfile)
    logging.info("Loading catalog from {0}".format(infile))
    table = load_table(infile)
    masked_table = mask_table(region, table, negate=negate, racol=racol, deccol=deccol)
    write_table(masked_table, outfile)
    return
def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Follow Figure 1 (left) for connections."""
    x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
    return self.sublayer[1](x, self.feed_forward)
Follow Figure 1 (left) for connections.
Below is the instruction that describes the task:
### Input:
Follow Figure 1 (left) for connections.
### Response:
def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Follow Figure 1 (left) for connections."""
    x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
    return self.sublayer[1](x, self.feed_forward)
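This is the Transformer encoder-layer pattern: each `self.sublayer[i]` is a residual wrapper around a sub-block (self-attention, then feed-forward). A minimal sketch of such a wrapper in the pre-norm style; the class name and sizes are illustrative, not necessarily the source library's own:

import torch
import torch.nn as nn

class SublayerConnection(nn.Module):
    def __init__(self, size, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        # Residual connection around the sub-block, applied to the normalized input.
        return x + self.dropout(sublayer(self.norm(x)))

layer = SublayerConnection(size=16)
x = torch.randn(2, 5, 16)  # (batch, seq, model_dim)
out = layer(x, lambda h: torch.relu(h))  # any callable sub-block works
print(out.shape)  # torch.Size([2, 5, 16])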
def admin_startWS(self, host='localhost', port=8546, cors=None, apis=None):
    """https://github.com/ethereum/go-ethereum/wiki/Management-APIs#admin_startws

    :param host: Network interface to open the listener socket (optional)
    :type host: str
    :param port: Network port to open the listener socket (optional)
    :type port: int
    :param cors: Cross-origin resource sharing header to use (optional)
    :type cors: str
    :param apis: API modules to offer over this interface (optional)
    :type apis: str
    :rtype: bool
    """
    if cors is None:
        cors = []
    if apis is None:
        apis = ['eth', 'net', 'web3']
    return (yield from self.rpc_call('admin_startWS', [host, port, ','.join(cors), ','.join(apis)]))
https://github.com/ethereum/go-ethereum/wiki/Management-APIs#admin_startws

:param host: Network interface to open the listener socket (optional)
:type host: str
:param port: Network port to open the listener socket (optional)
:type port: int
:param cors: Cross-origin resource sharing header to use (optional)
:type cors: str
:param apis: API modules to offer over this interface (optional)
:type apis: str
:rtype: bool
Below is the instruction that describes the task:
### Input:
https://github.com/ethereum/go-ethereum/wiki/Management-APIs#admin_startws

:param host: Network interface to open the listener socket (optional)
:type host: str
:param port: Network port to open the listener socket (optional)
:type port: int
:param cors: Cross-origin resource sharing header to use (optional)
:type cors: str
:param apis: API modules to offer over this interface (optional)
:type apis: str
:rtype: bool
### Response:
def admin_startWS(self, host='localhost', port=8546, cors=None, apis=None):
    """https://github.com/ethereum/go-ethereum/wiki/Management-APIs#admin_startws

    :param host: Network interface to open the listener socket (optional)
    :type host: str
    :param port: Network port to open the listener socket (optional)
    :type port: int
    :param cors: Cross-origin resource sharing header to use (optional)
    :type cors: str
    :param apis: API modules to offer over this interface (optional)
    :type apis: str
    :rtype: bool
    """
    if cors is None:
        cors = []
    if apis is None:
        apis = ['eth', 'net', 'web3']
    return (yield from self.rpc_call('admin_startWS', [host, port, ','.join(cors), ','.join(apis)]))
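On the wire this is a standard JSON-RPC call; a sketch of the request body the wrapper ends up sending with the defaults above (the id value is arbitrary):

import json

body = {
    'jsonrpc': '2.0',
    'method': 'admin_startWS',
    'params': ['localhost', 8546, '', 'eth,net,web3'],  # cors and apis are comma-joined strings
    'id': 1,
}
print(json.dumps(body))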
def compute_ratio(x):
    """
    Compute the proportion of each class of data
    """
    sum_ = sum(x)
    ratios = []
    for i in x:
        ratio = i / sum_
        ratios.append(ratio)
    return ratios
Compute the proportion of each class of data
Below is the instruction that describes the task:
### Input:
Compute the proportion of each class of data
### Response:
def compute_ratio(x):
    """
    Compute the proportion of each class of data
    """
    sum_ = sum(x)
    ratios = []
    for i in x:
        ratio = i / sum_
        ratios.append(ratio)
    return ratios
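The ratios always sum to 1, and the loop is equivalent to a one-line comprehension:

def compute_ratio(x):
    sum_ = sum(x)
    return [i / sum_ for i in x]

ratios = compute_ratio([2, 3, 5])
print(ratios)       # [0.2, 0.3, 0.5]
print(sum(ratios))  # 1.0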
def _parse_game_date_and_location(self, boxscore):
    """
    Retrieve the game's date and location.

    The games' meta information, such as date, location, attendance, and
    duration, follows a complex parsing scheme that changes based on the
    layout of the page. The information should be able to be parsed and
    set regardless of the order and how much information is included. To
    do this, the meta information should be iterated through line-by-line
    and fields should be determined by the values that are found in each
    line.

    Parameters
    ----------
    boxscore : PyQuery object
        A PyQuery object containing all of the HTML data from the boxscore.
    """
    scheme = BOXSCORE_SCHEME["game_info"]
    items = [i.text() for i in boxscore(scheme).items()]
    game_info = items[0].split('\n')
    attendance = None
    duration = None
    stadium = None
    time = None
    date = game_info[0]
    for line in game_info:
        if 'Attendance' in line:
            attendance = line.replace('Attendance: ', '').replace(',', '')
        if 'Time of Game' in line:
            duration = line.replace('Time of Game: ', '')
        if 'Stadium' in line:
            stadium = line.replace('Stadium: ', '')
        if 'Start Time' in line:
            time = line.replace('Start Time: ', '')
    setattr(self, '_attendance', attendance)
    setattr(self, '_date', date)
    setattr(self, '_duration', duration)
    setattr(self, '_stadium', stadium)
    setattr(self, '_time', time)
Retrieve the game's date and location.

The games' meta information, such as date, location, attendance, and duration, follows a complex parsing scheme that changes based on the layout of the page. The information should be able to be parsed and set regardless of the order and how much information is included. To do this, the meta information should be iterated through line-by-line and fields should be determined by the values that are found in each line.

Parameters
----------
boxscore : PyQuery object
    A PyQuery object containing all of the HTML data from the boxscore.
Below is the instruction that describes the task:
### Input:
Retrieve the game's date and location.

The games' meta information, such as date, location, attendance, and
duration, follow a complex parsing scheme that changes based on the layout
of the page. The information should be able to be parsed and set
regardless of the order and how much information is included. To do this,
the meta information should be iterated through line-by-line and fields
should be determined by the values that are found in each line.

Parameters
----------
boxscore : PyQuery object
    A PyQuery object containing all of the HTML data from the boxscore.
### Response:
def _parse_game_date_and_location(self, boxscore):
    """
    Retrieve the game's date and location.

    The games' meta information, such as date, location, attendance, and
    duration, follow a complex parsing scheme that changes based on the
    layout of the page. The information should be able to be parsed and set
    regardless of the order and how much information is included. To do
    this, the meta information should be iterated through line-by-line and
    fields should be determined by the values that are found in each line.

    Parameters
    ----------
    boxscore : PyQuery object
        A PyQuery object containing all of the HTML data from the boxscore.
    """
    scheme = BOXSCORE_SCHEME["game_info"]
    items = [i.text() for i in boxscore(scheme).items()]
    game_info = items[0].split('\n')
    attendance = None
    date = None
    duration = None
    stadium = None
    time = None
    date = game_info[0]
    for line in game_info:
        if 'Attendance' in line:
            attendance = line.replace('Attendance: ', '').replace(',', '')
        if 'Time of Game' in line:
            duration = line.replace('Time of Game: ', '')
        if 'Stadium' in line:
            stadium = line.replace('Stadium: ', '')
        if 'Start Time' in line:
            time = line.replace('Start Time: ', '')
    setattr(self, '_attendance', attendance)
    setattr(self, '_date', date)
    setattr(self, '_duration', duration)
    setattr(self, '_stadium', stadium)
    setattr(self, '_time', time)
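The line-by-line scheme above can be exercised standalone; a minimal sketch with a hand-written game_info list instead of a PyQuery boxscore (the sample strings are invented for illustration):

game_info = [
    'Thursday Nov 5, 2015',
    'Start Time: 8:25pm',
    'Stadium: Paul Brown Stadium',
    'Attendance: 64,514',
    'Time of Game: 3:09',
]
date = game_info[0]
attendance = duration = stadium = time = None
for line in game_info:
    if 'Attendance' in line:
        attendance = line.replace('Attendance: ', '').replace(',', '')
    if 'Time of Game' in line:
        duration = line.replace('Time of Game: ', '')
    if 'Stadium' in line:
        stadium = line.replace('Stadium: ', '')
    if 'Start Time' in line:
        time = line.replace('Start Time: ', '')
print(date, attendance, duration, stadium, time)
# Thursday Nov 5, 2015 64514 3:09 Paul Brown Stadium 8:25pm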
def _parse_pages(self, unicode=False):
    """Auxiliary function to parse and format page range of a document."""
    if self.pageRange:
        pages = 'pp. {}'.format(self.pageRange)
    elif self.startingPage:
        pages = 'pp. {}-{}'.format(self.startingPage, self.endingPage)
    else:
        pages = '(no pages found)'
    if unicode:
        pages = u'{}'.format(pages)
    return pages
Auxiliary function to parse and format page range of a document.
Below is the instruction that describes the task:
### Input:
Auxiliary function to parse and format page range of a document.
### Response:
def _parse_pages(self, unicode=False):
    """Auxiliary function to parse and format page range of a document."""
    if self.pageRange:
        pages = 'pp. {}'.format(self.pageRange)
    elif self.startingPage:
        pages = 'pp. {}-{}'.format(self.startingPage, self.endingPage)
    else:
        pages = '(no pages found)'
    if unicode:
        pages = u'{}'.format(pages)
    return pages
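A small sketch of the three branches, attaching the function above to a hypothetical stand-in document class (Doc and its constructor are assumptions for illustration, not the library's real document type):

class Doc:
    def __init__(self, pageRange=None, startingPage=None, endingPage=None):
        self.pageRange = pageRange
        self.startingPage = startingPage
        self.endingPage = endingPage

Doc._parse_pages = _parse_pages  # reuse the function above as a method

print(Doc(pageRange='101-110')._parse_pages())                   # pp. 101-110
print(Doc(startingPage='101', endingPage='110')._parse_pages())  # pp. 101-110
print(Doc()._parse_pages())                                      # (no pages found)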
def command(self, cluster_id, command, *args):
    """Call a ShardedCluster method."""
    cluster = self._storage[cluster_id]
    try:
        return getattr(cluster, command)(*args)
    except AttributeError:
        raise ValueError("Cannot issue the command %r to ShardedCluster %s"
                         % (command, cluster_id))
Call a ShardedCluster method.
Below is the instruction that describes the task:
### Input:
Call a ShardedCluster method.
### Response:
def command(self, cluster_id, command, *args):
    """Call a ShardedCluster method."""
    cluster = self._storage[cluster_id]
    try:
        return getattr(cluster, command)(*args)
    except AttributeError:
        raise ValueError("Cannot issue the command %r to ShardedCluster %s"
                         % (command, cluster_id))
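The getattr-based dispatch is easy to show in isolation; ToyCluster and the storage dict below are illustrative stand-ins for the real ShardedCluster registry:

class ToyCluster:
    def start(self):
        return 'started'

storage = {'c1': ToyCluster()}

def dispatch(cluster_id, command, *args):
    # Look up the cluster, then forward the named method call to it.
    cluster = storage[cluster_id]
    try:
        return getattr(cluster, command)(*args)
    except AttributeError:
        raise ValueError("Cannot issue the command %r to ShardedCluster %s"
                         % (command, cluster_id))

print(dispatch('c1', 'start'))  # started
# dispatch('c1', 'stop') would raise ValueError: no such method on the cluster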
def getParticleInfos(self, swarmId=None, genIdx=None, completed=None,
                     matured=None, lastDescendent=False):
    """Return a list of particleStates for all particles we know about in
    the given swarm, their model Ids, and metric results.

    Parameters:
    ---------------------------------------------------------------------
    swarmId: A string representation of the sorted list of encoders in this
        swarm. For example '__address_encoder.__gym_encoder'
    genIdx: If not None, only return particles at this specific generation
        index.
    completed: If not None, only return particles of the given state (either
        completed if 'completed' is True, or running if 'completed' is false
    matured: If not None, only return particles of the given state (either
        matured if 'matured' is True, or not matured if 'matured' is false.
        Note that any model which has completed is also considered matured.
    lastDescendent: If True, only return particles that are the last
        descendent, that is, the highest generation index for a given
        particle Id

    retval: (particleStates, modelIds, errScores, completed, matured)
        particleStates: list of particleStates
        modelIds: list of modelIds
        errScores: list of errScores, numpy.inf is plugged in if we don't
            have a result yet
        completed: list of completed booleans
        matured: list of matured booleans
    """
    # The indexes of all the models in this swarm. This list excludes hidden
    # (orphaned) models.
    if swarmId is not None:
        entryIdxs = self._swarmIdToIndexes.get(swarmId, [])
    else:
        entryIdxs = range(len(self._allResults))
    if len(entryIdxs) == 0:
        return ([], [], [], [], [])

    # Get the particles of interest
    particleStates = []
    modelIds = []
    errScores = []
    completedFlags = []
    maturedFlags = []
    for idx in entryIdxs:
        entry = self._allResults[idx]

        # If this entry is hidden (i.e. it was an orphaned model), it should
        # not be in this list
        if swarmId is not None:
            assert (not entry['hidden'])

        # Get info on this model
        modelParams = entry['modelParams']
        isCompleted = entry['completed']
        isMatured = entry['matured']
        particleState = modelParams['particleState']
        particleGenIdx = particleState['genIdx']
        particleId = particleState['id']

        if genIdx is not None and particleGenIdx != genIdx:
            continue
        if completed is not None and (completed != isCompleted):
            continue
        if matured is not None and (matured != isMatured):
            continue
        if lastDescendent \
                and (self._particleLatestGenIdx[particleId] != particleGenIdx):
            continue

        # Incorporate into return values
        particleStates.append(particleState)
        modelIds.append(entry['modelID'])
        errScores.append(entry['errScore'])
        completedFlags.append(isCompleted)
        maturedFlags.append(isMatured)

    return (particleStates, modelIds, errScores, completedFlags, maturedFlags)
Return a list of particleStates for all particles we know about in
the given swarm, their model Ids, and metric results.

Parameters:
---------------------------------------------------------------------
swarmId: A string representation of the sorted list of encoders in this
    swarm. For example '__address_encoder.__gym_encoder'
genIdx: If not None, only return particles at this specific generation
    index.
completed: If not None, only return particles of the given state (either
    completed if 'completed' is True, or running if 'completed' is false
matured: If not None, only return particles of the given state (either
    matured if 'matured' is True, or not matured if 'matured' is false.
    Note that any model which has completed is also considered matured.
lastDescendent: If True, only return particles that are the last
    descendent, that is, the highest generation index for a given
    particle Id

retval: (particleStates, modelIds, errScores, completed, matured)
    particleStates: list of particleStates
    modelIds: list of modelIds
    errScores: list of errScores, numpy.inf is plugged in if we don't
        have a result yet
    completed: list of completed booleans
    matured: list of matured booleans
Below is the instruction that describes the task:
### Input:
Return a list of particleStates for all particles we know about in
the given swarm, their model Ids, and metric results.

Parameters:
---------------------------------------------------------------------
swarmId: A string representation of the sorted list of encoders in this
    swarm. For example '__address_encoder.__gym_encoder'
genIdx: If not None, only return particles at this specific generation
    index.
completed: If not None, only return particles of the given state (either
    completed if 'completed' is True, or running if 'completed' is false
matured: If not None, only return particles of the given state (either
    matured if 'matured' is True, or not matured if 'matured' is false.
    Note that any model which has completed is also considered matured.
lastDescendent: If True, only return particles that are the last
    descendent, that is, the highest generation index for a given
    particle Id

retval: (particleStates, modelIds, errScores, completed, matured)
    particleStates: list of particleStates
    modelIds: list of modelIds
    errScores: list of errScores, numpy.inf is plugged in if we don't
        have a result yet
    completed: list of completed booleans
    matured: list of matured booleans
### Response:
def getParticleInfos(self, swarmId=None, genIdx=None, completed=None,
                     matured=None, lastDescendent=False):
    """Return a list of particleStates for all particles we know about in
    the given swarm, their model Ids, and metric results.

    Parameters:
    ---------------------------------------------------------------------
    swarmId: A string representation of the sorted list of encoders in this
        swarm. For example '__address_encoder.__gym_encoder'
    genIdx: If not None, only return particles at this specific generation
        index.
    completed: If not None, only return particles of the given state (either
        completed if 'completed' is True, or running if 'completed' is false
    matured: If not None, only return particles of the given state (either
        matured if 'matured' is True, or not matured if 'matured' is false.
        Note that any model which has completed is also considered matured.
    lastDescendent: If True, only return particles that are the last
        descendent, that is, the highest generation index for a given
        particle Id

    retval: (particleStates, modelIds, errScores, completed, matured)
        particleStates: list of particleStates
        modelIds: list of modelIds
        errScores: list of errScores, numpy.inf is plugged in if we don't
            have a result yet
        completed: list of completed booleans
        matured: list of matured booleans
    """
    # The indexes of all the models in this swarm. This list excludes hidden
    # (orphaned) models.
    if swarmId is not None:
        entryIdxs = self._swarmIdToIndexes.get(swarmId, [])
    else:
        entryIdxs = range(len(self._allResults))
    if len(entryIdxs) == 0:
        return ([], [], [], [], [])

    # Get the particles of interest
    particleStates = []
    modelIds = []
    errScores = []
    completedFlags = []
    maturedFlags = []
    for idx in entryIdxs:
        entry = self._allResults[idx]

        # If this entry is hidden (i.e. it was an orphaned model), it should
        # not be in this list
        if swarmId is not None:
            assert (not entry['hidden'])

        # Get info on this model
        modelParams = entry['modelParams']
        isCompleted = entry['completed']
        isMatured = entry['matured']
        particleState = modelParams['particleState']
        particleGenIdx = particleState['genIdx']
        particleId = particleState['id']

        if genIdx is not None and particleGenIdx != genIdx:
            continue
        if completed is not None and (completed != isCompleted):
            continue
        if matured is not None and (matured != isMatured):
            continue
        if lastDescendent \
                and (self._particleLatestGenIdx[particleId] != particleGenIdx):
            continue

        # Incorporate into return values
        particleStates.append(particleState)
        modelIds.append(entry['modelID'])
        errScores.append(entry['errScore'])
        completedFlags.append(isCompleted)
        maturedFlags.append(isMatured)

    return (particleStates, modelIds, errScores, completedFlags, maturedFlags)
def update(self, data_set):
    """
    Refresh the time of all specified elements in the supplied data set.
    """
    now = time.time()
    for d in data_set:
        self.timed_data[d] = now
    self._expire_data()
Refresh the time of all specified elements in the supplied data set.
Below is the instruction that describes the task:
### Input:
Refresh the time of all specified elements in the supplied data set.
### Response:
def update(self, data_set):
    """
    Refresh the time of all specified elements in the supplied data set.
    """
    now = time.time()
    for d in data_set:
        self.timed_data[d] = now
    self._expire_data()
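The _expire_data helper is not shown above; a self-contained sketch of the refresh-then-expire pattern, with a simple TTL-based expiry (the TimedSet class name and the 30-second TTL are assumptions for illustration):

import time

class TimedSet:
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.timed_data = {}

    def update(self, data_set):
        # Refresh the timestamp of every element, then drop stale ones.
        now = time.time()
        for d in data_set:
            self.timed_data[d] = now
        self._expire_data()

    def _expire_data(self):
        cutoff = time.time() - self.ttl
        stale = [d for d, t in self.timed_data.items() if t < cutoff]
        for d in stale:
            del self.timed_data[d]

ts = TimedSet()
ts.update({'a', 'b'})
print(sorted(ts.timed_data))  # ['a', 'b']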
def _get_esxi_proxy_details():
    '''
    Returns the running esxi's proxy details
    '''
    det = __proxy__['esxi.get_details']()
    host = det.get('host')
    if det.get('vcenter'):
        host = det['vcenter']
    esxi_hosts = None
    if det.get('esxi_host'):
        esxi_hosts = [det['esxi_host']]
    return host, det.get('username'), det.get('password'), \
        det.get('protocol'), det.get('port'), det.get('mechanism'), \
        det.get('principal'), det.get('domain'), esxi_hosts
Returns the running esxi's proxy details
Below is the instruction that describes the task:
### Input:
Returns the running esxi's proxy details
### Response:
def _get_esxi_proxy_details():
    '''
    Returns the running esxi's proxy details
    '''
    det = __proxy__['esxi.get_details']()
    host = det.get('host')
    if det.get('vcenter'):
        host = det['vcenter']
    esxi_hosts = None
    if det.get('esxi_host'):
        esxi_hosts = [det['esxi_host']]
    return host, det.get('username'), det.get('password'), \
        det.get('protocol'), det.get('port'), det.get('mechanism'), \
        det.get('principal'), det.get('domain'), esxi_hosts
def _construct_body_s3_dict(self):
    """Constructs the RestApi's `BodyS3Location property`_, from the SAM Api's DefinitionUri property.

    :returns: a BodyS3Location dict, containing the S3 Bucket, Key, and Version of the Swagger definition
    :rtype: dict
    """
    if isinstance(self.definition_uri, dict):
        if not self.definition_uri.get("Bucket", None) or not self.definition_uri.get("Key", None):
            # DefinitionUri is a dictionary but does not contain Bucket or Key property
            raise InvalidResourceException(self.logical_id,
                                           "'DefinitionUri' requires Bucket and Key properties to be specified")
        s3_pointer = self.definition_uri
    else:
        # DefinitionUri is a string
        s3_pointer = parse_s3_uri(self.definition_uri)
        if s3_pointer is None:
            raise InvalidResourceException(self.logical_id,
                                           '\'DefinitionUri\' is not a valid S3 Uri of the form '
                                           '"s3://bucket/key" with optional versionId query parameter.')

    body_s3 = {
        'Bucket': s3_pointer['Bucket'],
        'Key': s3_pointer['Key']
    }
    if 'Version' in s3_pointer:
        body_s3['Version'] = s3_pointer['Version']
    return body_s3
Constructs the RestApi's `BodyS3Location property`_, from the SAM Api's DefinitionUri property.

:returns: a BodyS3Location dict, containing the S3 Bucket, Key, and Version of the Swagger definition
:rtype: dict
Below is the instruction that describes the task:
### Input:
Constructs the RestApi's `BodyS3Location property`_, from the SAM Api's DefinitionUri property.

:returns: a BodyS3Location dict, containing the S3 Bucket, Key, and Version of the Swagger definition
:rtype: dict
### Response:
def _construct_body_s3_dict(self):
    """Constructs the RestApi's `BodyS3Location property`_, from the SAM Api's DefinitionUri property.

    :returns: a BodyS3Location dict, containing the S3 Bucket, Key, and Version of the Swagger definition
    :rtype: dict
    """
    if isinstance(self.definition_uri, dict):
        if not self.definition_uri.get("Bucket", None) or not self.definition_uri.get("Key", None):
            # DefinitionUri is a dictionary but does not contain Bucket or Key property
            raise InvalidResourceException(self.logical_id,
                                           "'DefinitionUri' requires Bucket and Key properties to be specified")
        s3_pointer = self.definition_uri
    else:
        # DefinitionUri is a string
        s3_pointer = parse_s3_uri(self.definition_uri)
        if s3_pointer is None:
            raise InvalidResourceException(self.logical_id,
                                           '\'DefinitionUri\' is not a valid S3 Uri of the form '
                                           '"s3://bucket/key" with optional versionId query parameter.')

    body_s3 = {
        'Bucket': s3_pointer['Bucket'],
        'Key': s3_pointer['Key']
    }
    if 'Version' in s3_pointer:
        body_s3['Version'] = s3_pointer['Version']
    return body_s3
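parse_s3_uri is external to the snippet above; a plausible minimal version of it (an assumption for illustration, not the SAM translator's actual implementation) that satisfies the contract the method relies on:

from urllib.parse import urlparse, parse_qs

def parse_s3_uri(uri):
    # Returns {'Bucket', 'Key'[, 'Version']} for s3://bucket/key?versionId=...
    # or None when the string is not a valid s3:// URI.
    if not isinstance(uri, str):
        return None
    parsed = urlparse(uri)
    if parsed.scheme != 's3' or not parsed.netloc or not parsed.path.lstrip('/'):
        return None
    result = {'Bucket': parsed.netloc, 'Key': parsed.path.lstrip('/')}
    version = parse_qs(parsed.query).get('versionId')
    if version:
        result['Version'] = version[0]
    return result

print(parse_s3_uri('s3://my-bucket/swagger.yaml?versionId=abc123'))
# {'Bucket': 'my-bucket', 'Key': 'swagger.yaml', 'Version': 'abc123'}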
def keep_types_s(s, types):
    """
    Keep the given types from a string

    Same as :meth:`keep_types` but does not use the :attr:`params`
    dictionary

    Parameters
    ----------
    s: str
        The string of the returns like section
    types: list of str
        The type identifiers to keep

    Returns
    -------
    str
        The modified string `s` with only the descriptions of `types`
    """
    patt = '|'.join('(?<=\n)' + s + '\n(?s).+?\n(?=\S+|$)' for s in types)
    return ''.join(re.findall(patt, '\n' + s.strip() + '\n')).rstrip()
Keep the given types from a string

Same as :meth:`keep_types` but does not use the :attr:`params`
dictionary

Parameters
----------
s: str
    The string of the returns like section
types: list of str
    The type identifiers to keep

Returns
-------
str
    The modified string `s` with only the descriptions of `types`
Below is the instruction that describes the task:
### Input:
Keep the given types from a string

Same as :meth:`keep_types` but does not use the :attr:`params`
dictionary

Parameters
----------
s: str
    The string of the returns like section
types: list of str
    The type identifiers to keep

Returns
-------
str
    The modified string `s` with only the descriptions of `types`
### Response:
def keep_types_s(s, types):
    """
    Keep the given types from a string

    Same as :meth:`keep_types` but does not use the :attr:`params`
    dictionary

    Parameters
    ----------
    s: str
        The string of the returns like section
    types: list of str
        The type identifiers to keep

    Returns
    -------
    str
        The modified string `s` with only the descriptions of `types`
    """
    patt = '|'.join('(?<=\n)' + s + '\n(?s).+?\n(?=\S+|$)' for s in types)
    return ''.join(re.findall(patt, '\n' + s.strip() + '\n')).rstrip()
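A short demonstration of keep_types_s on a small returns-like section (the sample text is invented); the regex keeps each named type line plus its indented description:

import re  # keep_types_s above assumes re is already imported at module level

section = ("int\n"
           "    The first value\n"
           "str\n"
           "    The second value\n"
           "float\n"
           "    The third value")
print(keep_types_s(section, ['int', 'float']))
# int
#     The first value
# float
#     The third value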
def safe_pdist(arr, *args, **kwargs):
    """
    Kwargs:
        metric = ut.absdiff

    SeeAlso:
        scipy.spatial.distance.pdist

    TODO: move to vtool
    """
    if arr is None or len(arr) < 2:
        return None
    else:
        import vtool as vt
        arr_ = vt.atleast_nd(arr, 2)
        return spdist.pdist(arr_, *args, **kwargs)
Kwargs:
    metric = ut.absdiff

SeeAlso:
    scipy.spatial.distance.pdist

TODO: move to vtool
Below is the instruction that describes the task:
### Input:
Kwargs:
    metric = ut.absdiff

SeeAlso:
    scipy.spatial.distance.pdist

TODO: move to vtool
### Response:
def safe_pdist(arr, *args, **kwargs):
    """
    Kwargs:
        metric = ut.absdiff

    SeeAlso:
        scipy.spatial.distance.pdist

    TODO: move to vtool
    """
    if arr is None or len(arr) < 2:
        return None
    else:
        import vtool as vt
        arr_ = vt.atleast_nd(arr, 2)
        return spdist.pdist(arr_, *args, **kwargs)
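A usage sketch of the None-guard behavior (the first two calls need only numpy; the third additionally requires scipy and the vtool package, which the function imports lazily):

import numpy as np
import scipy.spatial.distance as spdist  # the module the function body refers to as spdist

print(safe_pdist(None))                      # None
print(safe_pdist(np.array([[1.0]])))         # None (fewer than 2 rows)
print(safe_pdist(np.array([[0.0], [3.0]])))  # [3.] (also needs vtool installed)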
def on_setup_ssh(self, b):
    """ATTENTION: modifying the order of operations in this function can lead to unexpected problems"""
    with self._setup_ssh_out:
        clear_output()
        self._ssh_keygen()

        # temporary passwords
        password = self.__password
        proxy_password = self.__proxy_password

        # step 1: if hostname is not provided - do not do anything
        if self.hostname is None:  # check hostname
            print("Please specify the computer hostname")
            return

        # step 2: check if password-free access was enabled earlier
        if self.can_login():
            print("Password-free access is already enabled")
            # it can still happen that password-free access is enabled
            # but host is not present in the config file - fixing this
            if not self.is_in_config():
                self._write_ssh_config()
                # we do not use proxy here, because if computer
                # can be accessed without any info in the config - proxy is not needed.
                self.setup_counter += 1  # only if config file has changed - increase setup_counter
            return

        # step 3: if can't login already, check whether all required information is provided
        if self.username is None:  # check username
            print("Please enter your ssh username")
            return
        if len(password.strip()) == 0:  # check password
            print("Please enter your ssh password")
            return

        # step 4: get the right commands to access the proxy server (if provided)
        success, proxycmd = self._configure_proxy(password, proxy_password)
        if not success:
            return

        # step 5: make host known by ssh on the proxy server
        if not self.is_host_known():
            self._make_host_known(self.hostname, ['ssh'] + [proxycmd] if proxycmd else [])

        # step 6: sending public key to the main host
        if not self._send_pubkey(self.hostname, self.username, password, proxycmd):
            print("Could not send public key to {}".format(self.hostname))
            return

        # step 7: modify the ssh config file if necessary
        if not self.is_in_config():
            self._write_ssh_config(proxycmd=proxycmd)
        # TODO: add a check if new config is different from the current one. If so
        # inform the user about it.

        # step 8: final check
        if self.can_login():
            self.setup_counter += 1
            print("Automatic ssh setup successful :-)")
            return
        else:
            print("Automatic ssh setup failed, sorry :-(")
            return
ATTENTION: modifying the order of operations in this function can lead to unexpected problems
Below is the instruction that describes the task:
### Input:
ATTENTION: modifying the order of operations in this function can lead to unexpected problems
### Response:
def on_setup_ssh(self, b):
    """ATTENTION: modifying the order of operations in this function can lead to unexpected problems"""
    with self._setup_ssh_out:
        clear_output()
        self._ssh_keygen()

        # temporary passwords
        password = self.__password
        proxy_password = self.__proxy_password

        # step 1: if hostname is not provided - do not do anything
        if self.hostname is None:  # check hostname
            print("Please specify the computer hostname")
            return

        # step 2: check if password-free access was enabled earlier
        if self.can_login():
            print("Password-free access is already enabled")
            # it can still happen that password-free access is enabled
            # but host is not present in the config file - fixing this
            if not self.is_in_config():
                self._write_ssh_config()
                # we do not use proxy here, because if computer
                # can be accessed without any info in the config - proxy is not needed.
                self.setup_counter += 1  # only if config file has changed - increase setup_counter
            return

        # step 3: if can't login already, check whether all required information is provided
        if self.username is None:  # check username
            print("Please enter your ssh username")
            return
        if len(password.strip()) == 0:  # check password
            print("Please enter your ssh password")
            return

        # step 4: get the right commands to access the proxy server (if provided)
        success, proxycmd = self._configure_proxy(password, proxy_password)
        if not success:
            return

        # step 5: make host known by ssh on the proxy server
        if not self.is_host_known():
            self._make_host_known(self.hostname, ['ssh'] + [proxycmd] if proxycmd else [])

        # step 6: sending public key to the main host
        if not self._send_pubkey(self.hostname, self.username, password, proxycmd):
            print("Could not send public key to {}".format(self.hostname))
            return

        # step 7: modify the ssh config file if necessary
        if not self.is_in_config():
            self._write_ssh_config(proxycmd=proxycmd)
        # TODO: add a check if new config is different from the current one. If so
        # inform the user about it.

        # step 8: final check
        if self.can_login():
            self.setup_counter += 1
            print("Automatic ssh setup successful :-)")
            return
        else:
            print("Automatic ssh setup failed, sorry :-(")
            return
def _Open(self, path_spec, mode='rb'):
    """Opens the file system defined by path specification.

    Args:
      path_spec (PathSpec): a path specification.
      mode (Optional[str]): file access mode. The default is 'rb' which
          represents read-only binary.

    Raises:
      AccessError: if the access to open the file was denied.
      IOError: if the file system could not be opened.
      PathSpecError: if the path specification is incorrect.
      ValueError: if the path specification is invalid.
    """
    if not path_spec.HasParent():
        raise errors.PathSpecError(
            'Unsupported path specification without parent.')

    range_offset = getattr(path_spec, 'range_offset', None)
    if range_offset is None:
        raise errors.PathSpecError(
            'Unsupported path specification without range offset.')

    range_size = getattr(path_spec, 'range_size', None)
    if range_size is None:
        raise errors.PathSpecError(
            'Unsupported path specification without range size.')

    self._range_offset = range_offset
    self._range_size = range_size
Opens the file system defined by path specification.

Args:
  path_spec (PathSpec): a path specification.
  mode (Optional[str]): file access mode. The default is 'rb' which
      represents read-only binary.

Raises:
  AccessError: if the access to open the file was denied.
  IOError: if the file system could not be opened.
  PathSpecError: if the path specification is incorrect.
  ValueError: if the path specification is invalid.
Below is the instruction that describes the task:
### Input:
Opens the file system defined by path specification.

Args:
  path_spec (PathSpec): a path specification.
  mode (Optional[str]): file access mode. The default is 'rb' which
      represents read-only binary.

Raises:
  AccessError: if the access to open the file was denied.
  IOError: if the file system could not be opened.
  PathSpecError: if the path specification is incorrect.
  ValueError: if the path specification is invalid.
### Response:
def _Open(self, path_spec, mode='rb'):
    """Opens the file system defined by path specification.

    Args:
      path_spec (PathSpec): a path specification.
      mode (Optional[str]): file access mode. The default is 'rb' which
          represents read-only binary.

    Raises:
      AccessError: if the access to open the file was denied.
      IOError: if the file system could not be opened.
      PathSpecError: if the path specification is incorrect.
      ValueError: if the path specification is invalid.
    """
    if not path_spec.HasParent():
        raise errors.PathSpecError(
            'Unsupported path specification without parent.')

    range_offset = getattr(path_spec, 'range_offset', None)
    if range_offset is None:
        raise errors.PathSpecError(
            'Unsupported path specification without range offset.')

    range_size = getattr(path_spec, 'range_size', None)
    if range_size is None:
        raise errors.PathSpecError(
            'Unsupported path specification without range size.')

    self._range_offset = range_offset
    self._range_size = range_size
def xml_to_namespace(xmlstr):
    '''Converts xml response to service bus namespace

    The xml format for namespace:
    <entry>
      <id>uuid:00000000-0000-0000-0000-000000000000;id=0000000</id>
      <title type="text">myunittests</title>
      <updated>2012-08-22T16:48:10Z</updated>
      <content type="application/xml">
        <NamespaceDescription
            xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect"
            xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <Name>myunittests</Name>
          <Region>West US</Region>
          <DefaultKey>0000000000000000000000000000000000000000000=</DefaultKey>
          <Status>Active</Status>
          <CreatedAt>2012-08-22T16:48:10.217Z</CreatedAt>
          <AcsManagementEndpoint>https://myunittests-sb.accesscontrol.windows.net/</AcsManagementEndpoint>
          <ServiceBusEndpoint>https://myunittests.servicebus.windows.net/</ServiceBusEndpoint>
          <ConnectionString>Endpoint=sb://myunittests.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=0000000000000000000000000000000000000000000=</ConnectionString>
          <SubscriptionId>00000000000000000000000000000000</SubscriptionId>
          <Enabled>true</Enabled>
        </NamespaceDescription>
      </content>
    </entry>
    '''
    xmldoc = minidom.parseString(xmlstr)
    namespace = ServiceBusNamespace()

    mappings = (
        ('Name', 'name', None),
        ('Region', 'region', None),
        ('DefaultKey', 'default_key', None),
        ('Status', 'status', None),
        ('CreatedAt', 'created_at', None),
        ('AcsManagementEndpoint', 'acs_management_endpoint', None),
        ('ServiceBusEndpoint', 'servicebus_endpoint', None),
        ('ConnectionString', 'connection_string', None),
        ('SubscriptionId', 'subscription_id', None),
        ('Enabled', 'enabled', _parse_bool),
    )

    for desc in _MinidomXmlToObject.get_children_from_path(
            xmldoc, 'entry', 'content', 'NamespaceDescription'):
        for xml_name, field_name, conversion_func in mappings:
            node_value = _MinidomXmlToObject.get_first_child_node_value(desc, xml_name)
            if node_value is not None:
                if conversion_func is not None:
                    node_value = conversion_func(node_value)
                setattr(namespace, field_name, node_value)

    return namespace
Converts xml response to service bus namespace

The xml format for namespace:
<entry>
  <id>uuid:00000000-0000-0000-0000-000000000000;id=0000000</id>
  <title type="text">myunittests</title>
  <updated>2012-08-22T16:48:10Z</updated>
  <content type="application/xml">
    <NamespaceDescription
        xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect"
        xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
      <Name>myunittests</Name>
      <Region>West US</Region>
      <DefaultKey>0000000000000000000000000000000000000000000=</DefaultKey>
      <Status>Active</Status>
      <CreatedAt>2012-08-22T16:48:10.217Z</CreatedAt>
      <AcsManagementEndpoint>https://myunittests-sb.accesscontrol.windows.net/</AcsManagementEndpoint>
      <ServiceBusEndpoint>https://myunittests.servicebus.windows.net/</ServiceBusEndpoint>
      <ConnectionString>Endpoint=sb://myunittests.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=0000000000000000000000000000000000000000000=</ConnectionString>
      <SubscriptionId>00000000000000000000000000000000</SubscriptionId>
      <Enabled>true</Enabled>
    </NamespaceDescription>
  </content>
</entry>
Below is the instruction that describes the task:
### Input:
Converts xml response to service bus namespace

The xml format for namespace:
<entry>
  <id>uuid:00000000-0000-0000-0000-000000000000;id=0000000</id>
  <title type="text">myunittests</title>
  <updated>2012-08-22T16:48:10Z</updated>
  <content type="application/xml">
    <NamespaceDescription
        xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect"
        xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
      <Name>myunittests</Name>
      <Region>West US</Region>
      <DefaultKey>0000000000000000000000000000000000000000000=</DefaultKey>
      <Status>Active</Status>
      <CreatedAt>2012-08-22T16:48:10.217Z</CreatedAt>
      <AcsManagementEndpoint>https://myunittests-sb.accesscontrol.windows.net/</AcsManagementEndpoint>
      <ServiceBusEndpoint>https://myunittests.servicebus.windows.net/</ServiceBusEndpoint>
      <ConnectionString>Endpoint=sb://myunittests.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=0000000000000000000000000000000000000000000=</ConnectionString>
      <SubscriptionId>00000000000000000000000000000000</SubscriptionId>
      <Enabled>true</Enabled>
    </NamespaceDescription>
  </content>
</entry>
### Response:
def xml_to_namespace(xmlstr):
    '''Converts xml response to service bus namespace

    The xml format for namespace:
    <entry>
      <id>uuid:00000000-0000-0000-0000-000000000000;id=0000000</id>
      <title type="text">myunittests</title>
      <updated>2012-08-22T16:48:10Z</updated>
      <content type="application/xml">
        <NamespaceDescription
            xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect"
            xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <Name>myunittests</Name>
          <Region>West US</Region>
          <DefaultKey>0000000000000000000000000000000000000000000=</DefaultKey>
          <Status>Active</Status>
          <CreatedAt>2012-08-22T16:48:10.217Z</CreatedAt>
          <AcsManagementEndpoint>https://myunittests-sb.accesscontrol.windows.net/</AcsManagementEndpoint>
          <ServiceBusEndpoint>https://myunittests.servicebus.windows.net/</ServiceBusEndpoint>
          <ConnectionString>Endpoint=sb://myunittests.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=0000000000000000000000000000000000000000000=</ConnectionString>
          <SubscriptionId>00000000000000000000000000000000</SubscriptionId>
          <Enabled>true</Enabled>
        </NamespaceDescription>
      </content>
    </entry>
    '''
    xmldoc = minidom.parseString(xmlstr)
    namespace = ServiceBusNamespace()

    mappings = (
        ('Name', 'name', None),
        ('Region', 'region', None),
        ('DefaultKey', 'default_key', None),
        ('Status', 'status', None),
        ('CreatedAt', 'created_at', None),
        ('AcsManagementEndpoint', 'acs_management_endpoint', None),
        ('ServiceBusEndpoint', 'servicebus_endpoint', None),
        ('ConnectionString', 'connection_string', None),
        ('SubscriptionId', 'subscription_id', None),
        ('Enabled', 'enabled', _parse_bool),
    )

    for desc in _MinidomXmlToObject.get_children_from_path(
            xmldoc, 'entry', 'content', 'NamespaceDescription'):
        for xml_name, field_name, conversion_func in mappings:
            node_value = _MinidomXmlToObject.get_first_child_node_value(desc, xml_name)
            if node_value is not None:
                if conversion_func is not None:
                    node_value = conversion_func(node_value)
                setattr(namespace, field_name, node_value)

    return namespace
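The mappings-driven minidom pattern works standalone; here is a trimmed sketch where the library's helper classes are replaced by plain minidom calls (first_child_value is an illustrative helper, not the SDK's):

from xml.dom import minidom

xmlstr = ('<entry><content><NamespaceDescription>'
          '<Name>myunittests</Name><Region>West US</Region>'
          '<Enabled>true</Enabled>'
          '</NamespaceDescription></content></entry>')

doc = minidom.parseString(xmlstr)
desc = doc.getElementsByTagName('NamespaceDescription')[0]

def first_child_value(node, tag):
    # Return the text of the first matching child element, or None.
    elems = node.getElementsByTagName(tag)
    return elems[0].firstChild.nodeValue if elems and elems[0].firstChild else None

ns = {}
for xml_name, field_name, conv in (('Name', 'name', None),
                                   ('Region', 'region', None),
                                   ('Enabled', 'enabled', lambda v: v == 'true')):
    value = first_child_value(desc, xml_name)
    if value is not None:
        ns[field_name] = conv(value) if conv else value
print(ns)  # {'name': 'myunittests', 'region': 'West US', 'enabled': True}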
def delete(self, using=None, **kwargs):
    """
    Deletes the index in elasticsearch.

    Any additional keyword arguments will be passed to
    ``Elasticsearch.indices.delete`` unchanged.
    """
    return self._get_connection(using).indices.delete(index=self._name, **kwargs)
Deletes the index in elasticsearch.

Any additional keyword arguments will be passed to
``Elasticsearch.indices.delete`` unchanged.
Below is the instruction that describes the task:
### Input:
Deletes the index in elasticsearch.

Any additional keyword arguments will be passed to
``Elasticsearch.indices.delete`` unchanged.
### Response:
def delete(self, using=None, **kwargs):
    """
    Deletes the index in elasticsearch.

    Any additional keyword arguments will be passed to
    ``Elasticsearch.indices.delete`` unchanged.
    """
    return self._get_connection(using).indices.delete(index=self._name, **kwargs)
def workon(ctx, issue_id, new, base_branch):
    """
    Start work on a given issue.

    This command retrieves the issue from the issue tracker, creates and
    checks out a new aptly-named branch, puts the issue in the configured
    active status, assigns it to you and starts a correctly linked Harvest
    timer.

    If a branch with the same name as the one to be created already exists,
    it is checked out instead. Variations in the branch name occurring
    after the issue ID are accounted for and the branch renamed to match
    the new issue summary.

    If the `default_project` directive is correctly configured, it is
    enough to give the issue ID (instead of the full project prefix +
    issue ID).
    """
    lancet = ctx.obj

    if not issue_id and not new:
        raise click.UsageError("Provide either an issue ID or the --new flag.")
    elif issue_id and new:
        raise click.UsageError(
            "Provide either an issue ID or the --new flag, but not both."
        )

    if new:
        # Create a new issue
        summary = click.prompt("Issue summary")
        issue = create_issue(
            lancet, summary=summary, add_to_active_sprint=True
        )
    else:
        issue = get_issue(lancet, issue_id)

    username = lancet.tracker.whoami()
    active_status = lancet.config.get("tracker", "active_status")
    if not base_branch:
        base_branch = lancet.config.get("repository", "base_branch")

    # Get the working branch
    branch = get_branch(lancet, issue, base_branch)

    # Make sure the issue is in a correct status
    transition = get_transition(ctx, lancet, issue, active_status)

    # Make sure the issue is assigned to us
    assign_issue(lancet, issue, username, active_status)

    # Activate environment
    set_issue_status(lancet, issue, active_status, transition)

    with taskstatus("Checking out working branch") as ts:
        lancet.repo.checkout(branch.name)
        ts.ok('Checked out working branch based on "{}"'.format(base_branch))

    with taskstatus("Starting harvest timer") as ts:
        lancet.timer.start(issue)
        ts.ok("Started harvest timer")
Start work on a given issue.

This command retrieves the issue from the issue tracker, creates and
checks out a new aptly-named branch, puts the issue in the configured
active status, assigns it to you and starts a correctly linked Harvest
timer.

If a branch with the same name as the one to be created already exists,
it is checked out instead. Variations in the branch name occurring after
the issue ID are accounted for and the branch renamed to match the new
issue summary.

If the `default_project` directive is correctly configured, it is enough
to give the issue ID (instead of the full project prefix + issue ID).
Below is the instruction that describes the task:
### Input:
Start work on a given issue.

This command retrieves the issue from the issue tracker, creates and
checks out a new aptly-named branch, puts the issue in the configured
active status, assigns it to you and starts a correctly linked Harvest
timer.

If a branch with the same name as the one to be created already exists,
it is checked out instead. Variations in the branch name occurring after
the issue ID are accounted for and the branch renamed to match the new
issue summary.

If the `default_project` directive is correctly configured, it is enough
to give the issue ID (instead of the full project prefix + issue ID).
### Response:
def workon(ctx, issue_id, new, base_branch):
    """
    Start work on a given issue.

    This command retrieves the issue from the issue tracker, creates and
    checks out a new aptly-named branch, puts the issue in the configured
    active status, assigns it to you and starts a correctly linked Harvest
    timer.

    If a branch with the same name as the one to be created already exists,
    it is checked out instead. Variations in the branch name occurring
    after the issue ID are accounted for and the branch renamed to match
    the new issue summary.

    If the `default_project` directive is correctly configured, it is
    enough to give the issue ID (instead of the full project prefix +
    issue ID).
    """
    lancet = ctx.obj

    if not issue_id and not new:
        raise click.UsageError("Provide either an issue ID or the --new flag.")
    elif issue_id and new:
        raise click.UsageError(
            "Provide either an issue ID or the --new flag, but not both."
        )

    if new:
        # Create a new issue
        summary = click.prompt("Issue summary")
        issue = create_issue(
            lancet, summary=summary, add_to_active_sprint=True
        )
    else:
        issue = get_issue(lancet, issue_id)

    username = lancet.tracker.whoami()
    active_status = lancet.config.get("tracker", "active_status")
    if not base_branch:
        base_branch = lancet.config.get("repository", "base_branch")

    # Get the working branch
    branch = get_branch(lancet, issue, base_branch)

    # Make sure the issue is in a correct status
    transition = get_transition(ctx, lancet, issue, active_status)

    # Make sure the issue is assigned to us
    assign_issue(lancet, issue, username, active_status)

    # Activate environment
    set_issue_status(lancet, issue, active_status, transition)

    with taskstatus("Checking out working branch") as ts:
        lancet.repo.checkout(branch.name)
        ts.ok('Checked out working branch based on "{}"'.format(base_branch))

    with taskstatus("Starting harvest timer") as ts:
        lancet.timer.start(issue)
        ts.ok("Started harvest timer")
def delete(name, root=None):
    '''
    Remove the named group

    name
        Name group to delete

    root
        Directory to chroot into

    CLI Example:

    .. code-block:: bash

        salt '*' group.delete foo
    '''
    cmd = ['groupdel']

    if root is not None:
        cmd.extend(('-R', root))
    cmd.append(name)

    ret = __salt__['cmd.run_all'](cmd, python_shell=False)
    return not ret['retcode']
Remove the named group

name
    Name group to delete

root
    Directory to chroot into

CLI Example:

.. code-block:: bash

    salt '*' group.delete foo
Below is the instruction that describes the task:
### Input:
Remove the named group

name
    Name group to delete

root
    Directory to chroot into

CLI Example:

.. code-block:: bash

    salt '*' group.delete foo
### Response:
def delete(name, root=None):
    '''
    Remove the named group

    name
        Name group to delete

    root
        Directory to chroot into

    CLI Example:

    .. code-block:: bash

        salt '*' group.delete foo
    '''
    cmd = ['groupdel']

    if root is not None:
        cmd.extend(('-R', root))
    cmd.append(name)

    ret = __salt__['cmd.run_all'](cmd, python_shell=False)
    return not ret['retcode']
def process_pulls(self, testpulls=None, testarchive=None, expected=None):
    """Runs self.find_pulls() *and* processes the pull requests unit tests,
    status updates and wiki page creations.

    :arg expected: for unit testing the output results that would be
      returned from running the tests in real time.
    """
    from datetime import datetime
    pulls = self.find_pulls(None if testpulls is None else testpulls.values())
    for reponame in pulls:
        for pull in pulls[reponame]:
            try:
                archive = self.archive[pull.repokey]
                if pull.snumber in archive:
                    # We pass the archive in so that an existing staging directory (if
                    # different from the configured one) can be cleaned up if the previous
                    # attempt failed and left the file system dirty.
                    pull.init(archive[pull.snumber])
                else:
                    pull.init({})
                if self.testmode and testarchive is not None:
                    # Hard-coded start times so that the model output is reproducible
                    if pull.number in testarchive[pull.repokey]:
                        start = testarchive[pull.repokey][pull.number]["start"]
                    else:
                        start = datetime(2015, 4, 23, 13, 8)
                else:
                    start = datetime.now()
                archive[pull.snumber] = {"success": False, "start": start,
                                         "number": pull.number,
                                         "stage": pull.repodir,
                                         "completed": False, "finished": None}
                # Once a local staging directory has been initialized, we add the sha
                # signature of the pull request to our archive so we can track the rest
                # of the testing process. If it fails when trying to merge the head of
                # the pull request, the exception block should catch it and email the
                # owner of the repo.

                # We need to save the state of the archive now in case the testing causes
                # an unhandled exception.
                self._save_archive()
                pull.begin()
                self.cron.email(pull.repo.name, "start", self._get_fields("start", pull), self.testmode)
                pull.test(expected[pull.number])
                pull.finalize()

                # Update the status of this pull request on the archive, save the archive
                # file in case the next pull request throws an unhandled exception.
                archive[pull.snumber]["completed"] = True
                archive[pull.snumber]["success"] = abs(pull.percent - 1) < 1e-12

                # This if block looks like a mess; it is necessary so that we can easily
                # unit test this processing code by passing in the model outputs etc. that should
                # have been returned from running live.
                if (self.testmode and testarchive is not None
                        and pull.number in testarchive[pull.repokey]
                        and testarchive[pull.repokey][pull.number]["finished"] is not None):
                    archive[pull.snumber]["finished"] = testarchive[pull.repokey][pull.number]["finished"]
                elif self.testmode:
                    archive[pull.snumber]["finished"] = datetime(2015, 4, 23, 13, 9)
                else:
                    # This single line could replace the whole if block if we didn't have
                    # unit tests integrated with the main code.
                    archive[pull.snumber]["finished"] = datetime.now()
                self._save_archive()

                # We email after saving the archive in case the email server causes exceptions.
                if archive[pull.snumber]["success"]:
                    key = "success"
                else:
                    key = "failure"
                self.cron.email(pull.repo.name, key, self._get_fields(key, pull), self.testmode)
            except:
                import sys, traceback
                e = sys.exc_info()
                errmsg = '\n'.join(traceback.format_exception(e[0], e[1], e[2]))
                err(errmsg)
                self.cron.email(pull.repo.name, "error", self._get_fields("error", pull, errmsg), self.testmode)
Runs self.find_pulls() *and* processes the pull requests unit tests,
status updates and wiki page creations.

:arg expected: for unit testing the output results that would be returned
  from running the tests in real time.
Below is the instruction that describes the task:
### Input:
Runs self.find_pulls() *and* processes the pull requests unit tests,
status updates and wiki page creations.

:arg expected: for unit testing the output results that would be returned
  from running the tests in real time.
### Response:
def process_pulls(self, testpulls=None, testarchive=None, expected=None):
    """Runs self.find_pulls() *and* processes the pull requests unit tests,
    status updates and wiki page creations.

    :arg expected: for unit testing the output results that would be
      returned from running the tests in real time.
    """
    from datetime import datetime
    pulls = self.find_pulls(None if testpulls is None else testpulls.values())
    for reponame in pulls:
        for pull in pulls[reponame]:
            try:
                archive = self.archive[pull.repokey]
                if pull.snumber in archive:
                    # We pass the archive in so that an existing staging directory (if
                    # different from the configured one) can be cleaned up if the previous
                    # attempt failed and left the file system dirty.
                    pull.init(archive[pull.snumber])
                else:
                    pull.init({})
                if self.testmode and testarchive is not None:
                    # Hard-coded start times so that the model output is reproducible
                    if pull.number in testarchive[pull.repokey]:
                        start = testarchive[pull.repokey][pull.number]["start"]
                    else:
                        start = datetime(2015, 4, 23, 13, 8)
                else:
                    start = datetime.now()
                archive[pull.snumber] = {"success": False, "start": start,
                                         "number": pull.number,
                                         "stage": pull.repodir,
                                         "completed": False, "finished": None}
                # Once a local staging directory has been initialized, we add the sha
                # signature of the pull request to our archive so we can track the rest
                # of the testing process. If it fails when trying to merge the head of
                # the pull request, the exception block should catch it and email the
                # owner of the repo.

                # We need to save the state of the archive now in case the testing causes
                # an unhandled exception.
                self._save_archive()
                pull.begin()
                self.cron.email(pull.repo.name, "start", self._get_fields("start", pull), self.testmode)
                pull.test(expected[pull.number])
                pull.finalize()

                # Update the status of this pull request on the archive, save the archive
                # file in case the next pull request throws an unhandled exception.
                archive[pull.snumber]["completed"] = True
                archive[pull.snumber]["success"] = abs(pull.percent - 1) < 1e-12

                # This if block looks like a mess; it is necessary so that we can easily
                # unit test this processing code by passing in the model outputs etc. that should
                # have been returned from running live.
                if (self.testmode and testarchive is not None
                        and pull.number in testarchive[pull.repokey]
                        and testarchive[pull.repokey][pull.number]["finished"] is not None):
                    archive[pull.snumber]["finished"] = testarchive[pull.repokey][pull.number]["finished"]
                elif self.testmode:
                    archive[pull.snumber]["finished"] = datetime(2015, 4, 23, 13, 9)
                else:
                    # This single line could replace the whole if block if we didn't have
                    # unit tests integrated with the main code.
                    archive[pull.snumber]["finished"] = datetime.now()
                self._save_archive()

                # We email after saving the archive in case the email server causes exceptions.
                if archive[pull.snumber]["success"]:
                    key = "success"
                else:
                    key = "failure"
                self.cron.email(pull.repo.name, key, self._get_fields(key, pull), self.testmode)
            except:
                import sys, traceback
                e = sys.exc_info()
                errmsg = '\n'.join(traceback.format_exception(e[0], e[1], e[2]))
                err(errmsg)
                self.cron.email(pull.repo.name, "error", self._get_fields("error", pull, errmsg), self.testmode)
def load_case(adapter, case_obj, update=False):
    """Load a case to the database

    Args:
        adapter: Connection to database
        case_obj: dict
        update(bool): If existing case should be updated

    Returns:
        case_obj(models.Case)
    """
    # Check if the case already exists in database.
    existing_case = adapter.case(case_obj)
    if existing_case:
        if not update:
            raise CaseError("Case {0} already exists in database".format(case_obj['case_id']))
        case_obj = update_case(case_obj, existing_case)

    # Add the case to database
    try:
        adapter.add_case(case_obj, update=update)
    except CaseError as err:
        raise err

    return case_obj
Load a case to the database

Args:
    adapter: Connection to database
    case_obj: dict
    update(bool): If existing case should be updated

Returns:
    case_obj(models.Case)
Below is the instruction that describes the task:
### Input:
Load a case to the database

Args:
    adapter: Connection to database
    case_obj: dict
    update(bool): If existing case should be updated

Returns:
    case_obj(models.Case)
### Response:
def load_case(adapter, case_obj, update=False):
    """Load a case to the database

    Args:
        adapter: Connection to database
        case_obj: dict
        update(bool): If existing case should be updated

    Returns:
        case_obj(models.Case)
    """
    # Check if the case already exists in database.
    existing_case = adapter.case(case_obj)
    if existing_case:
        if not update:
            raise CaseError("Case {0} already exists in database".format(case_obj['case_id']))
        case_obj = update_case(case_obj, existing_case)

    # Add the case to database
    try:
        adapter.add_case(case_obj, update=update)
    except CaseError as err:
        raise err

    return case_obj
def AppendFlagsIntoFile(self, filename):
    """Appends all flags assignments from this FlagInfo object to a file.

    Output will be in the format of a flagfile.

    NOTE: MUST mirror the behavior of the C++ AppendFlagsIntoFile
    from http://code.google.com/p/google-gflags

    Args:
      filename: string, name of the file.
    """
    with open(filename, 'a') as out_file:
        out_file.write(self.FlagsIntoString())
Appends all flags assignments from this FlagInfo object to a file.

Output will be in the format of a flagfile.

NOTE: MUST mirror the behavior of the C++ AppendFlagsIntoFile
from http://code.google.com/p/google-gflags

Args:
  filename: string, name of the file.
Below is the instruction that describes the task:
### Input:
Appends all flags assignments from this FlagInfo object to a file.

Output will be in the format of a flagfile.

NOTE: MUST mirror the behavior of the C++ AppendFlagsIntoFile
from http://code.google.com/p/google-gflags

Args:
  filename: string, name of the file.
### Response:
def AppendFlagsIntoFile(self, filename):
    """Appends all flags assignments from this FlagInfo object to a file.

    Output will be in the format of a flagfile.

    NOTE: MUST mirror the behavior of the C++ AppendFlagsIntoFile
    from http://code.google.com/p/google-gflags

    Args:
      filename: string, name of the file.
    """
    with open(filename, 'a') as out_file:
        out_file.write(self.FlagsIntoString())
def reshape(self, input_shapes):
    """Change the input shape of the predictor.

    Parameters
    ----------
    input_shapes : dict of str to tuple
        The new shape of input data.

    Examples
    --------
    >>> predictor.reshape({'data':data_shape_tuple})
    """
    indptr = [0]
    sdata = []
    keys = []
    for k, v in input_shapes.items():
        if not isinstance(v, tuple):
            raise ValueError("Expect input_shapes to be dict str->tuple")
        keys.append(c_str(k))
        sdata.extend(v)
        indptr.append(len(sdata))

    new_handle = PredictorHandle()
    _check_call(_LIB.MXPredReshape(
        mx_uint(len(indptr) - 1),
        c_array(ctypes.c_char_p, keys),
        c_array(mx_uint, indptr),
        c_array(mx_uint, sdata),
        self.handle,
        ctypes.byref(new_handle)))
    _check_call(_LIB.MXPredFree(self.handle))
    self.handle = new_handle
Change the input shape of the predictor.

Parameters
----------
input_shapes : dict of str to tuple
    The new shape of input data.

Examples
--------
>>> predictor.reshape({'data':data_shape_tuple})
Below is the instruction that describes the task:
### Input:
Change the input shape of the predictor.

Parameters
----------
input_shapes : dict of str to tuple
    The new shape of input data.

Examples
--------
>>> predictor.reshape({'data':data_shape_tuple})
### Response:
def reshape(self, input_shapes):
    """Change the input shape of the predictor.

    Parameters
    ----------
    input_shapes : dict of str to tuple
        The new shape of input data.

    Examples
    --------
    >>> predictor.reshape({'data':data_shape_tuple})
    """
    indptr = [0]
    sdata = []
    keys = []
    for k, v in input_shapes.items():
        if not isinstance(v, tuple):
            raise ValueError("Expect input_shapes to be dict str->tuple")
        keys.append(c_str(k))
        sdata.extend(v)
        indptr.append(len(sdata))

    new_handle = PredictorHandle()
    _check_call(_LIB.MXPredReshape(
        mx_uint(len(indptr) - 1),
        c_array(ctypes.c_char_p, keys),
        c_array(mx_uint, indptr),
        c_array(mx_uint, sdata),
        self.handle,
        ctypes.byref(new_handle)))
    _check_call(_LIB.MXPredFree(self.handle))
    self.handle = new_handle
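The interesting part of reshape is the CSR-style flattening that the ctypes call expects; it can be shown in pure Python without MXNet (the input dict is a sample):

input_shapes = {'data': (1, 3, 224, 224), 'label': (1,)}

indptr, sdata, keys = [0], [], []
for k, v in input_shapes.items():
    if not isinstance(v, tuple):
        raise ValueError("Expect input_shapes to be dict str->tuple")
    keys.append(k)
    sdata.extend(v)       # all shape dimensions, concatenated
    indptr.append(len(sdata))  # offsets delimiting each shape

print(keys)    # ['data', 'label']
print(sdata)   # [1, 3, 224, 224, 1]
print(indptr)  # [0, 4, 5]  -- shape i is sdata[indptr[i]:indptr[i+1]]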
def pix2vec(nside, ipix, nest=False):
    """Drop-in replacement for healpy `~healpy.pixelfunc.pix2vec`."""
    lon, lat = healpix_to_lonlat(ipix, nside, order='nested' if nest else 'ring')
    return ang2vec(*_lonlat_to_healpy(lon, lat))
Drop-in replacement for healpy `~healpy.pixelfunc.pix2vec`.
Below is the instruction that describes the task:
### Input:
Drop-in replacement for healpy `~healpy.pixelfunc.pix2vec`.
### Response:
def pix2vec(nside, ipix, nest=False):
    """Drop-in replacement for healpy `~healpy.pixelfunc.pix2vec`."""
    lon, lat = healpix_to_lonlat(ipix, nside, order='nested' if nest else 'ring')
    return ang2vec(*_lonlat_to_healpy(lon, lat))
def free(self, connection):
    """Free the connection from use by the session that was using it.

    :param connection: The connection to free
    :type connection: psycopg2.extensions.connection
    :raises: ConnectionNotFoundError
    """
    LOGGER.debug('Pool %s freeing connection %s', self.id, id(connection))
    try:
        self.connection_handle(connection).free()
    except KeyError:
        raise ConnectionNotFoundError(self.id, id(connection))

    if self.idle_connections == list(self.connections.values()):
        with self._lock:
            self.idle_start = self.time_method()
    LOGGER.debug('Pool %s freed connection %s', self.id, id(connection))
Free the connection from use by the session that was using it.

:param connection: The connection to free
:type connection: psycopg2.extensions.connection
:raises: ConnectionNotFoundError
Below is the instruction that describes the task:
### Input:
Free the connection from use by the session that was using it.

:param connection: The connection to free
:type connection: psycopg2.extensions.connection
:raises: ConnectionNotFoundError
### Response:
def free(self, connection):
    """Free the connection from use by the session that was using it.

    :param connection: The connection to free
    :type connection: psycopg2.extensions.connection
    :raises: ConnectionNotFoundError
    """
    LOGGER.debug('Pool %s freeing connection %s', self.id, id(connection))
    try:
        self.connection_handle(connection).free()
    except KeyError:
        raise ConnectionNotFoundError(self.id, id(connection))

    if self.idle_connections == list(self.connections.values()):
        with self._lock:
            self.idle_start = self.time_method()
    LOGGER.debug('Pool %s freed connection %s', self.id, id(connection))
def to_key_val_list(value):
    """Take an object and test to see if it can be represented as a
    dictionary. If it can be, return a list of tuples, e.g.,

    ::

        >>> to_key_val_list([('key', 'val')])
        [('key', 'val')]
        >>> to_key_val_list({'key': 'val'})
        [('key', 'val')]
        >>> to_key_val_list('string')
        ValueError: cannot encode objects that are not 2-tuples.

    :rtype: list
    """
    if value is None:
        return None

    if isinstance(value, (str, bytes, bool, int)):
        raise ValueError('cannot encode objects that are not 2-tuples')

    # collections.abc.Mapping; the bare collections.Mapping alias is gone in Python 3.10+
    if isinstance(value, collections.abc.Mapping):
        value = value.items()

    return list(value)
Take an object and test to see if it can be represented as a
dictionary. If it can be, return a list of tuples, e.g.,

::

    >>> to_key_val_list([('key', 'val')])
    [('key', 'val')]
    >>> to_key_val_list({'key': 'val'})
    [('key', 'val')]
    >>> to_key_val_list('string')
    ValueError: cannot encode objects that are not 2-tuples.

:rtype: list
Below is the instruction that describes the task:
### Input:
Take an object and test to see if it can be represented as a
dictionary. If it can be, return a list of tuples, e.g.,

::

    >>> to_key_val_list([('key', 'val')])
    [('key', 'val')]
    >>> to_key_val_list({'key': 'val'})
    [('key', 'val')]
    >>> to_key_val_list('string')
    ValueError: cannot encode objects that are not 2-tuples.

:rtype: list
### Response:
def to_key_val_list(value):
    """Take an object and test to see if it can be represented as a
    dictionary. If it can be, return a list of tuples, e.g.,

    ::

        >>> to_key_val_list([('key', 'val')])
        [('key', 'val')]
        >>> to_key_val_list({'key': 'val'})
        [('key', 'val')]
        >>> to_key_val_list('string')
        ValueError: cannot encode objects that are not 2-tuples.

    :rtype: list
    """
    if value is None:
        return None

    if isinstance(value, (str, bytes, bool, int)):
        raise ValueError('cannot encode objects that are not 2-tuples')

    # collections.abc.Mapping; the bare collections.Mapping alias is gone in Python 3.10+
    if isinstance(value, collections.abc.Mapping):
        value = value.items()

    return list(value)
def logjacobian(self, **params):
    r"""Returns the log of the jacobian needed to transform pdfs in the
    ``variable_params`` parameter space to the ``sampling_params``
    parameter space.

    Let :math:`\mathbf{x}` be the set of variable parameters,
    :math:`\mathbf{y} = f(\mathbf{x})` the set of sampling parameters, and
    :math:`p_x(\mathbf{x})` a probability density function defined over
    :math:`\mathbf{x}`. The corresponding pdf in :math:`\mathbf{y}` is
    then:

    .. math::

        p_y(\mathbf{y}) = p_x(\mathbf{x})\left|\mathrm{det}\,\mathbf{J}_{ij}\right|,

    where :math:`\mathbf{J}_{ij}` is the Jacobian of the inverse transform
    :math:`\mathbf{x} = g(\mathbf{y})`. This has elements:

    .. math::

        \mathbf{J}_{ij} = \frac{\partial g_i}{\partial{y_j}}

    This function returns
    :math:`\log \left|\mathrm{det}\,\mathbf{J}_{ij}\right|`.

    Parameters
    ----------
    \**params :
        The keyword arguments should specify values for all of the variable
        args and all of the sampling args.

    Returns
    -------
    float :
        The value of the jacobian.
    """
    return numpy.log(abs(transforms.compute_jacobian(
        params, self.sampling_transforms, inverse=True)))
Returns the log of the jacobian needed to transform pdfs in the
``variable_params`` parameter space to the ``sampling_params`` parameter
space.

Let :math:`\mathbf{x}` be the set of variable parameters,
:math:`\mathbf{y} = f(\mathbf{x})` the set of sampling parameters, and
:math:`p_x(\mathbf{x})` a probability density function defined over
:math:`\mathbf{x}`. The corresponding pdf in :math:`\mathbf{y}` is then:

.. math::

    p_y(\mathbf{y}) = p_x(\mathbf{x})\left|\mathrm{det}\,\mathbf{J}_{ij}\right|,

where :math:`\mathbf{J}_{ij}` is the Jacobian of the inverse transform
:math:`\mathbf{x} = g(\mathbf{y})`. This has elements:

.. math::

    \mathbf{J}_{ij} = \frac{\partial g_i}{\partial{y_j}}

This function returns
:math:`\log \left|\mathrm{det}\,\mathbf{J}_{ij}\right|`.

Parameters
----------
\**params :
    The keyword arguments should specify values for all of the variable
    args and all of the sampling args.

Returns
-------
float :
    The value of the jacobian.
Below is the instruction that describes the task:
### Input:
Returns the log of the jacobian needed to transform pdfs in the
``variable_params`` parameter space to the ``sampling_params`` parameter
space.

Let :math:`\mathbf{x}` be the set of variable parameters,
:math:`\mathbf{y} = f(\mathbf{x})` the set of sampling parameters, and
:math:`p_x(\mathbf{x})` a probability density function defined over
:math:`\mathbf{x}`. The corresponding pdf in :math:`\mathbf{y}` is then:

.. math::

    p_y(\mathbf{y}) = p_x(\mathbf{x})\left|\mathrm{det}\,\mathbf{J}_{ij}\right|,

where :math:`\mathbf{J}_{ij}` is the Jacobian of the inverse transform
:math:`\mathbf{x} = g(\mathbf{y})`. This has elements:

.. math::

    \mathbf{J}_{ij} = \frac{\partial g_i}{\partial{y_j}}

This function returns
:math:`\log \left|\mathrm{det}\,\mathbf{J}_{ij}\right|`.

Parameters
----------
\**params :
    The keyword arguments should specify values for all of the variable
    args and all of the sampling args.

Returns
-------
float :
    The value of the jacobian.
### Response:
def logjacobian(self, **params):
    r"""Returns the log of the jacobian needed to transform pdfs in the
    ``variable_params`` parameter space to the ``sampling_params``
    parameter space.

    Let :math:`\mathbf{x}` be the set of variable parameters,
    :math:`\mathbf{y} = f(\mathbf{x})` the set of sampling parameters, and
    :math:`p_x(\mathbf{x})` a probability density function defined over
    :math:`\mathbf{x}`. The corresponding pdf in :math:`\mathbf{y}` is
    then:

    .. math::

        p_y(\mathbf{y}) = p_x(\mathbf{x})\left|\mathrm{det}\,\mathbf{J}_{ij}\right|,

    where :math:`\mathbf{J}_{ij}` is the Jacobian of the inverse transform
    :math:`\mathbf{x} = g(\mathbf{y})`. This has elements:

    .. math::

        \mathbf{J}_{ij} = \frac{\partial g_i}{\partial{y_j}}

    This function returns
    :math:`\log \left|\mathrm{det}\,\mathbf{J}_{ij}\right|`.

    Parameters
    ----------
    \**params :
        The keyword arguments should specify values for all of the variable
        args and all of the sampling args.

    Returns
    -------
    float :
        The value of the jacobian.
    """
    return numpy.log(abs(transforms.compute_jacobian(
        params, self.sampling_transforms, inverse=True)))
def set_xticks(self, row, column, ticks): """Manually specify the x-axis tick values. :param row,column: specify the subplot. :param ticks: list of tick values. """ subplot = self.get_subplot_at(row, column) subplot.set_xticks(ticks)
def verifyZeroInteractions(*objs):
    """Verify that no methods have been called on given objs.

    Note that strict mocks usually throw early on unexpected, unstubbed
    invocations. Partial mocks ('monkeypatched' objects or modules) do not
    support this functionality at all, because only the stubbed invocations
    record actual usage. So this function is of limited use nowadays.
    """
    for obj in objs:
        theMock = _get_mock_or_raise(obj)

        if len(theMock.invocations) > 0:
            raise VerificationError(
                "\nUnwanted interaction: %s" % theMock.invocations[0])
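A minimal usage sketch; the mock() factory and the top-level import path are assumptions about the surrounding mockito-style library:

from mockito import mock, verifyZeroInteractions  # import path assumed

m = mock()
verifyZeroInteractions(m)   # passes: nothing recorded yet
m.greet("world")            # the call is recorded on the dumb mock
verifyZeroInteractions(m)   # now raises VerificationError for greet("world")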
def is_supported(value, check_all=False, filters=None, iterate=False): """Return True if the value is supported, False otherwise""" assert filters is not None if value is None: return True if not is_editable_type(value): return False elif not isinstance(value, filters): return False elif iterate: if isinstance(value, (list, tuple, set)): valid_count = 0 for val in value: if is_supported(val, filters=filters, iterate=check_all): valid_count += 1 if not check_all: break return valid_count > 0 elif isinstance(value, dict): for key, val in list(value.items()): if not is_supported(key, filters=filters, iterate=check_all) \ or not is_supported(val, filters=filters, iterate=check_all): return False if not check_all: break return True
def assert_script_in_current_directory():
    """Fail with an assertion error if the current directory is different from
    the location of the script."""

    script = sys.argv[0]
    assert os.path.abspath(os.path.dirname(script)) == os.path.abspath(
        '.'), f"Change into directory of script {script} and run again."
def _does_require_deprecation(self):
    """
    Check if we have to put the previous version into the deprecated list.
    """

    for index, version_number in enumerate(self.current_version[0][:2]):
        # We loop through the first two elements of the version (major and minor).

        if version_number > self.version_yaml[index]:
            # The currently read version number is greater than the one we have in
            # the version.yaml.

            # We return True.
            return True

    # We return False, we do not need to deprecate anything.
    return False
def id(self, obj): """ The method is to be used to assign an integer variable ID for a given new object. If the object already has an ID, no new ID is created and the old one is returned instead. An object can be anything. In some cases it is convenient to use string variable names. :param obj: an object to assign an ID to. :rtype: int. Example: .. code-block:: python >>> from pysat.formula import IDPool >>> vpool = IDPool(occupied=[[12, 18], [3, 10]]) >>> >>> # creating 5 unique variables for the following strings >>> for i in range(5): ... print vpool.id('v{0}'.format(i + 1)) 1 2 11 19 20 In some cases, it makes sense to create an external function for accessing IDPool, e.g.: .. code-block:: python >>> # continuing the previous example >>> var = lambda i: vpool.id('var{0}'.format(i)) >>> var(5) 20 >>> var('hello_world!') 21 """ vid = self.obj2id[obj] if vid not in self.id2obj: self.id2obj[vid] = obj return vid
def scan_resource(self, pkg, path): r"""Scan a resource directory for colortable files and add them to the registry. Parameters ---------- pkg : str The package containing the resource directory path : str The path to the directory with the color tables """ for fname in resource_listdir(pkg, path): if fname.endswith(TABLE_EXT): table_path = posixpath.join(path, fname) with contextlib.closing(resource_stream(pkg, table_path)) as stream: self.add_colortable(stream, posixpath.splitext(posixpath.basename(fname))[0])
r"""Scan a resource directory for colortable files and add them to the registry. Parameters ---------- pkg : str The package containing the resource directory path : str The path to the directory with the color tables
Below is the the instruction that describes the task: ### Input: r"""Scan a resource directory for colortable files and add them to the registry. Parameters ---------- pkg : str The package containing the resource directory path : str The path to the directory with the color tables ### Response: def scan_resource(self, pkg, path): r"""Scan a resource directory for colortable files and add them to the registry. Parameters ---------- pkg : str The package containing the resource directory path : str The path to the directory with the color tables """ for fname in resource_listdir(pkg, path): if fname.endswith(TABLE_EXT): table_path = posixpath.join(path, fname) with contextlib.closing(resource_stream(pkg, table_path)) as stream: self.add_colortable(stream, posixpath.splitext(posixpath.basename(fname))[0])
def __prepare_domain(data):
    """Prepare domainpart of the JID.

    :Parameters:
        - `data`: Domain part of the JID
    :Types:
        - `data`: `unicode`

    :raise JIDError: if the domain name is too long.
    """
    # pylint: disable=R0912
    if not data:
        raise JIDError("Domain must be given")
    data = unicode(data)
    if not data:
        raise JIDError("Domain must be given")
    if u'[' in data:
        if data[0] == u'[' and data[-1] == u']':
            try:
                addr = _validate_ip_address(socket.AF_INET6, data[1:-1])
                return "[{0}]".format(addr)
            except ValueError, err:
                logger.debug("ValueError: {0}".format(err))
                raise JIDError(u"Invalid IPv6 literal in JID domainpart")
        else:
            raise JIDError(u"Invalid use of '[' or ']' in JID domainpart")
    elif data[0].isdigit() and data[-1].isdigit():
        try:
            addr = _validate_ip_address(socket.AF_INET, data)
            # Return the validated IPv4 literal; previously the validated
            # address was computed but never used.
            return addr
        except ValueError, err:
            logger.debug("ValueError: {0}".format(err))
    data = UNICODE_DOT_RE.sub(u".", data)
    data = data.rstrip(u".")
    labels = data.split(u".")
    try:
        labels = [idna.nameprep(label) for label in labels]
    except UnicodeError:
        raise JIDError(u"Domain name invalid")
    for label in labels:
        if not STD3_LABEL_RE.match(label):
            raise JIDError(u"Domain name invalid")
        try:
            idna.ToASCII(label)
        except UnicodeError:
            raise JIDError(u"Domain name invalid")
    domain = u".".join(labels)
    if len(domain.encode("utf-8")) > 1023:
        raise JIDError(u"Domain name too long")
    return domain
def get_profile(): """ Prefetch the profile module, to fill some holes in the help text.""" argument_parser = ThrowingArgumentParser(add_help=False) argument_parser.add_argument('profile') try: args, _ = argument_parser.parse_known_args() except ArgumentParserError: # silently fails, the main parser will show usage string. return Profile() imported = get_module(args.profile) profile = get_module_profile(imported) if not profile: raise Exception(f"Can't get a profile from {imported}.") return profile
def _dump_cnt(self): '''Dump counters to file''' self._cnt['1h'].dump(os.path.join(self.data_path, 'scheduler.1h')) self._cnt['1d'].dump(os.path.join(self.data_path, 'scheduler.1d')) self._cnt['all'].dump(os.path.join(self.data_path, 'scheduler.all'))
def check_arguments(cls, conf): """ Sanity checks for options needed for configfile mode. """ try: # Check we have access to the config file f = open(conf['file'], "r") f.close() except IOError as e: raise ArgsError("Cannot open config file '%s': %s" % (conf['file'], e))
def delete(cls, object_version, key=None): """Delete tags. :param object_version: The object version instance or id. :param key: Key of the tag to delete. Default: delete all tags. """ with db.session.begin_nested(): q = cls.query.filter_by( version_id=as_object_version_id(object_version)) if key: q = q.filter_by(key=key) q.delete()
def run_jar(self, mem=None): """ Special case of run() when the executable is a JAR file. """ cmd = config.get_command('java') if mem: cmd.append('-Xmx%s' % mem) cmd.append('-jar') cmd += self.cmd self.run(cmd)
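A sketch of the resulting command line, assuming config.get_command('java') returns ['java'] and self.cmd was set by a hypothetical Tool wrapper:

task = Tool(cmd=['picard.jar', 'SortSam', 'I=in.bam'])  # Tool is hypothetical
task.run_jar(mem='4g')
# Executes: java -Xmx4g -jar picard.jar SortSam I=in.bam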
def invert_node_predicate(node_predicate: NodePredicate) -> NodePredicate: # noqa: D202 """Build a node predicate that is the inverse of the given node predicate.""" def inverse_predicate(graph: BELGraph, node: BaseEntity) -> bool: """Return the inverse of the enclosed node predicate applied to the graph and node.""" return not node_predicate(graph, node) return inverse_predicate
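A short usage sketch; is_gene is a hypothetical predicate, and treating a PyBEL BaseEntity as dict-like via .get() is an assumption:

def is_gene(graph, node):          # hypothetical predicate
    return node.get('function') == 'Gene'

keep_non_genes = invert_node_predicate(is_gene)
# keep_non_genes(graph, node) is True exactly when is_gene(graph, node) is False.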
def get_file(original_file):
    """
    original_file should be s3://bucketname/path/to/file.txt

    returns a BytesIO buffer with the file in it
    """
    import io
    import boto3
    s3 = boto3.resource('s3')
    bucket_name, object_key = _parse_s3_file(original_file)
    logger.debug("Downloading {0} from {1}".format(object_key, bucket_name))
    bucket = s3.Bucket(bucket_name)
    output = io.BytesIO()  # binary buffer; boto3 writes bytes
    bucket.download_fileobj(object_key, output)
    output.seek(0)  # rewind so callers can read from the start
    return output
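A hedged usage sketch; the bucket and key are made up and boto3 credentials must already be configured:

buf = get_file('s3://my-bucket/path/to/file.txt')  # names illustrative only
print(buf.read())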
def serve(): """main entry point""" logging.getLogger().setLevel(logging.DEBUG) logging.info('Python Tornado Crossdock Server Starting ...') tracer = Tracer( service_name='python', reporter=NullReporter(), sampler=ConstSampler(decision=True)) opentracing.tracer = tracer tchannel = TChannel(name='python', hostport=':%d' % DEFAULT_SERVER_PORT, trace=True) register_tchannel_handlers(tchannel=tchannel) tchannel.listen() app = tornado.web.Application(debug=True) register_http_handlers(app) app.listen(DEFAULT_CLIENT_PORT) tornado.ioloop.IOLoop.current().start()
def filter_factory(global_conf, **local_conf): """Returns a WSGI filter app for use with paste.deploy.""" conf = global_conf.copy() conf.update(local_conf) def blacklist(app): return BlacklistFilter(app, conf) return blacklist
def _save_image(self, name, format='PNG'): """ Shows a save dialog for the ImageResource with 'name'. """ dialog = QtGui.QFileDialog(self._control, 'Save Image') dialog.setAcceptMode(QtGui.QFileDialog.AcceptSave) dialog.setDefaultSuffix(format.lower()) dialog.setNameFilter('%s file (*.%s)' % (format, format.lower())) if dialog.exec_(): filename = dialog.selectedFiles()[0] image = self._get_image(name) image.save(filename, format)
def wait_actions_on_objects(self, objects, wait_interval=None, wait_time=None): """ .. versionadded:: 0.2.0 Poll the server periodically until the most recent action on each resource in ``objects`` has finished, yielding each resource's final state when the corresponding action is done. If ``wait_time`` is exceeded, a `WaitTimeoutError` (containing any remaining in-progress actions) is raised. If a `KeyboardInterrupt` is caught, any remaining actions are returned immediately without waiting for completion. :param iterable objects: an iterable of resource objects that have ``fetch_last_action`` methods :param number wait_interval: how many seconds to sleep between requests; defaults to :attr:`wait_interval` if not specified or `None` :param number wait_time: the total number of seconds after which the method will raise an error if any actions have not yet completed, or a negative number to wait indefinitely; defaults to :attr:`wait_time` if not specified or `None` :rtype: generator of objects :raises DOAPIError: if the API endpoint replies with an error :raises WaitTimeoutError: if ``wait_time`` is exceeded """ acts = [] for o in objects: a = o.fetch_last_action() if a is None: yield o else: acts.append(a) for a in self.wait_actions(acts, wait_interval, wait_time): yield a.fetch_resource()
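A hedged usage sketch; client and droplets stand in for a doapi client and an iterable of droplet objects:

for droplet in client.wait_actions_on_objects(droplets, wait_time=300):
    print('finished:', droplet)  # yielded once its last action completes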
def overload(fn): """ Overload a given callable object to be used with ``|`` operator overloading. This is especially used for composing a pipeline of transformation over a single data set. Arguments: fn (function): target function to decorate. Raises: TypeError: if function or coroutine function is not provided. Returns: function: decorated function """ if not isfunction(fn): raise TypeError('paco: fn must be a callable object') spec = getargspec(fn) args = spec.args if not spec.varargs and (len(args) < 2 or args[1] != 'iterable'): raise ValueError('paco: invalid function signature or arity') @functools.wraps(fn) def decorator(*args, **kw): # Check function arity if len(args) < 2: return PipeOverloader(fn, args, kw) # Otherwise, behave like a normal wrapper return fn(*args, **kw) return decorator
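A sketch of the intended | composition. PipeOverloader is not shown above; this assumes it stores the partial arguments and defines __ror__ so the left-hand iterable is prepended when piped:

@overload
def pluck(key, iterable):
    return [item[key] for item in iterable]

rows = [{'a': 1}, {'a': 2}]
pluck('a', rows)   # direct call: [1, 2]
rows | pluck('a')  # piped call, assuming PipeOverloader.__ror__: [1, 2]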
def _bind(self): """Bind events to handlers""" main_window = self.main_window handlers = self.handlers c_handlers = self.cell_handlers # Non wx.Grid events self.Bind(wx.EVT_MOUSEWHEEL, handlers.OnMouseWheel) self.Bind(wx.EVT_KEY_DOWN, handlers.OnKey) # Grid events self.GetGridWindow().Bind(wx.EVT_MOTION, handlers.OnMouseMotion) self.Bind(wx.grid.EVT_GRID_RANGE_SELECT, handlers.OnRangeSelected) # Context menu self.Bind(wx.grid.EVT_GRID_CELL_RIGHT_CLICK, handlers.OnContextMenu) # Cell code events main_window.Bind(self.EVT_CMD_CODE_ENTRY, c_handlers.OnCellText) main_window.Bind(self.EVT_CMD_INSERT_BMP, c_handlers.OnInsertBitmap) main_window.Bind(self.EVT_CMD_LINK_BMP, c_handlers.OnLinkBitmap) main_window.Bind(self.EVT_CMD_VIDEO_CELL, c_handlers.OnLinkVLCVideo) main_window.Bind(self.EVT_CMD_INSERT_CHART, c_handlers.OnInsertChartDialog) # Cell attribute events main_window.Bind(self.EVT_CMD_COPY_FORMAT, c_handlers.OnCopyFormat) main_window.Bind(self.EVT_CMD_PASTE_FORMAT, c_handlers.OnPasteFormat) main_window.Bind(self.EVT_CMD_FONT, c_handlers.OnCellFont) main_window.Bind(self.EVT_CMD_FONTSIZE, c_handlers.OnCellFontSize) main_window.Bind(self.EVT_CMD_FONTBOLD, c_handlers.OnCellFontBold) main_window.Bind(self.EVT_CMD_FONTITALICS, c_handlers.OnCellFontItalics) main_window.Bind(self.EVT_CMD_FONTUNDERLINE, c_handlers.OnCellFontUnderline) main_window.Bind(self.EVT_CMD_FONTSTRIKETHROUGH, c_handlers.OnCellFontStrikethrough) main_window.Bind(self.EVT_CMD_FROZEN, c_handlers.OnCellFrozen) main_window.Bind(self.EVT_CMD_LOCK, c_handlers.OnCellLocked) main_window.Bind(self.EVT_CMD_BUTTON_CELL, c_handlers.OnButtonCell) main_window.Bind(self.EVT_CMD_MARKUP, c_handlers.OnCellMarkup) main_window.Bind(self.EVT_CMD_MERGE, c_handlers.OnMerge) main_window.Bind(self.EVT_CMD_JUSTIFICATION, c_handlers.OnCellJustification) main_window.Bind(self.EVT_CMD_ALIGNMENT, c_handlers.OnCellAlignment) main_window.Bind(self.EVT_CMD_BORDERWIDTH, c_handlers.OnCellBorderWidth) main_window.Bind(self.EVT_CMD_BORDERCOLOR, c_handlers.OnCellBorderColor) main_window.Bind(self.EVT_CMD_BACKGROUNDCOLOR, c_handlers.OnCellBackgroundColor) main_window.Bind(self.EVT_CMD_TEXTCOLOR, c_handlers.OnCellTextColor) main_window.Bind(self.EVT_CMD_ROTATION0, c_handlers.OnTextRotation0) main_window.Bind(self.EVT_CMD_ROTATION90, c_handlers.OnTextRotation90) main_window.Bind(self.EVT_CMD_ROTATION180, c_handlers.OnTextRotation180) main_window.Bind(self.EVT_CMD_ROTATION270, c_handlers.OnTextRotation270) main_window.Bind(self.EVT_CMD_TEXTROTATATION, c_handlers.OnCellTextRotation) # Cell selection events self.Bind(wx.grid.EVT_GRID_CMD_SELECT_CELL, c_handlers.OnCellSelected) # Grid edit mode events main_window.Bind(self.EVT_CMD_ENTER_SELECTION_MODE, handlers.OnEnterSelectionMode) main_window.Bind(self.EVT_CMD_EXIT_SELECTION_MODE, handlers.OnExitSelectionMode) # Grid view events main_window.Bind(self.EVT_CMD_VIEW_FROZEN, handlers.OnViewFrozen) main_window.Bind(self.EVT_CMD_REFRESH_SELECTION, handlers.OnRefreshSelectedCells) main_window.Bind(self.EVT_CMD_TIMER_TOGGLE, handlers.OnTimerToggle) self.Bind(wx.EVT_TIMER, handlers.OnTimer) main_window.Bind(self.EVT_CMD_DISPLAY_GOTO_CELL_DIALOG, handlers.OnDisplayGoToCellDialog) main_window.Bind(self.EVT_CMD_GOTO_CELL, handlers.OnGoToCell) main_window.Bind(self.EVT_CMD_ZOOM_IN, handlers.OnZoomIn) main_window.Bind(self.EVT_CMD_ZOOM_OUT, handlers.OnZoomOut) main_window.Bind(self.EVT_CMD_ZOOM_STANDARD, handlers.OnZoomStandard) main_window.Bind(self.EVT_CMD_ZOOM_FIT, handlers.OnZoomFit) # Find events 
main_window.Bind(self.EVT_CMD_FIND, handlers.OnFind) main_window.Bind(self.EVT_CMD_REPLACE, handlers.OnShowFindReplace) main_window.Bind(wx.EVT_FIND, handlers.OnReplaceFind) main_window.Bind(wx.EVT_FIND_NEXT, handlers.OnReplaceFind) main_window.Bind(wx.EVT_FIND_REPLACE, handlers.OnReplace) main_window.Bind(wx.EVT_FIND_REPLACE_ALL, handlers.OnReplaceAll) main_window.Bind(wx.EVT_FIND_CLOSE, handlers.OnCloseFindReplace) # Grid change events main_window.Bind(self.EVT_CMD_INSERT_ROWS, handlers.OnInsertRows) main_window.Bind(self.EVT_CMD_INSERT_COLS, handlers.OnInsertCols) main_window.Bind(self.EVT_CMD_INSERT_TABS, handlers.OnInsertTabs) main_window.Bind(self.EVT_CMD_DELETE_ROWS, handlers.OnDeleteRows) main_window.Bind(self.EVT_CMD_DELETE_COLS, handlers.OnDeleteCols) main_window.Bind(self.EVT_CMD_DELETE_TABS, handlers.OnDeleteTabs) main_window.Bind(self.EVT_CMD_SHOW_RESIZE_GRID_DIALOG, handlers.OnResizeGridDialog) main_window.Bind(self.EVT_CMD_QUOTE, handlers.OnQuote) main_window.Bind(wx.grid.EVT_GRID_ROW_SIZE, handlers.OnRowSize) main_window.Bind(wx.grid.EVT_GRID_COL_SIZE, handlers.OnColSize) main_window.Bind(self.EVT_CMD_SORT_ASCENDING, handlers.OnSortAscending) main_window.Bind(self.EVT_CMD_SORT_DESCENDING, handlers.OnSortDescending) # Undo/Redo events main_window.Bind(self.EVT_CMD_UNDO, handlers.OnUndo) main_window.Bind(self.EVT_CMD_REDO, handlers.OnRedo)
def dmlc_opts(opts): """convert from mxnet's opts to dmlc's opts """ args = ['--num-workers', str(opts.num_workers), '--num-servers', str(opts.num_servers), '--cluster', opts.launcher, '--host-file', opts.hostfile, '--sync-dst-dir', opts.sync_dst_dir] # convert to dictionary dopts = vars(opts) for key in ['env_server', 'env_worker', 'env']: for v in dopts[key]: args.append('--' + key.replace("_","-")) args.append(v) args += opts.command try: from dmlc_tracker import opts except ImportError: print("Can't load dmlc_tracker package. Perhaps you need to run") print(" git submodule update --init --recursive") raise dmlc_opts = opts.get_opts(args) return dmlc_opts
def complementTab(seq=[]):
    """returns a list of complementary sequences without reversing them"""
    complement = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A', 'R': 'Y', 'Y': 'R',
                  'M': 'K', 'K': 'M', 'W': 'W', 'S': 'S', 'B': 'V', 'D': 'H',
                  'H': 'D', 'V': 'B', 'N': 'N', 'a': 't', 'c': 'g', 'g': 'c',
                  't': 'a', 'r': 'y', 'y': 'r', 'm': 'k', 'k': 'm', 'w': 'w',
                  's': 's', 'b': 'v', 'd': 'h', 'h': 'd', 'v': 'b', 'n': 'n'}
    seq_tmp = []
    for bps in seq:
        if len(bps) == 0:
            # Need to handle '' for deletions
            seq_tmp.append('')
        elif len(bps) == 1:
            seq_tmp.append(complement[bps])
        else:
            # Need to handle multi-base strings such as 'ACT' for insertions.
            # The insertion needs to be reverse-complemented (like seq).
            seq_tmp.append(reverseComplement(bps))
    # A comprehension does not work here because bps == '' yields no bp:
    # seq = [complement[bp] if bp != '' else '' for bps in seq for bp in bps]
    return seq_tmp
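A doctest-style illustration; reverseComplement is assumed from the same module, so a multi-base insertion such as 'ACT' comes back reverse-complemented:

>>> complementTab(['A', 'C', '', 'ACT'])
['T', 'G', '', 'AGT']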
def kill(timeout=15):
    '''
    Kill the salt minion.

    timeout
        int seconds to wait for the minion to die.

    If you have a monitor that restarts ``salt-minion`` when it dies then this is
    a great way to restart after a minion upgrade.

    CLI example::

        >$ salt minion[12] minion.kill
        minion1:
            ----------
            killed:
                7874
            retcode:
                0
        minion2:
            ----------
            killed:
                29071
            retcode:
                0

    The result of the salt command shows the process ID of the minions and the
    result of the kill signal to each minion as the ``retcode`` value: ``0`` is
    success, anything else is a failure.
    '''

    ret = {
        'killed': None,
        'retcode': 1,
    }
    comment = []
    pid = __grains__.get('pid')
    if not pid:
        comment.append('Unable to find "pid" in grains')
        ret['retcode'] = salt.defaults.exitcodes.EX_SOFTWARE
    else:
        if 'ps.kill_pid' not in __salt__:
            comment.append('Missing command: ps.kill_pid')
            ret['retcode'] = salt.defaults.exitcodes.EX_SOFTWARE
        else:
            # The retcode status comes from the first kill signal
            ret['retcode'] = int(not __salt__['ps.kill_pid'](pid))

            # If the signal was successfully delivered then wait for the
            # process to die - check by sending signals until signal delivery
            # fails.
            if ret['retcode']:
                comment.append('ps.kill_pid failed')
            else:
                for _ in range(timeout):
                    time.sleep(1)
                    signaled = __salt__['ps.kill_pid'](pid)
                    if not signaled:
                        ret['killed'] = pid
                        break
                else:
                    # The process did not exit before the timeout
                    comment.append('Timed out waiting for minion to exit')
                    ret['retcode'] = salt.defaults.exitcodes.EX_TEMPFAIL
    if comment:
        ret['comment'] = comment
    return ret
def export_gcm_encrypted_private_key(self, password: str, salt: str, n: int = 16384) -> str:
    """
    This interface is used to export an AES algorithm encrypted private key with the mode of GCM.

    :param password: the secret pass phrase to generate the keys from.
    :param salt: A string to use for better protection from dictionary attacks.
          This value does not need to be kept secret, but it should be randomly chosen for each derivation.
          It is recommended to be at least 8 bytes long.
    :param n: CPU/memory cost parameter. It must be a power of 2 and less than 2**32
    :return: a GCM-encrypted private key in the form of a string.
    """
    r = 8
    p = 8
    dk_len = 64
    scrypt = Scrypt(n, r, p, dk_len)
    derived_key = scrypt.generate_kd(password, salt)
    iv = derived_key[0:12]
    key = derived_key[32:64]
    hdr = self.__address.b58encode().encode()
    mac_tag, cipher_text = AESHandler.aes_gcm_encrypt_with_iv(self.__private_key, hdr, key, iv)
    encrypted_key = bytes.hex(cipher_text) + bytes.hex(mac_tag)
    encrypted_key_str = base64.b64encode(bytes.fromhex(encrypted_key))
    return encrypted_key_str.decode('utf-8')
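A hedged usage sketch; the account instance, passphrase, and salt are made up:

salt = 'pQvbN3Qe'  # at least 8 bytes, per the docstring's recommendation
enc_key = account.export_gcm_encrypted_private_key('my-passphrase', salt, n=16384)
# enc_key is a base64 string holding the ciphertext followed by the GCM tag.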
def CreateTaskStorage(self, task): """Creates a task storage. Args: task (Task): task. Returns: FakeStorageWriter: storage writer. Raises: IOError: if the task storage already exists. OSError: if the task storage already exists. """ if task.identifier in self._task_storage_writers: raise IOError('Storage writer for task: {0:s} already exists.'.format( task.identifier)) storage_writer = FakeStorageWriter( self._session, storage_type=definitions.STORAGE_TYPE_TASK, task=task) self._task_storage_writers[task.identifier] = storage_writer return storage_writer
def adjacent(self, node_a, node_b): """Determines whether there is an edge from node_a to node_b. Returns True if such an edge exists, otherwise returns False.""" neighbors = self.neighbors(node_a) return node_b in neighbors
def create_user(self, instance, name, password, database_names, host=None): """ Creates a user with the specified name and password, and gives that user access to the specified database(s). """ return instance.create_user(name=name, password=password, database_names=database_names, host=host)
def getDescription(self): """Returns a description of the dataset""" description = {'name':self.name, 'fields':[f.name for f in self.fields], \ 'numRecords by field':[f.numRecords for f in self.fields]} return description
def update(self): """Update the data from the thermostat. Always sets the current time.""" _LOGGER.debug("Querying the device..") time = datetime.now() value = struct.pack('BBBBBBB', PROP_INFO_QUERY, time.year % 100, time.month, time.day, time.hour, time.minute, time.second) self._conn.make_request(PROP_WRITE_HANDLE, value)
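A small illustration of the 7-byte payload layout; PROP_INFO_QUERY = 3 is an assumption, since the real constant is defined elsewhere in the module:

import struct
from datetime import datetime

PROP_INFO_QUERY = 3  # assumed value, for illustration only
t = datetime(2024, 3, 5, 14, 30, 0)
payload = struct.pack('BBBBBBB', PROP_INFO_QUERY, t.year % 100,
                      t.month, t.day, t.hour, t.minute, t.second)
assert payload == b'\x03\x18\x03\x05\x0e\x1e\x00'  # seven single bytes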
def date_in_past(self):
    """Is the block's date in the past? (Has it already happened?)
    """
    now = datetime.datetime.now()
    return (now.date() > self.date)
def series_lstrip(series, startswith='http://', ignorecase=True):
    """ Strip a prefix str (`startswith` str) from a `df` column or pd.Series of type str """
    return series_strip(series, startswith=startswith, endswith=None, startsorendswith=None,
                        ignorecase=ignorecase)
Strip a suffix str (`endswith` str) from a `df` columns or pd.Series of type str
Below is the the instruction that describes the task: ### Input: Strip a suffix str (`endswith` str) from a `df` columns or pd.Series of type str ### Response: def series_lstrip(series, startswith='http://', ignorecase=True): """ Strip a suffix str (`endswith` str) from a `df` columns or pd.Series of type str """ return series_strip(series, startswith=startswith, endswith=None, startsorendswith=None, ignorecase=ignorecase)
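Since series_strip itself is not shown, here is a hedged stand-in for the expected prefix-stripping behaviour using plain pandas; this sketches the result, not the library's implementation:

import pandas as pd

s = pd.Series(['http://example.com', 'HTTP://foo.org', 'bar.net'])
# remove the prefix only where it occurs at the start, case-insensitively
stripped = s.str.replace(r'^http://', '', regex=True, case=False)
print(stripped.tolist())  # ['example.com', 'foo.org', 'bar.net']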
def get_cf_distribution_class(): """Return the correct troposphere CF distribution class.""" if LooseVersion(troposphere.__version__) == LooseVersion('2.4.0'): cf_dist = cloudfront.Distribution cf_dist.props['DistributionConfig'] = (DistributionConfig, True) return cf_dist return cloudfront.Distribution
Return the correct troposphere CF distribution class.
Below is the the instruction that describes the task: ### Input: Return the correct troposphere CF distribution class. ### Response: def get_cf_distribution_class(): """Return the correct troposphere CF distribution class.""" if LooseVersion(troposphere.__version__) == LooseVersion('2.4.0'): cf_dist = cloudfront.Distribution cf_dist.props['DistributionConfig'] = (DistributionConfig, True) return cf_dist return cloudfront.Distribution
def _iso_name_and_parent_from_path(self, iso_path): # type: (bytes) -> Tuple[bytes, dr.DirectoryRecord] ''' An internal method to find the parent directory record and name given an ISO path. If the parent is found, return a tuple containing the basename of the path and the parent directory record object. Parameters: iso_path - The absolute ISO path to the entry on the ISO. Returns: A tuple containing just the name of the entry and a Directory Record object representing the parent of the entry. ''' splitpath = utils.split_path(iso_path) name = splitpath.pop() parent = self._find_iso_record(b'/' + b'/'.join(splitpath)) return (name.decode('utf-8').encode('utf-8'), parent)
An internal method to find the parent directory record and name given an ISO path. If the parent is found, return a tuple containing the basename of the path and the parent directory record object. Parameters: iso_path - The absolute ISO path to the entry on the ISO. Returns: A tuple containing just the name of the entry and a Directory Record object representing the parent of the entry.
Below is the the instruction that describes the task: ### Input: An internal method to find the parent directory record and name given an ISO path. If the parent is found, return a tuple containing the basename of the path and the parent directory record object. Parameters: iso_path - The absolute ISO path to the entry on the ISO. Returns: A tuple containing just the name of the entry and a Directory Record object representing the parent of the entry. ### Response: def _iso_name_and_parent_from_path(self, iso_path): # type: (bytes) -> Tuple[bytes, dr.DirectoryRecord] ''' An internal method to find the parent directory record and name given an ISO path. If the parent is found, return a tuple containing the basename of the path and the parent directory record object. Parameters: iso_path - The absolute ISO path to the entry on the ISO. Returns: A tuple containing just the name of the entry and a Directory Record object representing the parent of the entry. ''' splitpath = utils.split_path(iso_path) name = splitpath.pop() parent = self._find_iso_record(b'/' + b'/'.join(splitpath)) return (name.decode('utf-8').encode('utf-8'), parent)
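To show what the name/parent split looks like on a sample path — the helper below approximates utils.split_path, which is not reproduced in the source:

def split_path(iso_path):
    # approximation of utils.split_path: drop the leading b'/', split on b'/'
    return iso_path.lstrip(b'/').split(b'/')

splitpath = split_path(b'/FOO/BAR/BAZ.TXT')
name = splitpath.pop()
parent_path = b'/' + b'/'.join(splitpath)
print(name)         # b'BAZ.TXT'
print(parent_path)  # b'/FOO/BAR'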
def thread_exception(self, raised_exception): """ Callback for handling exceptions that are raised inside :meth:`.WThreadTask.thread_started` :param raised_exception: raised exception :return: None """ print('Thread execution was stopped by the exception. Exception: %s' % str(raised_exception)) print('Traceback:') print(traceback.format_exc())
Callback for handling exceptions that are raised inside :meth:`.WThreadTask.thread_started` :param raised_exception: raised exception :return: None
Below is the the instruction that describes the task: ### Input: Callback for handling exceptions that are raised inside :meth:`.WThreadTask.thread_started` :param raised_exception: raised exception :return: None ### Response: def thread_exception(self, raised_exception): """ Callback for handling exceptions that are raised inside :meth:`.WThreadTask.thread_started` :param raised_exception: raised exception :return: None """ print('Thread execution was stopped by the exception. Exception: %s' % str(raised_exception)) print('Traceback:') print(traceback.format_exc())
def source_sum_err(self): """ The uncertainty of `~photutils.SourceProperties.source_sum`, propagated from the input ``error`` array. ``source_sum_err`` is the quadrature sum of the total errors over the non-masked pixels within the source segment: .. math:: \\Delta F = \\sqrt{\\sum_{i \\in S} \\sigma_{\\mathrm{tot}, i}^2} where :math:`\\Delta F` is ``source_sum_err``, :math:`\\sigma_{\\mathrm{tot, i}}` are the pixel-wise total errors, and :math:`S` are the non-masked pixels in the source segment. Pixel values that are masked in the input ``data``, including any non-finite pixel values (i.e. NaN, infs) that are automatically masked, are also masked in the error array. """ if self._error is not None: if self._is_completely_masked: return np.nan * self._error_unit # table output needs unit else: return np.sqrt(np.sum(self._error_values ** 2)) else: return None
The uncertainty of `~photutils.SourceProperties.source_sum`, propagated from the input ``error`` array. ``source_sum_err`` is the quadrature sum of the total errors over the non-masked pixels within the source segment: .. math:: \\Delta F = \\sqrt{\\sum_{i \\in S} \\sigma_{\\mathrm{tot}, i}^2} where :math:`\\Delta F` is ``source_sum_err``, :math:`\\sigma_{\\mathrm{tot, i}}` are the pixel-wise total errors, and :math:`S` are the non-masked pixels in the source segment. Pixel values that are masked in the input ``data``, including any non-finite pixel values (i.e. NaN, infs) that are automatically masked, are also masked in the error array.
Below is the the instruction that describes the task: ### Input: The uncertainty of `~photutils.SourceProperties.source_sum`, propagated from the input ``error`` array. ``source_sum_err`` is the quadrature sum of the total errors over the non-masked pixels within the source segment: .. math:: \\Delta F = \\sqrt{\\sum_{i \\in S} \\sigma_{\\mathrm{tot}, i}^2} where :math:`\\Delta F` is ``source_sum_err``, :math:`\\sigma_{\\mathrm{tot, i}}` are the pixel-wise total errors, and :math:`S` are the non-masked pixels in the source segment. Pixel values that are masked in the input ``data``, including any non-finite pixel values (i.e. NaN, infs) that are automatically masked, are also masked in the error array. ### Response: def source_sum_err(self): """ The uncertainty of `~photutils.SourceProperties.source_sum`, propagated from the input ``error`` array. ``source_sum_err`` is the quadrature sum of the total errors over the non-masked pixels within the source segment: .. math:: \\Delta F = \\sqrt{\\sum_{i \\in S} \\sigma_{\\mathrm{tot}, i}^2} where :math:`\\Delta F` is ``source_sum_err``, :math:`\\sigma_{\\mathrm{tot, i}}` are the pixel-wise total errors, and :math:`S` are the non-masked pixels in the source segment. Pixel values that are masked in the input ``data``, including any non-finite pixel values (i.e. NaN, infs) that are automatically masked, are also masked in the error array. """ if self._error is not None: if self._is_completely_masked: return np.nan * self._error_unit # table output needs unit else: return np.sqrt(np.sum(self._error_values ** 2)) else: return None
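Numerically, the quadrature sum in the formula above reduces to one numpy expression — a sketch with made-up per-pixel errors, not the photutils internals:

import numpy as np

# per-pixel total errors over the non-masked pixels of one source segment
error_values = np.array([0.5, 0.3, 0.4])
source_sum_err = np.sqrt(np.sum(error_values ** 2))
print(source_sum_err)  # ~0.7071, i.e. sqrt(0.25 + 0.09 + 0.16)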
def job_not_running(self, jid, tgt, tgt_type, minions, is_finished): ''' Return a future which will complete once jid (passed in) is no longer running on tgt ''' ping_pub_data = yield self.saltclients['local'](tgt, 'saltutil.find_job', [jid], tgt_type=tgt_type) ping_tag = tagify([ping_pub_data['jid'], 'ret'], 'job') minion_running = False while True: try: event = self.application.event_listener.get_event(self, tag=ping_tag, timeout=self.application.opts['gather_job_timeout']) event = yield event except TimeoutException: if not event.done(): event.set_result(None) if not minion_running or is_finished.done(): raise tornado.gen.Return(True) else: ping_pub_data = yield self.saltclients['local'](tgt, 'saltutil.find_job', [jid], tgt_type=tgt_type) ping_tag = tagify([ping_pub_data['jid'], 'ret'], 'job') minion_running = False continue # Minions can return, we want to see if the job is running... if event['data'].get('return', {}) == {}: continue if event['data']['id'] not in minions: minions[event['data']['id']] = False minion_running = True
Return a future which will complete once jid (passed in) is no longer running on tgt
Below is the the instruction that describes the task: ### Input: Return a future which will complete once jid (passed in) is no longer running on tgt ### Response: def job_not_running(self, jid, tgt, tgt_type, minions, is_finished): ''' Return a future which will complete once jid (passed in) is no longer running on tgt ''' ping_pub_data = yield self.saltclients['local'](tgt, 'saltutil.find_job', [jid], tgt_type=tgt_type) ping_tag = tagify([ping_pub_data['jid'], 'ret'], 'job') minion_running = False while True: try: event = self.application.event_listener.get_event(self, tag=ping_tag, timeout=self.application.opts['gather_job_timeout']) event = yield event except TimeoutException: if not event.done(): event.set_result(None) if not minion_running or is_finished.done(): raise tornado.gen.Return(True) else: ping_pub_data = yield self.saltclients['local'](tgt, 'saltutil.find_job', [jid], tgt_type=tgt_type) ping_tag = tagify([ping_pub_data['jid'], 'ret'], 'job') minion_running = False continue # Minions can return, we want to see if the job is running... if event['data'].get('return', {}) == {}: continue if event['data']['id'] not in minions: minions[event['data']['id']] = False minion_running = True
def get_levels_of_description(self): """ Returns an array of all levels of description defined in this AtoM instance. """ if not hasattr(self, "levels_of_description"): self.levels_of_description = [ item["name"] for item in self._get(urljoin(self.base_url, "taxonomies/34")).json() ] return self.levels_of_description
Returns an array of all levels of description defined in this AtoM instance.
Below is the the instruction that describes the task: ### Input: Returns an array of all levels of description defined in this AtoM instance. ### Response: def get_levels_of_description(self): """ Returns an array of all levels of description defined in this AtoM instance. """ if not hasattr(self, "levels_of_description"): self.levels_of_description = [ item["name"] for item in self._get(urljoin(self.base_url, "taxonomies/34")).json() ] return self.levels_of_description
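The lookup is a single GET plus a list comprehension; a hedged sketch with requests — the taxonomy id 34 comes from the source, while the base URL is a placeholder:

import requests
from urllib.parse import urljoin

base_url = 'https://atom.example.org/api/'  # placeholder, not a real instance
response = requests.get(urljoin(base_url, 'taxonomies/34'))
levels_of_description = [item['name'] for item in response.json()]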
def set_resolving(self, **kw): """ Certain log fields can be individually resolved. Use this method to set these fields. Valid keyword arguments: :param str timezone: string value to set timezone for audits :param bool time_show_zone: show the time zone in the audit. :param bool time_show_millis: show the time in milliseconds :param bool keys: resolve log field keys :param bool ip_elements: resolve IP's to SMC elements :param bool ip_dns: resolve IP addresses using DNS :param bool ip_locations: resolve locations """ if 'timezone' in kw and 'time_show_zone' not in kw: kw.update(time_show_zone=True) self.data['resolving'].update(**kw)
Certain log fields can be individually resolved. Use this method to set these fields. Valid keyword arguments: :param str timezone: string value to set timezone for audits :param bool time_show_zone: show the time zone in the audit. :param bool time_show_millis: show the time in milliseconds :param bool keys: resolve log field keys :param bool ip_elements: resolve IP's to SMC elements :param bool ip_dns: resolve IP addresses using DNS :param bool ip_locations: resolve locations
Below is the the instruction that describes the task: ### Input: Certain log fields can be individually resolved. Use this method to set these fields. Valid keyword arguments: :param str timezone: string value to set timezone for audits :param bool time_show_zone: show the time zone in the audit. :param bool time_show_millis: show the time in milliseconds :param bool keys: resolve log field keys :param bool ip_elements: resolve IP's to SMC elements :param bool ip_dns: resolve IP addresses using DNS :param bool ip_locations: resolve locations ### Response: def set_resolving(self, **kw): """ Certain log fields can be individually resolved. Use this method to set these fields. Valid keyword arguments: :param str timezone: string value to set timezone for audits :param bool time_show_zone: show the time zone in the audit. :param bool time_show_millis: show the time in milliseconds :param bool keys: resolve log field keys :param bool ip_elements: resolve IP's to SMC elements :param bool ip_dns: resolve IP addresses using DNS :param bool ip_locations: resolve locations """ if 'timezone' in kw and 'time_show_zone' not in kw: kw.update(time_show_zone=True) self.data['resolving'].update(**kw)
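A usage sketch showing the keyword interface and the documented coupling (passing timezone without time_show_zone implicitly enables the zone display); the Query class is a minimal stand-in, not the real SMC object:

class Query:
    # stand-in for the object carrying self.data['resolving']
    def __init__(self):
        self.data = {'resolving': {}}

    def set_resolving(self, **kw):
        if 'timezone' in kw and 'time_show_zone' not in kw:
            kw.update(time_show_zone=True)
        self.data['resolving'].update(**kw)

q = Query()
q.set_resolving(timezone='CET', keys=True, ip_dns=True)
print(q.data['resolving'])
# {'timezone': 'CET', 'keys': True, 'ip_dns': True, 'time_show_zone': True}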
def chunks(seq, size=None, dfmt="f", byte_order=None, padval=0.): """ Chunk generator based on the array module (Python standard library). See chunk.struct for more help. This strategy uses array.array (random access by indexing management) instead of struct.Struct and blocks/deque (circular queue appending) from the chunks.struct strategy. Hint ---- Try each one to find the faster one for your machine, and choose the default one by assigning ``chunks.default = chunks.strategy_name``. It'll be the one used by the AudioIO/AudioThread playing mechanism. Note ---- The ``dfmt`` symbols for arrays might differ from structs' defaults. """ if size is None: size = chunks.size chunk = array.array(dfmt, xrange(size)) idx = 0 for el in seq: chunk[idx] = el idx += 1 if idx == size: yield chunk.tostring() idx = 0 if idx != 0: for idx in xrange(idx, size): chunk[idx] = padval yield chunk.tostring()
Chunk generator based on the array module (Python standard library). See chunk.struct for more help. This strategy uses array.array (random access by indexing management) instead of struct.Struct and blocks/deque (circular queue appending) from the chunks.struct strategy. Hint ---- Try each one to find the faster one for your machine, and choose the default one by assigning ``chunks.default = chunks.strategy_name``. It'll be the one used by the AudioIO/AudioThread playing mechanism. Note ---- The ``dfmt`` symbols for arrays might differ from structs' defaults.
Below is the the instruction that describes the task: ### Input: Chunk generator based on the array module (Python standard library). See chunk.struct for more help. This strategy uses array.array (random access by indexing management) instead of struct.Struct and blocks/deque (circular queue appending) from the chunks.struct strategy. Hint ---- Try each one to find the faster one for your machine, and choose the default one by assigning ``chunks.default = chunks.strategy_name``. It'll be the one used by the AudioIO/AudioThread playing mechanism. Note ---- The ``dfmt`` symbols for arrays might differ from structs' defaults. ### Response: def chunks(seq, size=None, dfmt="f", byte_order=None, padval=0.): """ Chunk generator based on the array module (Python standard library). See chunk.struct for more help. This strategy uses array.array (random access by indexing management) instead of struct.Struct and blocks/deque (circular queue appending) from the chunks.struct strategy. Hint ---- Try each one to find the faster one for your machine, and choose the default one by assigning ``chunks.default = chunks.strategy_name``. It'll be the one used by the AudioIO/AudioThread playing mechanism. Note ---- The ``dfmt`` symbols for arrays might differ from structs' defaults. """ if size is None: size = chunks.size chunk = array.array(dfmt, xrange(size)) idx = 0 for el in seq: chunk[idx] = el idx += 1 if idx == size: yield chunk.tostring() idx = 0 if idx != 0: for idx in xrange(idx, size): chunk[idx] = padval yield chunk.tostring()
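The generator above targets Python 2 (xrange, array.tostring). A hedged Python 3 rendering of the same strategy, including the padval handling for a final partial chunk:

import array

def chunks3(seq, size=4, dfmt='f', padval=0.0):
    # same strategy with range/tobytes instead of xrange/tostring
    chunk = array.array(dfmt, range(size))
    idx = 0
    for el in seq:
        chunk[idx] = el
        idx += 1
        if idx == size:
            yield chunk.tobytes()
            idx = 0
    if idx != 0:
        for idx in range(idx, size):
            chunk[idx] = padval
        yield chunk.tobytes()

out = list(chunks3([1.0, 2.0, 3.0, 4.0, 5.0], size=4))
print(len(out), len(out[0]))  # 2 chunks, 16 bytes each (4 floats x 4 bytes)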
def __encryptKeyTransportMessage( self, bare_jids, encryption_callback, bundles = None, expect_problems = None, ignore_trust = False ): """ bare_jids: iterable<string> encryption_callback: A function which is called using an instance of cryptography.hazmat.primitives.ciphers.CipherContext, which you can use to encrypt any sort of data. You don't have to return anything. bundles: { [bare_jid: string] => { [device_id: int] => ExtendedPublicBundle } } expect_problems: { [bare_jid: string] => iterable<int> } returns: { iv: bytes, sid: int, keys: { [bare_jid: string] => { [device: int] => { "data" : bytes, "pre_key" : boolean } } } } """ yield self.runInactiveDeviceCleanup() ######################### # parameter preparation # ######################### if isinstance(bare_jids, string_type): bare_jids = set([ bare_jids ]) else: bare_jids = set(bare_jids) if bundles == None: bundles = {} if expect_problems == None: expect_problems = {} else: for bare_jid in expect_problems: expect_problems[bare_jid] = set(expect_problems[bare_jid]) # Add the own bare jid to the set of jids bare_jids.add(self.__my_bare_jid) ######################################################## # check all preconditions and prepare missing sessions # ######################################################## problems = [] # Prepare the lists of devices to encrypt for encrypt_for = {} for bare_jid in bare_jids: devices = yield self.__loadActiveDevices(bare_jid) if len(devices) == 0: problems.append(NoDevicesException(bare_jid)) else: encrypt_for[bare_jid] = devices # Remove the sending devices from the list encrypt_for[self.__my_bare_jid].remove(self.__my_device_id) # Check whether all required bundles are available for bare_jid, devices in encrypt_for.items(): missing_bundles = set() # Load all sessions sessions = yield self.__loadSessions(bare_jid, devices) for device in devices: session = sessions[device] if session == None: if not device in bundles.get(bare_jid, {}): missing_bundles.add(device) devices -= missing_bundles for device in missing_bundles: if not device in expect_problems.get(bare_jid, set()): problems.append(MissingBundleException(bare_jid, device)) # Check for missing sessions and simulate the key exchange for bare_jid, devices in encrypt_for.items(): key_exchange_problems = {} # Load all sessions sessions = yield self.__loadSessions(bare_jid, devices) for device in devices: session = sessions[device] # If no session exists, create a new session if session == None: # Get the required bundle bundle = bundles[bare_jid][device] try: # Build the session, discarding the result afterwards. This is # just to check that the key exchange works. 
self.__state.getSharedSecretActive(bundle) except x3dh.exceptions.KeyExchangeException as e: key_exchange_problems[device] = str(e) encrypt_for[bare_jid] -= set(key_exchange_problems.keys()) for device, message in key_exchange_problems.items(): if not device in expect_problems.get(bare_jid, set()): problems.append(KeyExchangeException( bare_jid, device, message )) if not ignore_trust: # Check the trust for each device for bare_jid, devices in encrypt_for.items(): # Load all trust trusts = yield self.__loadTrusts(bare_jid, devices) # Load all sessions sessions = yield self.__loadSessions(bare_jid, devices) trust_problems = [] for device in devices: trust = trusts[device] session = sessions[device] # Get the identity key of the recipient other_ik = ( bundles[bare_jid][device].ik if session == None else session.ik ) if trust == None: trust_problems.append((device, other_ik, "undecided")) elif not (trust["key"] == other_ik and trust["trusted"]): trust_problems.append((device, other_ik, "untrusted")) devices -= set(map(lambda x: x[0], trust_problems)) for device, other_ik, problem_type in trust_problems: if not device in expect_problems.get(bare_jid, set()): problems.append( TrustException(bare_jid, device, other_ik, problem_type) ) # Check for jids with no eligible devices for bare_jid, devices in list(encrypt_for.items()): # Skip this check for my own bare jid if bare_jid == self.__my_bare_jid: continue if len(devices) == 0: problems.append(NoEligibleDevicesException(bare_jid)) del encrypt_for[bare_jid] # If there were and problems, raise an Exception with a list of those. if len(problems) > 0: raise EncryptionProblemsException(problems) ############## # encryption # ############## # Prepare AES-GCM key and IV aes_gcm_iv = os.urandom(16) aes_gcm_key = os.urandom(16) # Create the AES-GCM instance aes_gcm = Cipher( algorithms.AES(aes_gcm_key), modes.GCM(aes_gcm_iv), backend=default_backend() ).encryptor() # Encrypt the plain data encryption_callback(aes_gcm) # Store the tag aes_gcm_tag = aes_gcm.tag # { # [bare_jid: string] => { # [device: int] => { # "data" : bytes, # "pre_key" : boolean # } # } # } encrypted_keys = {} for bare_jid, devices in encrypt_for.items(): encrypted_keys[bare_jid] = {} for device in devices: # Note whether this is a response to a PreKeyMessage if self.__state.hasBoundOTPK(bare_jid, device): self.__state.respondedTo(bare_jid, device) yield self._storage.storeState(self.__state.serialize()) # Load the session session = yield self.__loadSession(bare_jid, device) # If no session exists, this will be a PreKeyMessage pre_key = session == None # Create a new session if pre_key: # Get the required bundle bundle = bundles[bare_jid][device] # Build the session session_and_init_data = self.__state.getSharedSecretActive(bundle) session = session_and_init_data["dr"] session_init_data = session_and_init_data["to_other"] # Encrypt the AES GCM key and tag encrypted_data = session.encryptMessage(aes_gcm_key + aes_gcm_tag) # Store the new/changed session yield self.__storeSession(bare_jid, device, session) # Serialize the data into a simple message format serialized = self.__backend.WireFormat.messageToWire( encrypted_data["ciphertext"], encrypted_data["header"], { "DoubleRatchet": encrypted_data["additional"] } ) # If it is a PreKeyMessage, apply an additional step to the serialization. if pre_key: serialized = self.__backend.WireFormat.preKeyMessageToWire( session_init_data, serialized, { "DoubleRatchet": encrypted_data["additional"] } ) # Add the final encrypted and serialized data. 
encrypted_keys[bare_jid][device] = { "data" : serialized, "pre_key" : pre_key } promise.returnValue({ "iv" : aes_gcm_iv, "sid" : self.__my_device_id, "keys" : encrypted_keys })
bare_jids: iterable<string> encryption_callback: A function which is called using an instance of cryptography.hazmat.primitives.ciphers.CipherContext, which you can use to encrypt any sort of data. You don't have to return anything. bundles: { [bare_jid: string] => { [device_id: int] => ExtendedPublicBundle } } expect_problems: { [bare_jid: string] => iterable<int> } returns: { iv: bytes, sid: int, keys: { [bare_jid: string] => { [device: int] => { "data" : bytes, "pre_key" : boolean } } } }
Below is the the instruction that describes the task: ### Input: bare_jids: iterable<string> encryption_callback: A function which is called using an instance of cryptography.hazmat.primitives.ciphers.CipherContext, which you can use to encrypt any sort of data. You don't have to return anything. bundles: { [bare_jid: string] => { [device_id: int] => ExtendedPublicBundle } } expect_problems: { [bare_jid: string] => iterable<int> } returns: { iv: bytes, sid: int, keys: { [bare_jid: string] => { [device: int] => { "data" : bytes, "pre_key" : boolean } } } } ### Response: def __encryptKeyTransportMessage( self, bare_jids, encryption_callback, bundles = None, expect_problems = None, ignore_trust = False ): """ bare_jids: iterable<string> encryption_callback: A function which is called using an instance of cryptography.hazmat.primitives.ciphers.CipherContext, which you can use to encrypt any sort of data. You don't have to return anything. bundles: { [bare_jid: string] => { [device_id: int] => ExtendedPublicBundle } } expect_problems: { [bare_jid: string] => iterable<int> } returns: { iv: bytes, sid: int, keys: { [bare_jid: string] => { [device: int] => { "data" : bytes, "pre_key" : boolean } } } } """ yield self.runInactiveDeviceCleanup() ######################### # parameter preparation # ######################### if isinstance(bare_jids, string_type): bare_jids = set([ bare_jids ]) else: bare_jids = set(bare_jids) if bundles == None: bundles = {} if expect_problems == None: expect_problems = {} else: for bare_jid in expect_problems: expect_problems[bare_jid] = set(expect_problems[bare_jid]) # Add the own bare jid to the set of jids bare_jids.add(self.__my_bare_jid) ######################################################## # check all preconditions and prepare missing sessions # ######################################################## problems = [] # Prepare the lists of devices to encrypt for encrypt_for = {} for bare_jid in bare_jids: devices = yield self.__loadActiveDevices(bare_jid) if len(devices) == 0: problems.append(NoDevicesException(bare_jid)) else: encrypt_for[bare_jid] = devices # Remove the sending devices from the list encrypt_for[self.__my_bare_jid].remove(self.__my_device_id) # Check whether all required bundles are available for bare_jid, devices in encrypt_for.items(): missing_bundles = set() # Load all sessions sessions = yield self.__loadSessions(bare_jid, devices) for device in devices: session = sessions[device] if session == None: if not device in bundles.get(bare_jid, {}): missing_bundles.add(device) devices -= missing_bundles for device in missing_bundles: if not device in expect_problems.get(bare_jid, set()): problems.append(MissingBundleException(bare_jid, device)) # Check for missing sessions and simulate the key exchange for bare_jid, devices in encrypt_for.items(): key_exchange_problems = {} # Load all sessions sessions = yield self.__loadSessions(bare_jid, devices) for device in devices: session = sessions[device] # If no session exists, create a new session if session == None: # Get the required bundle bundle = bundles[bare_jid][device] try: # Build the session, discarding the result afterwards. This is # just to check that the key exchange works. 
self.__state.getSharedSecretActive(bundle) except x3dh.exceptions.KeyExchangeException as e: key_exchange_problems[device] = str(e) encrypt_for[bare_jid] -= set(key_exchange_problems.keys()) for device, message in key_exchange_problems.items(): if not device in expect_problems.get(bare_jid, set()): problems.append(KeyExchangeException( bare_jid, device, message )) if not ignore_trust: # Check the trust for each device for bare_jid, devices in encrypt_for.items(): # Load all trust trusts = yield self.__loadTrusts(bare_jid, devices) # Load all sessions sessions = yield self.__loadSessions(bare_jid, devices) trust_problems = [] for device in devices: trust = trusts[device] session = sessions[device] # Get the identity key of the recipient other_ik = ( bundles[bare_jid][device].ik if session == None else session.ik ) if trust == None: trust_problems.append((device, other_ik, "undecided")) elif not (trust["key"] == other_ik and trust["trusted"]): trust_problems.append((device, other_ik, "untrusted")) devices -= set(map(lambda x: x[0], trust_problems)) for device, other_ik, problem_type in trust_problems: if not device in expect_problems.get(bare_jid, set()): problems.append( TrustException(bare_jid, device, other_ik, problem_type) ) # Check for jids with no eligible devices for bare_jid, devices in list(encrypt_for.items()): # Skip this check for my own bare jid if bare_jid == self.__my_bare_jid: continue if len(devices) == 0: problems.append(NoEligibleDevicesException(bare_jid)) del encrypt_for[bare_jid] # If there were and problems, raise an Exception with a list of those. if len(problems) > 0: raise EncryptionProblemsException(problems) ############## # encryption # ############## # Prepare AES-GCM key and IV aes_gcm_iv = os.urandom(16) aes_gcm_key = os.urandom(16) # Create the AES-GCM instance aes_gcm = Cipher( algorithms.AES(aes_gcm_key), modes.GCM(aes_gcm_iv), backend=default_backend() ).encryptor() # Encrypt the plain data encryption_callback(aes_gcm) # Store the tag aes_gcm_tag = aes_gcm.tag # { # [bare_jid: string] => { # [device: int] => { # "data" : bytes, # "pre_key" : boolean # } # } # } encrypted_keys = {} for bare_jid, devices in encrypt_for.items(): encrypted_keys[bare_jid] = {} for device in devices: # Note whether this is a response to a PreKeyMessage if self.__state.hasBoundOTPK(bare_jid, device): self.__state.respondedTo(bare_jid, device) yield self._storage.storeState(self.__state.serialize()) # Load the session session = yield self.__loadSession(bare_jid, device) # If no session exists, this will be a PreKeyMessage pre_key = session == None # Create a new session if pre_key: # Get the required bundle bundle = bundles[bare_jid][device] # Build the session session_and_init_data = self.__state.getSharedSecretActive(bundle) session = session_and_init_data["dr"] session_init_data = session_and_init_data["to_other"] # Encrypt the AES GCM key and tag encrypted_data = session.encryptMessage(aes_gcm_key + aes_gcm_tag) # Store the new/changed session yield self.__storeSession(bare_jid, device, session) # Serialize the data into a simple message format serialized = self.__backend.WireFormat.messageToWire( encrypted_data["ciphertext"], encrypted_data["header"], { "DoubleRatchet": encrypted_data["additional"] } ) # If it is a PreKeyMessage, apply an additional step to the serialization. if pre_key: serialized = self.__backend.WireFormat.preKeyMessageToWire( session_init_data, serialized, { "DoubleRatchet": encrypted_data["additional"] } ) # Add the final encrypted and serialized data. 
encrypted_keys[bare_jid][device] = { "data" : serialized, "pre_key" : pre_key } promise.returnValue({ "iv" : aes_gcm_iv, "sid" : self.__my_device_id, "keys" : encrypted_keys })
def unwrap_aliases(data_type): """ Convenience method to unwrap all Alias(es) from around a DataType. Args: data_type (DataType): The target to unwrap. Return: Tuple[DataType, bool]: The underlying data type and a bool indicating whether the input type had at least one alias layer. """ unwrapped_alias = False while is_alias(data_type): unwrapped_alias = True data_type = data_type.data_type return data_type, unwrapped_alias
Convenience method to unwrap all Alias(es) from around a DataType. Args: data_type (DataType): The target to unwrap. Return: Tuple[DataType, bool]: The underlying data type and a bool indicating whether the input type had at least one alias layer.
Below is the the instruction that describes the task: ### Input: Convenience method to unwrap all Alias(es) from around a DataType. Args: data_type (DataType): The target to unwrap. Return: Tuple[DataType, bool]: The underlying data type and a bool indicating whether the input type had at least one alias layer. ### Response: def unwrap_aliases(data_type): """ Convenience method to unwrap all Alias(es) from around a DataType. Args: data_type (DataType): The target to unwrap. Return: Tuple[DataType, bool]: The underlying data type and a bool indicating whether the input type had at least one alias layer. """ unwrapped_alias = False while is_alias(data_type): unwrapped_alias = True data_type = data_type.data_type return data_type, unwrapped_alias
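A minimal runnable sketch with stand-in types — Alias, Int, and is_alias are simplified here, not the library's real classes:

class Int:
    # stand-in concrete data type
    pass

class Alias:
    # stand-in alias wrapper around another data type
    def __init__(self, data_type):
        self.data_type = data_type

def is_alias(dt):
    return isinstance(dt, Alias)

def unwrap_aliases(data_type):
    unwrapped_alias = False
    while is_alias(data_type):
        unwrapped_alias = True
        data_type = data_type.data_type
    return data_type, unwrapped_alias

inner, had_alias = unwrap_aliases(Alias(Alias(Int())))
print(type(inner).__name__, had_alias)  # Int True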
def run_sim(morphology='patdemo/cells/j4a.hoc', cell_rotation=dict(x=4.99, y=-4.33, z=3.14), closest_idx=dict(x=-200., y=0., z=800.)): '''set up simple cell simulation with LFPs in the plane''' # Create cell cell = LFPy.Cell(morphology=morphology, **cell_parameters) # Align cell cell.set_rotation(**cell_rotation) # Define synapse parameters synapse_parameters = { 'idx' : cell.get_closest_idx(**closest_idx), 'e' : 0., # reversal potential 'syntype' : 'ExpSynI', # synapse type 'tau' : 0.5, # synaptic time constant 'weight' : 0.0878, # synaptic weight 'record_current' : True, # record synapse current } # Create synapse and set time of synaptic input synapse = LFPy.Synapse(cell, **synapse_parameters) synapse.set_spike_times(np.array([1.])) # Create electrode object # Run simulation, electrode object argument in cell.simulate print "running simulation..." cell.simulate(rec_imem=True,rec_isyn=True) grid_electrode = LFPy.RecExtElectrode(cell,**grid_electrode_parameters) point_electrode = LFPy.RecExtElectrode(cell,**point_electrode_parameters) grid_electrode.calc_lfp() point_electrode.calc_lfp() print "done" return cell, synapse, grid_electrode, point_electrode
set up simple cell simulation with LFPs in the plane
Below is the the instruction that describes the task: ### Input: set up simple cell simulation with LFPs in the plane ### Response: def run_sim(morphology='patdemo/cells/j4a.hoc', cell_rotation=dict(x=4.99, y=-4.33, z=3.14), closest_idx=dict(x=-200., y=0., z=800.)): '''set up simple cell simulation with LFPs in the plane''' # Create cell cell = LFPy.Cell(morphology=morphology, **cell_parameters) # Align cell cell.set_rotation(**cell_rotation) # Define synapse parameters synapse_parameters = { 'idx' : cell.get_closest_idx(**closest_idx), 'e' : 0., # reversal potential 'syntype' : 'ExpSynI', # synapse type 'tau' : 0.5, # synaptic time constant 'weight' : 0.0878, # synaptic weight 'record_current' : True, # record synapse current } # Create synapse and set time of synaptic input synapse = LFPy.Synapse(cell, **synapse_parameters) synapse.set_spike_times(np.array([1.])) # Create electrode object # Run simulation, electrode object argument in cell.simulate print "running simulation..." cell.simulate(rec_imem=True,rec_isyn=True) grid_electrode = LFPy.RecExtElectrode(cell,**grid_electrode_parameters) point_electrode = LFPy.RecExtElectrode(cell,**point_electrode_parameters) grid_electrode.calc_lfp() point_electrode.calc_lfp() print "done" return cell, synapse, grid_electrode, point_electrode
def get_parent_info(brain_or_object, endpoint=None): """Generate url information for the parent object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :returns: URL information mapping :rtype: dict """ # special case for the portal object if is_root(brain_or_object): return {} # get the parent object parent = get_parent(brain_or_object) portal_type = get_portal_type(parent) resource = portal_type_to_resource(portal_type) # fall back if no endpoint specified if endpoint is None: endpoint = get_endpoint(parent) return { "parent_id": get_id(parent), "parent_uid": get_uid(parent), "parent_url": url_for(endpoint, resource=resource, uid=get_uid(parent)) }
Generate url information for the parent object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :returns: URL information mapping :rtype: dict
Below is the the instruction that describes the task: ### Input: Generate url information for the parent object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :returns: URL information mapping :rtype: dict ### Response: def get_parent_info(brain_or_object, endpoint=None): """Generate url information for the parent object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :returns: URL information mapping :rtype: dict """ # special case for the portal object if is_root(brain_or_object): return {} # get the parent object parent = get_parent(brain_or_object) portal_type = get_portal_type(parent) resource = portal_type_to_resource(portal_type) # fall back if no endpoint specified if endpoint is None: endpoint = get_endpoint(parent) return { "parent_id": get_id(parent), "parent_uid": get_uid(parent), "parent_url": url_for(endpoint, resource=resource, uid=get_uid(parent)) }
def get_comments(self): """Gets all comments. return: (osid.commenting.CommentList) - a list of comments raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* """ # Implemented from template for # osid.resource.ResourceLookupSession.get_resources # NOTE: This implementation currently ignores plenary view collection = JSONClientValidated('commenting', collection='Comment', runtime=self._runtime) result = collection.find(self._view_filter()).sort('_id', DESCENDING) return objects.CommentList(result, runtime=self._runtime, proxy=self._proxy)
Gets all comments. return: (osid.commenting.CommentList) - a list of comments raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*
Below is the the instruction that describes the task: ### Input: Gets all comments. return: (osid.commenting.CommentList) - a list of comments raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* ### Response: def get_comments(self): """Gets all comments. return: (osid.commenting.CommentList) - a list of comments raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* """ # Implemented from template for # osid.resource.ResourceLookupSession.get_resources # NOTE: This implementation currently ignores plenary view collection = JSONClientValidated('commenting', collection='Comment', runtime=self._runtime) result = collection.find(self._view_filter()).sort('_id', DESCENDING) return objects.CommentList(result, runtime=self._runtime, proxy=self._proxy)
def _parseExpression(self, src, returnList=False): """ expr : term [ operator term ]* ; """ src, term = self._parseExpressionTerm(src) operator = None while src[:1] not in ('', ';', '{', '}', '[', ']', ')'): for operator in self.ExpressionOperators: if src.startswith(operator): src = src[len(operator):] break else: operator = ' ' src, term2 = self._parseExpressionTerm(src.lstrip()) if term2 is NotImplemented: break else: term = self.cssBuilder.combineTerms(term, operator, term2) if operator is None and returnList: term = self.cssBuilder.combineTerms(term, None, None) return src, term else: return src, term
expr : term [ operator term ]* ;
Below is the the instruction that describes the task: ### Input: expr : term [ operator term ]* ; ### Response: def _parseExpression(self, src, returnList=False): """ expr : term [ operator term ]* ; """ src, term = self._parseExpressionTerm(src) operator = None while src[:1] not in ('', ';', '{', '}', '[', ']', ')'): for operator in self.ExpressionOperators: if src.startswith(operator): src = src[len(operator):] break else: operator = ' ' src, term2 = self._parseExpressionTerm(src.lstrip()) if term2 is NotImplemented: break else: term = self.cssBuilder.combineTerms(term, operator, term2) if operator is None and returnList: term = self.cssBuilder.combineTerms(term, None, None) return src, term else: return src, term
def deployment_check_existence(name, resource_group, **kwargs): ''' .. versionadded:: 2019.2.0 Check the existence of a deployment. :param name: The name of the deployment to query. :param resource_group: The resource group name assigned to the deployment. CLI Example: .. code-block:: bash salt-call azurearm_resource.deployment_check_existence testdeploy testgroup ''' result = False resconn = __utils__['azurearm.get_client']('resource', **kwargs) try: result = resconn.deployments.check_existence( deployment_name=name, resource_group_name=resource_group ) except CloudError as exc: __utils__['azurearm.log_cloud_error']('resource', str(exc), **kwargs) return result
.. versionadded:: 2019.2.0 Check the existence of a deployment. :param name: The name of the deployment to query. :param resource_group: The resource group name assigned to the deployment. CLI Example: .. code-block:: bash salt-call azurearm_resource.deployment_check_existence testdeploy testgroup
Below is the the instruction that describes the task: ### Input: .. versionadded:: 2019.2.0 Check the existence of a deployment. :param name: The name of the deployment to query. :param resource_group: The resource group name assigned to the deployment. CLI Example: .. code-block:: bash salt-call azurearm_resource.deployment_check_existence testdeploy testgroup ### Response: def deployment_check_existence(name, resource_group, **kwargs): ''' .. versionadded:: 2019.2.0 Check the existence of a deployment. :param name: The name of the deployment to query. :param resource_group: The resource group name assigned to the deployment. CLI Example: .. code-block:: bash salt-call azurearm_resource.deployment_check_existence testdeploy testgroup ''' result = False resconn = __utils__['azurearm.get_client']('resource', **kwargs) try: result = resconn.deployments.check_existence( deployment_name=name, resource_group_name=resource_group ) except CloudError as exc: __utils__['azurearm.log_cloud_error']('resource', str(exc), **kwargs) return result
def tags( self): """*The list of tags associated with this taskpaper object* **Usage:** .. project and task objects can have associated tags. To get a list of tags assigned to an object use: .. code-block:: python projectTag = aProject.tags taskTags = aTasks.tags print projectTag > ['flag', 'home(bathroom)'] """ tags = [] regex = re.compile(r'@[^@]*', re.S) if self.meta["tagString"]: matchList = regex.findall(self.meta["tagString"]) for m in matchList: tags.append(m.strip().replace("@", "")) return tags
*The list of tags associated with this taskpaper object* **Usage:** .. project and task objects can have associated tags. To get a list of tags assigned to an object use: .. code-block:: python projectTag = aProject.tags taskTags = aTasks.tags print projectTag > ['flag', 'home(bathroom)']
Below is the the instruction that describes the task: ### Input: *The list of tags associated with this taskpaper object* **Usage:** .. project and task objects can have associated tags. To get a list of tags assigned to an object use: .. code-block:: python projectTag = aProject.tags taskTags = aTasks.tags print projectTag > ['flag', 'home(bathroom)'] ### Response: def tags( self): """*The list of tags associated with this taskpaper object* **Usage:** .. project and task objects can have associated tags. To get a list of tags assigned to an object use: .. code-block:: python projectTag = aProject.tags taskTags = aTasks.tags print projectTag > ['flag', 'home(bathroom)'] """ tags = [] regex = re.compile(r'@[^@]*', re.S) if self.meta["tagString"]: matchList = regex.findall(self.meta["tagString"]) for m in matchList: tags.append(m.strip().replace("@", "")) return tags
def dcm(self, dcm): """ Set the DCM :param dcm: Matrix3 """ assert(isinstance(dcm, Matrix3)) self._dcm = dcm.copy() # mark other representations as outdated, will get generated on next # read self._q = None self._euler = None
Set the DCM :param dcm: Matrix3
Below is the the instruction that describes the task: ### Input: Set the DCM :param dcm: Matrix3 ### Response: def dcm(self, dcm): """ Set the DCM :param dcm: Matrix3 """ assert(isinstance(dcm, Matrix3)) self._dcm = dcm.copy() # mark other representations as outdated, will get generated on next # read self._q = None self._euler = None
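The setter above follows a cache-invalidation pattern: store one representation, mark the derived ones stale, and rebuild them lazily on the next read. A generic sketch of that pattern, with the Matrix3 type check dropped to keep it self-contained:

class Attitude:
    def __init__(self):
        self._dcm = None
        self._q = None      # derived representation, rebuilt lazily
        self._euler = None  # derived representation, rebuilt lazily

    @property
    def dcm(self):
        return self._dcm

    @dcm.setter
    def dcm(self, value):
        self._dcm = value
        # invalidate derived representations; recompute on next access
        self._q = None
        self._euler = None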