def wr_xlsx(self, fout_xlsx, nts):
    wr_xlsx(fout_xlsx, nts, prt_flds=self.prt_flds, fld2col_widths=self.fld2col_widths)
Write specified namedtuples into an Excel spreadsheet.
def generateDiff(self, oldWarnings, newWarnings):
    diffWarnings = {}
    for modulename in newWarnings:
        diffInModule = (
            newWarnings[modulename] - oldWarnings.get(modulename, set()))
        if diffInModule:
            diffWarnings[modulename] = ...
Generate diff between given two lists of warnings. @param oldWarnings: parsed old warnings @param newWarnings: parsed new warnings @return: a dict object of diff
def put(self, request, bot_id, id, format=None):
    return super(KikBotDetail, self).put(request, bot_id, id, format)
Update existing KikBot --- serializer: KikBotUpdateSerializer responseMessages: - code: 401 message: Not authenticated - code: 400 message: Not valid request
def calculate_size(name, items):
    data_size = 0
    data_size += calculate_size_str(name)
    data_size += INT_SIZE_IN_BYTES
    for items_item in items:
        data_size += calculate_size_data(items_item)
    return data_size
Calculates the request payload size
def load(config_path: str):
    ext = os.path.splitext(config_path)[1]
    if ext in ('.yaml', '.yml'):
        _ = load_yaml_configuration(config_path, translator=PipelineTranslator())
    elif ext == '.py':
        _ = load_python_configuration(config_path)
    else:
        raise ValueError('unknown configuration extension: %r' % ext)
Load a configuration and keep it alive for the given context :param config_path: path to a configuration file
def _format_fields(self, fields, title_width=12):
    out = []
    header = self.__head
    for title, content in fields:
        if len(content.splitlines()) > 1:
            title = header(title + ":") + "\n"
        else:
            title = header((title + ":").ljust(title_width))
        ...
Formats a list of fields for display. Parameters ---------- fields : list A list of 2-tuples: (field_title, field_content) title_width : int How many characters to pad titles to. Default 12.
def state_call(self, addr, *args, **kwargs): state = kwargs.pop(, None) if isinstance(addr, SootAddressDescriptor): ret_addr = kwargs.pop(, state.addr if state else SootAddressTerminator()) cc = kwargs.pop(, SimCCSoot(self.arch)) ...
Create a native or a Java call state. :param addr: Soot or native addr of the invoke target. :param args: List of SootArgument values.
def auto2unicode(text): _all_unique_encodes_, _all_common_encodes_ = _get_unique_common_encodes() unique_chars = _get_unique_ch(text, _all_common_encodes_) clen = len(_all_common_encodes_) msg = "Sorry, couldnNeed more words to find unique encode out side of %d common compound...
This function tries to identify encode in available encodings. If it finds, then it will convert text into unicode string. Author : Arulalan.T 04.08.2014
def wrap(name, project, sprefix=None, python=sys.executable): env = __create_jinja_env() template = env.get_template() name_absolute = os.path.abspath(name) real_f = name_absolute + PROJECT_BIN_F_EXT if sprefix: run(uchroot()["/bin/mv", strip_path_prefix(name_abso...
Wrap the binary :name: with the runtime extension of the project. This module generates a python tool that replaces :name: The function in runner only accepts the replaced binaries name as argument. We use the cloudpickle package to perform the serialization, make sure :runner: can be serialized wi...
def calacs(input_file, exec_path=None, time_stamps=False, temp_files=False, verbose=False, debug=False, quiet=False, single_core=False, exe_args=None): if exec_path: if not os.path.exists(exec_path): raise OSError( + exec_path) call_list = [exec_path] else...
Run the calacs.e executable as from the shell. By default this will run the calacs given by 'calacs.e'. Parameters ---------- input_file : str Name of input file. exec_path : str, optional The complete path to a calacs executable. time_stamps : bool, optional Set to T...
def json_from_cov_df(df, threshold=.5, gain=2., n=None, indent=1):
    nodes, edges = graph_from_cov_df(df=df, threshold=threshold, gain=gain, n=n)
    return json.dumps({'nodes': nodes, 'links': edges}, indent=indent)
Produce a json string describing the graph (list of edges) from a square auto-correlation/covariance matrix { "nodes": [{"group": 1, "name": "the"}, {"group": 1, "name": "and"}, {"group": 1, "name": "our"}, {"group": 2, "name": "that"},... "links": [{"sour...
def property_present(properties, admin_username=, admin_password=, host=None, **kwargs): ret = {: host, : {: host}, : True, : {}, : } if host is None: output = __salt__[]() stdout = output[] reg = re.compile(r) for line in stdout...
properties = {}
def ipopo_factories(self):
    try:
        with use_ipopo(self.__context) as ipopo:
            return {
                name: ipopo.get_factory_details(name)
                for name in ipopo.get_factories()
            }
    except BundleException:
        return...
List of iPOPO factories
def amax_files():
    return [os.path.join(dp, f)
            for dp, dn, filenames in os.walk(CACHE_FOLDER)
            for f in filenames
            if os.path.splitext(f)[1].lower() == '.am']
Return all annual maximum flow (`*.am`) files in cache folder and sub folders. :return: List of file paths :rtype: list
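The scan above is a recursive `os.walk` filter on file extension; a minimal standalone sketch (the helper name and `root` argument are illustrative, not from the source — the original hard-codes `CACHE_FOLDER`):

```python
import os

def files_with_ext(root, ext='.am'):
    # Walk the tree rooted at `root` and keep paths whose extension
    # matches `ext` case-insensitively, as the cache scan above does.
    return [os.path.join(dirpath, name)
            for dirpath, _dirnames, filenames in os.walk(root)
            for name in filenames
            if os.path.splitext(name)[1].lower() == ext]
```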
def writeImageToFile(self, filename, _format="PNG"):
    filename = self.device.substituteDeviceTemplate(filename)
    if not os.path.isabs(filename):
        raise ValueError("writeImageToFile expects an absolute path (filename=%s)" % filename)
    if os.path.isdir(filename):
        filena...
Write the View image to the specified filename in the specified format. @type filename: str @param filename: Absolute path and optional filename receiving the image. If this points to a directory, then the filename is determined by this View unique ID and ...
def add_comment(self, post=None, name=None, email=None, pub_date=None, website=None, body=None):
    if post is None:
        if not self.posts:
            raise CommandError("Cannot add comments without posts")
        post = self.posts[-1]
    post["comments"].append(...
Adds a comment to the post provided.
def _get_enterprise_admin_users_batch(self, start, end): LOGGER.info(, start, end) return User.objects.filter(groups__name=ENTERPRISE_DATA_API_ACCESS_GROUP, is_staff=False)[start:end]
Returns a batched queryset of User objects.
def _long_to_bytes(n, length, byteorder):
    if byteorder == 'little':
        indexes = range(length)
    else:
        indexes = reversed(range(length))
    return bytearray((n >> i * 8) & 0xff for i in indexes)
Convert a long to a bytestring. For use in Python versions prior to 3.2. Source: http://bugs.python.org/issue16580#msg177208
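The pair above is the standard shift-and-mask fallback for `int.to_bytes`; a standalone sketch (with the `'little'` branch filled in as the shift logic implies — index 0 shifts by 0 bits, i.e. emits the least significant byte first):

```python
def long_to_bytes(n, length, byteorder):
    # Little-endian emits the least significant byte first (shift by 0 first);
    # big-endian reverses the index order so the most significant byte leads.
    if byteorder == 'little':
        indexes = range(length)
    else:
        indexes = reversed(range(length))
    return bytearray((n >> i * 8) & 0xff for i in indexes)

assert long_to_bytes(0x0102, 2, 'big') == bytearray(b'\x01\x02')
assert long_to_bytes(0x0102, 2, 'little') == bytearray(b'\x02\x01')
```

On Python 3.2+ the result matches the built-in: `bytes(long_to_bytes(258, 4, 'big')) == (258).to_bytes(4, 'big')`.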
def format_args(options): args = list() for key, value in options.items(): key = key.replace(, ) if value is True: args.append(.format(key=key)) elif is_sequence(value): values = [str(val) for val...
Convert hash/key options into arguments list
def db_get_map(self, table, record, column):
    val = self.db_get_val(table, record, column)
    assert isinstance(val, dict)
    return val
Gets dict type value of 'column' in 'record' in 'table'. This method is corresponding to the following ovs-vsctl command:: $ ovs-vsctl get TBL REC COL
def _unregister_lookup(cls, lookup, lookup_name=None):
    if lookup_name is None:
        lookup_name = lookup.lookup_name
    del cls.class_lookups[lookup_name]
Remove given lookup from cls lookups. For use in tests only as it's not thread-safe.
def predictions(self, image, strict=True, return_details=False):
    in_bounds = self.in_bounds(image)
    assert not strict or in_bounds
    self._total_prediction_calls += 1
    predictions = self.__model.predictions(image)
    is_adversarial, is_best, distance = self.__is_adversarial(...
Interface to model.predictions for attacks. Parameters ---------- image : `numpy.ndarray` Single input with shape as expected by the model (without the batch dimension). strict : bool Controls if the bounds for the pixel values should be checked.
def randomtable(numflds=5, numrows=100, wait=0, seed=None):
    return RandomTable(numflds, numrows, wait=wait, seed=seed)
Construct a table with random numerical data. Use `numflds` and `numrows` to specify the number of fields and rows respectively. Set `wait` to a float greater than zero to simulate a delay on each row generation (number of seconds per row). E.g.:: >>> import petl as etl >>> table = etl.rand...
def post_revert_tags(self, post_id, history_id): params = {: post_id, : history_id} return self._get(, params, )
Revert a post to a previous set of tags (Requires login) (UNTESTED). Parameters: post_id (int): The post id number to update. history_id (int): The id number of the tag history.
def configure_modrpaf(self):
    r = self.local_renderer
    if r.env.modrpaf_enabled:
        self.install_packages()
        self.enable_mod()
    else:
        if self.last_manifest.modrpaf_enabled:
            self.disable_mod()
Installs the mod-rpaf Apache module. https://github.com/gnif/mod_rpaf
def pretty_tree(x, kids, show): (MID, END, CONT, LAST, ROOT) = (u, u, u, u, u) def rec(x, indent, sym): line = indent + sym + show(x) xs = kids(x) if len(xs) == 0: return line else: if sym == MID: next_indent = indent + CONT ...
(a, (a -> list(a)), (a -> str)) -> str Returns a pseudographic tree representation of x similar to the tree command in Unix.
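The original's box-drawing string literals were lost in extraction; a hypothetical standalone sketch of the same idea, with assumed `tree`-style glyphs (`kids(x)` yields children, `show(x)` renders a node):

```python
def pretty_tree(x, kids, show):
    lines = []

    def rec(node, prefix, child_prefix):
        lines.append(prefix + show(node))
        children = kids(node)
        for i, child in enumerate(children):
            last = i == len(children) - 1
            # The last child gets an elbow; siblings above it get a tee and
            # keep a vertical rule in the indent for their own descendants.
            rec(child,
                child_prefix + ('└── ' if last else '├── '),
                child_prefix + ('    ' if last else '│   '))

    rec(x, '', '')
    return '\n'.join(lines)

tree = ('a', [('b', []), ('c', [('d', [])])])
print(pretty_tree(tree, kids=lambda n: n[1], show=lambda n: n[0]))
```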
def getConfig(self):
    config = {}
    config["name"] = self.city
    config["intervals"] = self.__intervals
    config["last_date"] = self.__lastDay
    config["excludedUsers"] = []
    config["excludedLocations"] = []
    for e in self.__excludedUsers:
        config["exclu...
Return the configuration of the city. :return: configuration of the city. :rtype: dict.
def sid(self):
    pnames = list(self.terms) + list(self.dterms)
    pnames.sort()
    return (self.__class__,
            tuple([(k, id(self.__dict__[k])) for k in pnames if k in self.__dict__]))
Semantic id.
def __live_receivers(signal):
    with __lock:
        __purge()
        receivers = [funcref() for funcref in __receivers[signal]]
    return receivers
Return all signal handlers that are currently still alive for the input `signal`. Args: signal: A signal name. Returns: A list of callable receivers for the input signal.
def is_subset(self, other):
    if isinstance(other, _basebag):
        for elem, count in self.counts():
            if not count <= other.count(elem):
                return False
    else:
        for elem in self:
            if self.count(elem) > 1 or elem not in other:
                return False
    return True
Check that every element in self has a count <= its count in other. Args: other (Set)
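The same multiset-subset check can be sketched against `collections.Counter` (a stand-in for the library's `_basebag`, not the actual class):

```python
from collections import Counter

def is_subset(a: Counter, b: Counter) -> bool:
    # a is a sub-multiset of b iff every element of a occurs
    # at least as many times in b.
    return all(count <= b[elem] for elem, count in a.items())

assert is_subset(Counter('aab'), Counter('aaab'))
assert not is_subset(Counter('aab'), Counter('ab'))
```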
def get_range(self, process_err_pct=0.05):
    vel = self.vel + 5 * randn()
    alt = self.alt + 10 * randn()
    self.pos += vel * self.dt
    err = (self.pos * process_err_pct) * randn()
    slant_range = (self.pos**2 + alt**2)**.5 + err
    return slant_range
Returns slant range to the object. Call once for each new measurement at dt time from last call.
def cleanup(self):
    for instance in self.context:
        del instance
    for plugin in self.plugins:
        del plugin
Forcefully delete objects from memory In an ideal world, this shouldn't be necessary. Garbage collection guarantees that anything without reference is automatically removed. However, because this application is designed to be run multiple times from the same interpreter process...
def _delete(self, *args, **kwargs):
    response = requests.delete(*args, **kwargs)
    response.raise_for_status()
A wrapper for deleting things :returns: The response of your delete :rtype: dict
def _get_output_nodes(self, output_path, error_path): from aiida.orm.data.array.trajectory import TrajectoryData import re state = None step = None scale = None with open(output_path) as f: lines = [x.strip() for x in f.readlines()] result_d...
Extracts output nodes from the standard output and standard error files.
def reset_api_secret(context, id, etag): result = feeder.reset_api_secret(context, id=id, etag=etag) utils.format_output(result, context.format, headers=[, , ])
reset_api_secret(context, id, etag) Reset a Feeder api_secret. >>> dcictl feeder-reset-api-secret [OPTIONS] :param string id: ID of the feeder [required] :param string etag: Entity tag of the feeder resource [required]
def run_individual(sim_var, reference, neuroml_file, nml_doc, still_included, generate_dir, target, sim_time, dt, simulator, cl...
Run an individual simulation. The candidate data has been flattened into the sim_var dict. The sim_var dict contains parameter:value key value pairs, which are applied to the model before it is simulated.
def apply_filter_rule(self, _filter, query=, way=): if isinstance(_filter, zobjects.FilterRule): _filter = _filter.name content = { : { : {: _filter} }, : {: query} } if way == : ids = self.request(...
:param _filter: a zobjects.FilterRule or the filter name :param query: on what the filter will be applied :param way: string describing if the filter is for 'in' or 'out' messages :returns: list of impacted message ids
def unset_default_org(self):
    for org in self.list_orgs():
        org_config = self.get_org(org)
        if org_config.default:
            del org_config.config["default"]
            self.set_org(org_config)
Unset the default org for tasks
def close(self):
    with self.lock:
        if self.is_closed:
            return
        self.is_closed = True
        log.debug("Closing connection (%s) to %s", id(self), self.endpoint)
        reactor.callFromThread(self.connector.disconnect)
        log.debug("Closed socket to %s", self.e...
Disconnect and error-out all requests.
def find_frame_urls(self, site, frametype, gpsstart, gpsend, match=None, urltype=None, on_gaps="warn"):
    if on_gaps not in ("warn", "error", "ignore"):
        raise ValueError("on_gaps must be 'warn', 'error', or 'ignore'.")
    url = ("%s/gwf/%s/%s/%s,%s"
           % (_url_prefix, site...
Find the framefiles for the given type in the [start, end) interval frame @param site: single-character name of site to match @param frametype: name of frametype to match @param gpsstart: integer GPS start time of query @param gpsend: ...
def get_atom_sequence_to_rosetta_map(self):
    if not self.rosetta_to_atom_sequence_maps and self.rosetta_sequences:
        raise Exception()
    atom_sequence_to_rosetta_mapping = {}
    for chain_id, mapping in self.rosetta_to_atom_sequence_maps.iteritems():
        chain_mapping...
Uses the Rosetta->ATOM injective map to construct an injective mapping from ATOM->Rosetta. We do not extend the injection to include ATOM residues which have no corresponding Rosetta residue. e.g. atom_sequence_to_rosetta_mapping[c].map.get('A 45 ') will return None if there is no corresponding...
def encode_request(self, fields, files): parts = [] boundary = self.boundary for k, values in fields: if not isinstance(values, (list, tuple)): values = [values] for v in values: parts.extend(( ...
Encode fields and files for posting to an HTTP server. :param fields: The fields to send as a list of (fieldname, value) tuples. :param files: The files to send as a list of (fieldname, filename, file_bytes) tuple.
def dumps(obj, *args, **kwargs):
    kwargs['default'] = object2dict
    return json.dumps(obj, *args, **kwargs)
Serialize an object to a string Basic Usage: >>> import simplekit.objson >>> obj = {'name':'wendy'} >>> print simplekit.objson.dumps(obj) :param obj: the object to dump :param args: Optional arguments that :func:`json.dumps` takes. :param kwargs: Keys arguments that :py:func:`json....
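A sketch of the `default`-hook pattern the wrapper relies on: objects that `json` cannot serialize natively are routed through a converter function. The `object2dict` here is a hypothetical `vars`-based stand-in, not the library's converter:

```python
import json

def object2dict(obj):
    # Fall back to the instance __dict__ for plain objects.
    return vars(obj)

class User:
    def __init__(self, name):
        self.name = name

# json.dumps calls `default` for any value it cannot serialize itself.
print(json.dumps(User('wendy'), default=object2dict))  # → {"name": "wendy"}
```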
def _parse_relationships(self, relationships): link = if not isinstance(relationships, dict): self.fail( s resource linkage section.dataRelationship key %s MUST be a hash & contain a `data` field compliant with the spec\ % key, lin...
Ensure compliance with the spec's relationships section Specifically, the relationships object of the single resource object. For modifications we only support relationships via the `data` key referred to as Resource Linkage. :param relationships: dict JSON API relationship...
def chopurl(url):
    ret = {}
    if url.find('://') == -1:
        raise s_exc.BadUrl('missing ://: [{}]'.format(url))
    scheme, remain = url.split('://', 1)
    ret['scheme'] = scheme.lower()
    if remain.find('?') != -1:
        query = {}
        remain, queryrem = remain.split('?', 1)
        for qkey in queryrem.split('&'):
            qval ...
A sane "stand alone" url parser. Example: info = chopurl(url)
def color_palette(name=None, n_colors=6, desat=None): seaborn_palettes = dict( deep=[" " muted=[" " pastel=[" " bright=[" " dark=[" " colorblind=[" " ) if name...
Return a list of colors defining a color palette. Available seaborn palette names: deep, muted, bright, pastel, dark, colorblind Other options: hls, husl, any matplotlib palette Matplotlib palettes can be specified as reversed palettes by appending "_r" to the name or as dark palettes ...
def _get_cpu_virtualization(self):
    try:
        cpu_vt = self._get_bios_setting('ProcVirtualization')
    except exception.IloCommandNotSupportedError:
        return False
    if cpu_vt == 'Enabled':
        vt_status = True
    else:
        vt_status = False
    return vt_status
Get CPU virtualization status.
def getGroundResolution(self, latitude, level):
    latitude = self.clipValue(latitude, self.min_lat, self.max_lat)
    mapSize = self.getMapDimensionsByZoomLevel(level)
    return math.cos(latitude * math.pi / 180) * 2 * math.pi * self.earth_radius / mapSize
Returns the ground resolution based on latitude and zoom level.
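A standalone sketch of the Web Mercator ground-resolution formula used above, metres per pixel at a given latitude and zoom; the earth radius and 256-px tile size are assumptions, not values from the source:

```python
import math

EARTH_RADIUS_M = 6378137  # WGS84 equatorial radius (assumed constant)

def ground_resolution(latitude_deg, zoom, tile_size=256):
    # Map width in pixels doubles with each zoom level.
    map_size = tile_size * 2 ** zoom
    # cos(lat) shrinks the metres-per-pixel figure away from the equator.
    return math.cos(math.radians(latitude_deg)) * 2 * math.pi * EARTH_RADIUS_M / map_size

# At the equator at zoom 0 (a single 256-px world tile) this is ~156543 m/px.
print(round(ground_resolution(0, 0)))
```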
def perform_import(val):
    if val is None:
        return None
    elif isinstance(val, str):
        return import_from_string(val)
    elif isinstance(val, (list, tuple)):
        return [import_from_string(item) for item in val]
    return val
If the given setting is a string import notation, then perform the necessary import or imports.
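A runnable sketch of the string-import-notation idea; `import_from_string` here is a hypothetical importlib-based stand-in for the helper the source assumes:

```python
import importlib

def import_from_string(dotted_path):
    # Split "pkg.mod.attr" into module path and attribute name, then resolve.
    module_path, _, attr = dotted_path.rpartition('.')
    return getattr(importlib.import_module(module_path), attr)

def perform_import(val):
    # Strings and lists of strings are resolved; anything else passes through.
    if val is None:
        return None
    elif isinstance(val, str):
        return import_from_string(val)
    elif isinstance(val, (list, tuple)):
        return [import_from_string(item) for item in val]
    return val

import os
assert perform_import('os.path.join') is os.path.join
```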
def cmd_adb(self, *args): self.check_requirements() self.install_platform() args = args[0] if args and args[0] == : print() print( .format(self.targetname)) sys.stderr.write(self.adb_cmd + ) else: self.bui...
Run adb from the Android SDK. Args must come after --, or use --alias to make an alias
def release(self, args):
    if not self._started:
        raise ApplicationNotStarted("BACnet stack not running - use startApp()")
    args = args.split()
    addr, obj_type, obj_inst = args[:3]
    try:
        self.write("{} {} {} outOfService False".format(addr, obj_type, o...
Set the Out_Of_Service property to False - to release the I/O point back to the controller's control. :param args: String with <addr> <type> <inst>
def write_modes_to_file(self, filename="mode.dat", plot=True, analyse=True): modes_directory = "./modes_semi_vec/" if not os.path.isdir(modes_directory): os.mkdir(modes_directory) filename = modes_directory + filename for i, mode in enumerate(self._ms.modes): ...
Writes the mode fields to a file and optionally plots them. Args: filename (str): The nominal filename to use for the saved data. The suffix will be automatically be changed to identifiy each mode number. Default is 'mode.dat' plot (bool): `True` if plo...
def _add_existing_weight(self, weight, trainable=None):
    if trainable is None:
        trainable = weight.trainable
    self.add_weight(name=weight.name, shape=weight.shape, dtype=weight.dtype,
                    trainable=trainable, getter=lambda *_, **__: weight)
Calls add_weight() to register but not create an existing weight.
def script_dir(pyobject, follow_symlinks=True):
    if getattr(sys, 'frozen', False):  # frozen by py2exe/PyInstaller/cx_Freeze
        path = abspath(sys.executable)
    else:
        path = inspect.getabsfile(pyobject)
    if follow_symlinks:
        path = realpath(path)
    return dirname(path)
Get current script's directory Args: pyobject (Any): Any Python object in the script follow_symlinks (Optional[bool]): Follow symlinks or not. Defaults to True. Returns: str: Current script's directory
def vm_monitoring(name, call=None): if call != : raise SaltCloudSystemExit( ) server, user, password = _get_xml_rpc() auth = .join([user, password]) vm_id = int(get_vm_id(kwargs={: name})) response = server.one.vm.monitoring(auth, vm_id) if response[0] is Fals...
Returns the monitoring records for a given virtual machine. A VM name must be supplied. The monitoring information returned is a list of VM elements. Each VM element contains the complete dictionary of the VM with the updated information returned by the poll action. .. versionadded:: 2016.3.0 ...
def get_njobs_in_queue(self, username=None):
    if username is None:
        username = getpass.getuser()
    njobs, process = self._get_njobs_in_queue(username=username)
    if process is not None and process.returncode != 0:
        return njobs
returns the number of jobs in the queue, probably using subprocess or shutil to call a command like 'qstat'. returns None when the number of jobs cannot be determined. Args: username: (str) the username of the jobs to count (default is to autodetect)
def trace_in_process_link(self, link_bytes):
    return tracers.InProcessLinkTracer(
        self._nsdk, self._nsdk.trace_in_process_link(link_bytes))
Creates a tracer for tracing asynchronous related processing in the same process. For more information see :meth:`create_in_process_link`. :param bytes link_bytes: An in-process link created using :meth:`create_in_process_link`. :rtype: tracers.InProcessLinkTracer .. versionadded:: 1...
def ptb_raw_data(data_path):
    train_path = os.path.join(data_path, "ptb.train.txt")
    valid_path = os.path.join(data_path, "ptb.valid.txt")
    test_path = os.path.join(data_path, "ptb.test.txt")
    word_to_id = _build_vocab(train_path)
    train_data = _file_to_word_ids(train_path, word_to_id)
    valid_data = _file_to...
Load PTB raw data from data directory "data_path". Reads PTB text files, converts strings to integer ids, and performs mini-batching of the inputs. The PTB dataset comes from Tomas Mikolov's webpage: http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz Args: data_path: string path to the direct...
def displayhtml(public_key, attrs, use_ssl=False, error=None): error_param = if error: error_param = % error if use_ssl: server = API_SSL_SERVER else: server = API_SERVER if not in attrs: attrs[] = get_languag...
Gets the HTML to display for reCAPTCHA public_key -- The public api key use_ssl -- Should the request be sent over ssl? error -- An error message to display (from RecaptchaResponse.error_code)
def _nameFromHeaderInfo(headerInfo, isDecoy, decoyTag):
    if 'name' in headerInfo:
        proteinName = headerInfo['name']
    else:
        proteinName = headerInfo['id']
    if isDecoy:
        proteinName = ''.join((decoyTag, proteinName))
    return proteinName
Generates a protein name from headerInfo. If "isDecoy" is True, the "decoyTag" is added to beginning of the generated protein name. :param headerInfo: dict, must contain a key "name" or "id" :param isDecoy: bool, determines if the "decoyTag" is added or not. :param decoyTag: str, a tag that identifies ...
def create_empty_table_serial_primary(conn, schema, table, columns=None, id_col=): r sql_str = .format(schema=schema, table=table, id_col=id_col) conn.execute(sql_str) if columns is not None: for col in columns: col_str = .format(schema=s...
r"""New database table with primary key type serial and empty columns Parameters ---------- conn : sqlalchemy connection object A valid connection to a database schema : str The database schema table : str The database table columns : list, optional Columns that ...
def to_api_repr(self):
    source_refs = [
        {
            "projectId": table.project,
            "datasetId": table.dataset_id,
            "tableId": table.table_id,
        }
        for table in self.sources
    ]
    configuration = self._configuration.to_api_r...
Generate a resource for :meth:`_begin`.
def set_asset(self, asset_id, asset_content_type=None):
    if asset_id is None:
        raise NullArgument()
    if not isinstance(asset_id, Id):
        raise InvalidArgument()
    if asset_content_type is not None and not isinstance(asset_content_type, Type):
        raise InvalidArgument()
stub
def run(self, data_loaders, workflow, max_epochs, **kwargs): assert isinstance(data_loaders, list) assert mmcv.is_list_of(workflow, tuple) assert len(data_loaders) == len(workflow) self._max_epochs = max_epochs work_dir = self.work_dir if self.work_dir is not None else ...
Start running. Args: data_loaders (list[:obj:`DataLoader`]): Dataloaders for training and validation. workflow (list[tuple]): A list of (phase, epochs) to specify the running order and epochs. E.g, [('train', 2), ('val', 1)] means running ...
def Read(self, timeout=None): timeout1 = timeout if timeout1 is not None: try: self.timer = wx.Timer(self.TaskBarIcon) self.TaskBarIcon.Bind(...
Reads the context menu :param timeout: Optional. Any value other than None indicates a non-blocking read :return:
def train_epoch(self, epoch_info, source: , interactive=True): self.train() if interactive: iterator = tqdm.tqdm(source.train_loader(), desc="Training", unit="iter", file=sys.stdout) else: iterator = source.train_loader() for batch_idx, (data, target) i...
Run a single training epoch
def from_proto(brain_param_proto): resolution = [{ "height": x.height, "width": x.width, "blackAndWhite": x.gray_scale } for x in brain_param_proto.camera_resolutions] brain_params = BrainParameters(brain_param_proto.brain_name, ...
Converts brain parameter proto to BrainParameter object. :param brain_param_proto: protobuf object. :return: BrainParameter object.
def fulltext_scan_ids(self, query_id=None, query_fc=None, preserve_order=True, indexes=None): it = self._fulltext_scan(query_id, query_fc, feature_names=False, preserve_order=preserve_order, indexes=indexes) ...
Fulltext search for identifiers. Yields an iterable of triples (score, identifier) corresponding to the search results of the fulltext search in ``query``. This will only search text indexed under the given feature named ``fname``. Note that, unless ``preserve_order`` is set to...
def read_notes_file(file_path):
    if not os.path.isfile(file_path):
        return None
    with open(file_path, 'r', encoding=_default_encoding) as f:
        return f.read()
Returns the contents of a notes file. If the notes file does not exist, None is returned
def _validate_measure_sampling(self, experiment):
    if self._shots <= 1:
        self._sample_measure = False
        return
    if hasattr(experiment.config, 'allows_measure_sampling'):
        self._sample_measure = experiment.config.allows_measure_sampling
        ...
Determine if measure sampling is allowed for an experiment Args: experiment (QobjExperiment): a qobj experiment.
def is_active(self, name):
    if name in self._plugins:
        return self._plugins[name].active
    return None
Returns True if plugin exists and is active. If plugin does not exist, it returns None :param name: plugin name :return: boolean or None
def get_items(self):
    ret = []
    l = self.xpath_ctxt.xpathEval("d:item")
    if l is not None:
        for i in l:
            ret.append(DiscoItem(self, i))
    return ret
Get the items contained in `self`. :return: the items contained. :returntype: `list` of `DiscoItem`
def _Pairs(data): keys = sorted(data) return [{: k, : data[k]} for k in keys]
dictionary -> list of pairs
def probabilities(self, choosers, alternatives, filter_tables=True): logger.debug(.format( self.name)) self.assert_fitted() if filter_tables: choosers, alternatives = self.apply_predict_filters( choosers, alternatives) if self.prediction...
Returns the probabilities for a set of choosers to choose from among a set of alternatives. Parameters ---------- choosers : pandas.DataFrame Table describing the agents making choices, e.g. households. alternatives : pandas.DataFrame Table describing the...
def get_book_progress(self, asin): kbp = self._get_api_call(, % asin) return KindleCloudReaderAPI._kbp_to_progress(kbp)
Returns the progress data available for a book. NOTE: A summary of the two progress formats can be found in the docstring for `ReadingProgress`. Args: asin: The asin of the book to be queried. Returns: A `ReadingProgress` instance corresponding to the book associated with `asin`.
def query(dataset_key, query, query_type='sql', profile='default', parameters=None, **kwargs):
    return _get_instance(profile, **kwargs).query(
        dataset_key, query, query_type=query_type, parameters=parameters, ...
Query an existing dataset :param dataset_key: Dataset identifier, in the form of owner/id or of a url :type dataset_key: str :param query: SQL or SPARQL query :type query: str :param query_type: The type of the query. Must be either 'sql' or 'sparql'. (Default value = 'sql') :type query...
def download(self, local, remote):
    self.sync(RemoteFile(remote, self.api), LocalFile(local))
Performs synchronization from a remote file to a local file. The remote path is the source and the local path is the destination.
def setup(app): app.add_config_value(, True, ) app.add_config_value(, False, ) app.add_config_value(, gallery_conf, ) app.add_stylesheet() app.connect(, generate_gallery_rst) app.connect(, embed_code_links)
Setup sphinx-gallery sphinx extension
def rmp_pixel_deg_xys(vecX, vecY, vecPrfSd, tplPngSize, varExtXmin, varExtXmax, varExtYmin, varExtYmax): vecXdgr = rmp_rng(vecX, varExtXmin, varExtXmax, varOldThrMin=0.0, varOldAbsMax=(tplPngSize[0] - 1)) vecYdgr = rmp_rng(vecY, varExtYmin, varExtYmax...
Remap x, y, sigma parameters from pixel to degree. Parameters ---------- vecX : 1D numpy array Array with possible x parameters in pixels vecY : 1D numpy array Array with possible y parameters in pixels vecPrfSd : 1D numpy array Array with possible sd parameters in pixels t...
def page(self, to=values.unset, from_=values.unset, date_sent_before=values.unset, date_sent=values.unset, date_sent_after=values.unset, page_token=values.unset, page_number=values.unset, page_size=values.unset): params = values.of({ : to, ...
Retrieve a single page of MessageInstance records from the API. Request is executed immediately :param unicode to: Filter by messages sent to this number :param unicode from_: Filter by from number :param datetime date_sent_before: Filter by date sent :param datetime date_sent: ...
def offset_random_rgb(seed, amount=1): r, g, b = seed results = [] for _ in range(amount): base_val = ((r + g + b) / 3) + 1 new_val = base_val + (random.random() * rgb_max_val / 5) ratio = new_val / base_val results.append((min(int(r*ratio), rgb_max_val), min(int(g*...
Given a seed color, generate a specified number of random colors (1 color by default) determined by a randomized offset from the seed. :param seed: :param amount: :return:
def condition(condition=None, statement=None, _else=None, **kwargs):
    result = None
    checked = False
    if condition is not None:
        checked = run(condition, **kwargs)
    if checked:
        if statement is not None:
            result = run(statement, **kwargs)
    elif _else is not None:
        ...
Run an statement if input condition is checked and return statement result. :param condition: condition to check. :type condition: str or dict :param statement: statement to process if condition is checked. :type statement: str or dict :param _else: else statement. :type _else: str or dict ...
def _build(self): flat_initial_state = nest.flatten(self._initial_state) if self._mask is not None: flat_mask = nest.flatten(self._mask) flat_learnable_state = [ _single_learnable_state(state, state_id=i, learnable=mask) for i, (state, mask) in enumerate(zip(flat_initial_sta...
Connects the module to the graph. Returns: The learnable state, which has the same type, structure and shape as the `initial_state` passed to the constructor.
def init_widget(self):
    super(AndroidFragment, self).init_widget()
    f = self.fragment
    f.setFragmentListener(f.getId())
    f.onCreateView.connect(self.on_create_view)
    f.onDestroyView.connect(self.on_destroy_view)
Initialize the underlying widget.
def update(self, spec, document, upsert=False, manipulate=False, safe=True, multi=False, callback=None, **kwargs):
    if not isinstance(spec, dict):
        raise TypeError("spec must be an instance of dict")
    if not isinstance(document, dict):
        raise TypeError("docume...
Update a document(s) in this collection. Raises :class:`TypeError` if either `spec` or `document` is not an instance of ``dict`` or `upsert` is not an instance of ``bool``. If `safe` is ``True`` then the update will be checked for errors, raising :class:`~pymongo.errors....
def _connect_signal(self, index): post_save_signal = ElasticSignal(index, ) post_save_signal.connect(post_save, sender=index.object_type) self.signals.append(post_save_signal) post_delete_signal = ElasticSignal(index, ) post_delete_signal.connect(post_delete, sender=ind...
Create signals for building indexes.
def tune(self): if self._node.get(): tune = self._node[].get() if type(tune) is collections.OrderedDict: return tune elif type(tune) is list: return tune[0] return tune return None
XML node representing tune.
def moveEvent(self, event):
    if not self.isMaximized() and not self.fullscreen_flag:
        self.window_position = self.pos()
    QMainWindow.moveEvent(self, event)
    self.sig_moved.emit(event)
Reimplement Qt method
def add_inclusion(self, role, value):
    self._add_rule(self.includes, role, value)
Include item if `role` equals `value` Attributes: role (int): Qt role to compare `value` to value (object): Value to include
def stdout_logging(loglevel=logging.INFO): logformat = "[%(asctime)s] %(levelname)s:%(name)s:%(lineno)d: %(message)s" logging.basicConfig(level=loglevel, stream=sys.stdout, format=logformat, datefmt="%Y-%m-%d %H:%M:%S")
Setup basic logging Args: loglevel (int): minimum loglevel for emitting messages
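The format string above can be demonstrated against an in-memory stream instead of `sys.stdout`, so the rendered record is inspectable; the logger name `demo` is only for illustration.

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    "[%(asctime)s] %(levelname)s:%(name)s:%(lineno)d: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S"))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("hello")
# e.g. "[2024-01-01 12:00:00] INFO:demo:12: hello"
output = stream.getvalue()
```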
def get_primitive_structure(self, tolerance=0.25, use_site_props=False, constrain_latt=None): if constrain_latt is None: constrain_latt = [] def site_label(site): if not use_site_props: return site.species_string ...
This finds a smaller unit cell than the input. Sometimes it doesn't find the smallest possible one, so this method is recursively called until it is unable to find a smaller cell. NOTE: if the tolerance is greater than 1/2 the minimum inter-site distance in the primitive cell, the algor...
def refresh(self): import re if re.match(r"^1\.2\.[0-9]*$", self.identifier): account = self.blockchain.rpc.get_objects([self.identifier])[0] else: account = self.blockchain.rpc.lookup_account_names([self.identifier])[0] if not account: raise...
Refresh/Obtain an account's data from the API server
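The regex above dispatches between an object id of the form `1.2.<n>` and a plain account name; a small sketch of that check (the function name is hypothetical):

```python
import re

OBJECT_ID_RE = re.compile(r"^1\.2\.[0-9]*$")

def looks_like_account_id(identifier):
    # True for "1.2.x" account object ids, False for account names,
    # mirroring the branch that picks get_objects vs lookup_account_names.
    return bool(OBJECT_ID_RE.match(identifier))
```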
def is_device(obj): return isinstance(obj, type) and issubclass( obj, DeviceBase) and obj.__module__ not in ('lewis.core.devices', 'lewis.devices')
Returns True if obj is a device type (derived from DeviceBase), but not defined in :mod:`lewis.core.devices` or :mod:`lewis.devices`. :param obj: Object to test. :return: True if obj is a device type.
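A self-contained sketch of the check, with a toy `DeviceBase` standing in for the real class; the excluded module names are taken from the docstring above.

```python
class DeviceBase:
    pass

def is_device(obj, excluded_modules=("lewis.core.devices", "lewis.devices")):
    # A device type: a class (not an instance), derived from DeviceBase,
    # and not defined in the framework's own modules.
    return (isinstance(obj, type)
            and issubclass(obj, DeviceBase)
            and obj.__module__ not in excluded_modules)

class Chopper(DeviceBase):
    pass
```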
def reverse(self): self._reverse = bool(self.lib.iperf_get_test_reverse(self._test)) return self._reverse
Indicates whether the test runs in the reverse direction :rtype: bool
def itemData(self, treeItem, column, role=Qt.DisplayRole): if role == Qt.DisplayRole: if column == self.COL_NODE_NAME: return treeItem.nodeName elif column == self.COL_NODE_PATH: return treeItem.nodePath elif column == self.COL_SHAPE: ...
Returns the data stored under the given role for the item.
def dBinaryRochedz(r, D, q, F): return -r[2]*(r[0]*r[0]+r[1]*r[1]+r[2]*r[2])**-1.5 -q*r[2]*((r[0]-D)*(r[0]-D)+r[1]*r[1]+r[2]*r[2])**-1.5
Computes a derivative of the potential with respect to z. @param r: relative radius vector (3 components) @param D: instantaneous separation @param q: mass ratio @param F: synchronicity parameter
def read_micromanager_metadata(fh): fh.seek(0) try: byteorder = {b'II': '<', b'MM': '>'}[fh.read(2)] except KeyError: raise ValueError() result = {} fh.seek(8) (index_header, index_offset, display_header, display_offset, comments_header, comments_offset, summary_header, summary_le...
Read MicroManager non-TIFF settings from open file and return as dict. The settings can be used to read image data without parsing the TIFF file. Raise ValueError if the file does not contain valid MicroManager metadata.
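The first read relies on the standard TIFF byte-order marks (`II` = little-endian, `MM` = big-endian) mapping to `struct` format prefixes. A sketch of that dispatch (the field read at offset 8 and the function name are illustrative):

```python
import io
import struct

BYTEORDER = {b"II": "<", b"MM": ">"}

def read_index_offset(fh):
    # Read the byte-order mark, then a 32-bit value at offset 8,
    # as the header parsing above does.
    fh.seek(0)
    byteorder = BYTEORDER[fh.read(2)]
    fh.seek(8)
    return struct.unpack(byteorder + "I", fh.read(4))[0]
```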
def extract_command(outputdir, domain_methods, text_domain, keywords, comment_tags, base_dir, project, version, msgid_bugs_address): monkeypatch_i18n()
Extracts strings into .pot files :arg domain: domains to generate strings for or 'all' for all domains :arg outputdir: output dir for .pot files; usually locale/templates/LC_MESSAGES/ :arg domain_methods: DOMAIN_METHODS setting :arg text_domain: TEXT_DOMAIN settings :arg keywords: KEYWORDS ...
def items(self, region_codes, include_subregions=False): items = OrderedDict() for code in region_codes: try: items[code] = self.region_registry[code] except KeyError: continue if include_subregions: items.updat...
Returns calendar classes for regions :param region_codes: list of ISO codes for selected regions :param include_subregions: boolean, whether subregions of selected regions should be included in the result :rtype: dict :return: dict where keys are ISO code strings and values are calen...
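The lookup loop above skips unknown codes instead of raising and preserves insertion order; a toy registry makes the behavior concrete (the registry contents are invented for illustration).

```python
from collections import OrderedDict

REGISTRY = {"FR": "FranceCalendar", "US": "UnitedStatesCalendar"}

def items(region_codes, registry=REGISTRY):
    # Unknown codes are silently skipped; order of the input is kept.
    found = OrderedDict()
    for code in region_codes:
        try:
            found[code] = registry[code]
        except KeyError:
            continue
    return found
```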
async def parallel_results(future_map: Sequence[Tuple]) -> Dict: ctx_methods = OrderedDict(future_map) fs = list(ctx_methods.values()) results = await asyncio.gather(*fs) results = { key: results[idx] for idx, key in enumerate(ctx_methods.keys()) } return results
Run parallel execution of futures and return mapping of their results to the provided keys. Just a neat shortcut around ``asyncio.gather()`` :param future_map: Keys to futures mapping, e.g.: ( ('nav', get_nav()), ('content', get_content()) ) :return: Dict with futures results mapped to keys {'nav': {1:2}, '...
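A runnable sketch of the pattern, with two toy coroutines standing in for the real context methods; `asyncio.gather` preserves argument order, which is what lets the results be zipped back to their keys.

```python
import asyncio
from collections import OrderedDict

async def parallel_results(future_map):
    # Gather the futures, then map the ordered results back to their keys.
    ctx_methods = OrderedDict(future_map)
    results = await asyncio.gather(*ctx_methods.values())
    return {key: results[idx] for idx, key in enumerate(ctx_methods)}

async def get_nav():
    return {1: 2}

async def get_content():
    return "body"

result = asyncio.run(parallel_results((("nav", get_nav()),
                                       ("content", get_content()))))
```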