def calcAbsSegCoords(self): from .. import sim p3dsoma = self.getSomaPos() pop = self.tags[] morphSegCoords = sim.net.pops[pop]._morphSegCoords self._segCoords = {} p3dsoma = p3dsoma[np.newaxis].T self._segCoords[] = p3dsoma + morphSegCoords[...
Calculate absolute seg coords by translating the relative seg coords -- used for LFP calc
def _set_pwm(self, raw_values):
    for i in range(len(self._pins)):
        self._pi.set_PWM_dutycycle(self._pins[i], raw_values[i])
Set PWM values on the controlled pins. :param raw_values: Raw values to set (0-255).
def _init_vocab(self, token_generator, add_reserved_tokens=True): self._id_to_token = {} non_reserved_start_index = 0 if add_reserved_tokens: self._id_to_token.update(enumerate(RESERVED_TOKENS)) non_reserved_start_index = len(RESERVED_TOKENS) self._id_to_token.update( enumera...
Initialize vocabulary with tokens from token_generator.
def typewrite(message, interval=0.0, pause=None, _pause=True):
    interval = float(interval)
    _failSafeCheck()
    for c in message:
        if len(c) > 1:
            c = c.lower()
        press(c, _pause=False)
        time.sleep(interval)
        _failSafeCheck()
    _autoPause(pause, _pause)
Performs a keyboard key press down, followed by a release, for each of the characters in message. The message argument can also be list of strings, in which case any valid keyboard name can be used. Since this performs a sequence of keyboard presses and does not hold down keys, it cannot be used t...
def plotConvergenceByDistantConnectionChance(results, featureRange, columnRange, longDistanceConnectionsRange, numTrials): convergence = numpy.zeros((len(featureRange), len(longDistanceConnectionsRange), len(columnRange))) for r in results: print longDistanceConnectionsRange.index(r["lon...
Plots the convergence graph: iterations vs number of columns. Each curve shows the convergence for a given number of unique features.
def divide_separate_words(string_matrix: List[List[str]]) -> List[List[str]]:
    new_X = []
    for sentence in string_matrix:
        data_row = []
        for word in sentence:
            if " " in word:
                data_row += word.split()
            else:
                data_row.append(word)
        new_X.append(data_row)
    return new_X
As part of processing, some words obviously need to be separated. :param string_matrix: a data matrix: a list wrapping a list of strings, with each sublist being a sentence. :return: >>> divide_separate_words([['ita vero'], ['quid', 'est', 'veritas']]) [['ita', 'vero'], ['quid', 'est', 'veritas']]
def write_skills_data(self, data=None):
    data = data or self.skills_data
    if skills_data_hash(data) != self.skills_data_hash:
        write_skills_data(data)
        self.skills_data_hash = skills_data_hash(data)
Write skills data if it has been modified (detected by comparing hashes).
def _get_expanded_active_specs(specs): _filter_active(constants.CONFIG_BUNDLES_KEY, specs) _filter_active(, specs) _expand_libs_in_apps(specs) _filter_active(, specs) _filter_active(, specs) _add_active_assets(specs)
This function removes any unnecessary bundles, apps, libs, and services that aren't needed by the activated_bundles. It also expands inside specs.apps.depends.libs all libs that are needed indirectly by each app
def destroy(self, uuid): args = { : uuid, } self._domain_action_chk.check(args) self._client.sync(, args)
Destroy a kvm domain by uuid :param uuid: uuid of the kvm container (same as the one used in create) :return:
def remote_pdb_handler(signum, frame): try: from remote_pdb import RemotePdb rdb = RemotePdb(host="127.0.0.1", port=0) rdb.set_trace(frame=frame) except ImportError: log.warning( "remote_pdb unavailable. Please install remote_pdb to " "allow remote ...
Handler to drop us into a remote debugger upon receiving SIGUSR1
def get_job_collection(self, cloud_service_id, job_collection_id): _validate_not_none(, cloud_service_id) _validate_not_none(, job_collection_id) path = self._get_job_collection_path( cloud_service_id, job_collection_id) return self._perform_get(path, Resource)
The Get Job Collection operation gets the details of a job collection. cloud_service_id: The cloud service id job_collection_id: Name of the job collection.
def bpopmax(self, timeout=0):
    res = self.database.bzpopmax(self.key, timeout)
    if res is not None:
        return (res[1], res[2])
Atomically remove the highest-scoring item from the set, blocking until an item becomes available or timeout is reached (0 for no timeout, default). Returns a 2-tuple of (item, score).
def from_http_status(status_code, message, **kwargs):
    error_class = exception_class_for_http_status(status_code)
    error = error_class(message, **kwargs)
    if error.code is None:
        error.code = status_code
    return error
Create a :class:`GoogleAPICallError` from an HTTP status code. Args: status_code (int): The HTTP status code. message (str): The exception message. kwargs: Additional arguments passed to the :class:`GoogleAPICallError` constructor. Returns: GoogleAPICallError: An in...
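The status-to-exception dispatch used by `from_http_status` can be sketched in a self-contained way. The exception classes below are illustrative stand-ins, not the real `google-api-core` hierarchy; only the dispatch-and-fallback pattern is taken from the code above.

```python
# Minimal sketch of status-code dispatch with a code fallback.
# Class names here are hypothetical, not google-api-core's.
class APICallError(Exception):
    code = None  # subclasses may pin a status code


class NotFound(APICallError):
    code = 404


class ServerError(APICallError):
    pass  # no fixed code: filled in from the response


_CLASS_BY_STATUS = {404: NotFound, 500: ServerError}


def exception_class_for_http_status(status_code):
    return _CLASS_BY_STATUS.get(status_code, APICallError)


def from_http_status(status_code, message):
    error_class = exception_class_for_http_status(status_code)
    error = error_class(message)
    if error.code is None:
        error.code = status_code  # fall back to the raw HTTP status
    return error
```

The fallback matters for classes like `ServerError` that cover several status codes: the concrete code from the response is preserved on the instance.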
def sync_status(self): status = None try: try: self.api.doi_get(self.pid.pid_value) status = PIDStatus.REGISTERED except DataCiteGoneError: status = PIDStatus.DELETED except DataCiteNoContentError: ...
Synchronize DOI status with DataCite MDS. :returns: `True` if sync was successful.
def setActions( self, actions ): self.uiActionTREE.blockSignals(True) self.uiActionTREE.setUpdatesEnabled(False) self.uiActionTREE.clear() for action in actions: self.uiActionTREE.addTopLevelItem(ActionItem(action)) self.uiActionTREE.sortByC...
Sets the list of actions that will be used for this shortcut dialog \ when editing. :param actions | [<QAction>, ..]
def set_fallback_resolution(self, x_pixels_per_inch, y_pixels_per_inch):
    cairo.cairo_surface_set_fallback_resolution(
        self._pointer, x_pixels_per_inch, y_pixels_per_inch)
    self._check_status()
Set the horizontal and vertical resolution for image fallbacks. When certain operations aren't supported natively by a backend, cairo will fallback by rendering operations to an image and then overlaying that image onto the output. For backends that are natively vector-oriented, ...
def sync_request(self, command, payload, retry=2):
    loop = asyncio.get_event_loop()
    task = loop.create_task(self.request(command, payload, retry))
    return loop.run_until_complete(task)
Request data.
def _compute_vline_scores(self): M, N, L = self.M, self.N, self.L vline_score = {} for x in range(M): laststart = [0 if (x, 0, 1, k) in self else None for k in range(L)] for y in range(N): block = [0] * (y + 1) for k in range(L): ...
Does the hard work to prepare ``vline_score``.
def put(self, filename, handle): if self.config.s3_cache_readonly: logger.info() else: timer = Timer() self.check_prerequisites() with PatchedBotoConfig(): from boto.s3.key import Key raw_key = self.get_cache_key(fi...
Upload a distribution archive to the configured Amazon S3 bucket. If the :attr:`~.Config.s3_cache_readonly` configuration option is enabled this method does nothing. :param filename: The filename of the distribution archive (a string). :param handle: A file-like object that provides ac...
async def _send_sleep(self, request: Request, stack: Stack):
    duration = stack.get_layer(lyr.Sleep).duration
    await sleep(duration)
Sleep for the amount of time specified in the Sleep layer
def load(self, shapefile=None): if shapefile: (shapeName, ext) = os.path.splitext(shapefile) self.shapeName = shapeName self.load_shp(shapeName) self.load_shx(shapeName) self.load_dbf(shapeName) if not (self.shp or self.dbf...
Opens a shapefile from a filename or file-like object. Normally this method would be called by the constructor with the file name as an argument.
def get_call_signature(fn: FunctionType, args: ArgsType, kwargs: KwargsType, debug_cache: bool = False) -> str: try: call_sig = json_encode((fn.__qualname__, args, kwargs)) except TypeError: log.critical( ...
Takes a function and its args/kwargs, and produces a string description of the function call (the call signature) suitable for use indirectly as a cache key. The string is a JSON representation. See ``make_cache_key`` for a more suitable actual cache key.
def query(query, use_sudo=True, **kwargs): func = use_sudo and run_as_root or run user = kwargs.get() or env.get() password = kwargs.get() or env.get() options = [ , , , ] if user: options.append( % quote(user)) if password: options.append( % qu...
Run a MySQL query.
def generation_time(self): entry = self._proto.commandQueueEntry if entry.HasField(): return parse_isostring(entry.generationTimeUTC) return None
The generation time as set by Yamcs. :type: :class:`~datetime.datetime`
def get_include_path(): f1 = os.path.basename(sys.argv[0]).lower() f2 = os.path.basename(sys.executable).lower() if f1 == f2 or f2 == f1 + : result = os.path.dirname(os.path.realpath(sys.executable)) else: result = os.path.dirname(os.path.realpath(__file__)) return ...
Default include path, determined using tricky sys calls.
async def post_data(self, path, data=None, headers=None, timeout=None): url = self.base_url + path _LOGGER.debug(, url) self._log_data(data, False) resp = None try: resp = await self._session.post( url, headers=headers, data=data, ...
Perform a POST request.
def fetch_replace_restriction(self, ): inter = self.get_refobjinter() restricted = self.status() is None return restricted or inter.fetch_action_restriction(self, )
Fetch whether unloading is restricted :returns: True, if unloading is restricted :rtype: :class:`bool` :raises: None
def timestring(self, pattern="%Y-%m-%d %H:%M:%S", timezone=None):
    if timezone is None:
        timezone = self.timezone
    timestamp = self.__timestamp__ - timezone
    timestamp -= LOCALTZ
    return _strftime(pattern, _gmtime(timestamp))
Returns a time string. :param pattern = "%Y-%m-%d %H:%M:%S" The format used. By default, an ISO-type format is used. The syntax here is identical to the one used by time.strftime() and time.strptime(). :param timezone = self.timezone The timezone (in sec...
def from_html_one(html_code, **kwargs):
    tables = from_html(html_code, **kwargs)
    try:
        assert len(tables) == 1
    except AssertionError:
        raise Exception("More than one <table> in provided HTML code! Use from_html instead.")
    return tables[0]
Generates a PrettyTable from a string of HTML code which contains only a single <table>
def get_corrections_dict(self, entry):
    corrections = {}
    for c in self.corrections:
        val = c.get_correction(entry)
        if val != 0:
            corrections[str(c)] = val
    return corrections
Returns the corrections applied to a particular entry. Args: entry: A ComputedEntry object. Returns: ({correction_name: value})
def change_owner(self, new_owner):
    old_owner = self.owner.organization_user
    self.owner.organization_user = new_owner
    self.owner.save()
    owner_changed.send(sender=self, old=old_owner, new=new_owner)
Changes ownership of an organization.
def add(self, pattern_txt): self.patterns[len(pattern_txt)] = pattern_txt low = 0 high = len(pattern_txt) - 1 while not pattern_txt[low]: low += 1 while not pattern_txt[high]: high -= 1 min_pattern = pattern_txt[low:high + 1] s...
Add a pattern to the list. Args: pattern_txt (str list): the pattern, as a list of lines.
def gt(self, other, axis="columns", level=None): return self._binary_op("gt", other, axis=axis, level=level)
Checks element-wise that this is greater than other. Args: other: A DataFrame or Series or scalar to compare to. axis: The axis to perform the gt over. level: The Multilevel index level to apply gt over. Returns: A new DataFrame filled with Boole...
def save_figure(self,event=None, transparent=False, dpi=600): if self.panel is not None: self.panel.save_figure(event=event, transparent=transparent, dpi=dpi)
save figure image to file
def get_account(config, environment, stage=None): if environment is None and stage: environment = get_environment(config, stage) account = None for env in config.get(, []): if env.get() == environment: account = env.get() role = env.get() username = o...
Find environment name in config object and return AWS account.
def load_amazon(): dataset_path = _load() X = _load_csv(dataset_path, ) y = X.pop().values graph = nx.Graph(nx.read_gml(os.path.join(dataset_path, ))) return Dataset(load_amazon.__doc__, X, y, normalized_mutual_info_score, graph=graph)
Amazon product co-purchasing network and ground-truth communities. Network was collected by crawling Amazon website. It is based on Customers Who Bought This Item Also Bought feature of the Amazon website. If a product i is frequently co-purchased with product j, the graph contains an undirected edge from ...
def cofold(self, strand1, strand2, temp=37.0, dangles=2, nolp=False, nogu=False, noclosinggu=False, constraints=None, canonicalbponly=False, partition=-1, pfscale=None, gquad=False): mfedotbracket cmd_args = [] cmd_kwargs = {: str(temp)} cmd_kwargs[] = dangl...
Run the RNAcofold command and retrieve the result in a dictionary. :param strand1: Strand 1 for running RNAcofold. :type strand1: coral.DNA or coral.RNA :param strand2: Strand 2 for running RNAcofold. :type strand2: coral.DNA or coral.RNA :param temp: Temperature at which to run...
def add(self): if PyFunceble.CONFIGURATION["inactive_database"]: timestamp = str(self._timestamp()) if ( "inactive_db" in PyFunceble.INTERN and PyFunceble.INTERN["file_to_test"] in PyFunceble.INTERN["in...
Save the current :code:`PyFunceble.CONFIGURATION['to_test']` under the current timestamp.
def _AbortJoin(self, timeout=None): for pid, process in iter(self._processes_per_pid.items()): logger.debug(.format( process.name, pid)) process.join(timeout=timeout) if not process.is_alive(): logger.debug(.format( process.name, pid))
Aborts all registered processes by joining with the parent process. Args: timeout (int): number of seconds to wait for processes to join, where None represents no timeout.
def calc_inbag(n_samples, forest):
    if not forest.bootstrap:
        e_s = "Cannot calculate the inbag from a forest that has "
        e_s += "bootstrap=False"
        raise ValueError(e_s)
    n_trees = forest.n_estimators
    inbag = np.zeros((n_samples, n_trees))
    sample_idx = []
    for t_idx in range...
Derive samples used to create trees in scikit-learn RandomForest objects. Recovers the samples in each tree from the random state of that tree using :func:`forest._generate_sample_indices`. Parameters ---------- n_samples : int The number of samples used to fit the scikit-learn RandomFores...
def _get_ID2position_mapper(self, position_mapper): def num_parser(x, order): i, j = unravel_index(int(x - 1), self.shape, order=order) return (self.row_labels[i], self.col_labels[j]) if hasattr(position_mapper, ): mapper = position_mapper elif isin...
Defines a position parser that is used to map between sample IDs and positions. Parameters -------------- {_bases_position_mapper} TODO: Fix the name to work with more than 26 letters of the alphabet.
def delete_snapshot(self, path, snapshotname, **kwargs): response = self._delete(path, , snapshotname=snapshotname, **kwargs) assert not response.content
Delete a snapshot of a directory
def list_containers(active=True, defined=True, as_object=False, config_path=None): if config_path: if not os.path.exists(config_path): return tuple() try: entries = _lxc.list_containers(active=active, defined=defined, ...
List the containers on the system.
def transform_non_affine(self, x, mask_out_of_range=True): if mask_out_of_range: x_masked = np.ma.masked_where((x < self._xmin) | (x > self._xmax), x) else: x_masked = x return np.interp(x_masked, self._...
Transform a Nx1 numpy array. Parameters ---------- x : array Data to be transformed. mask_out_of_range : bool, optional Whether to mask input values out of range. Return ------ array or masked array Transformed data.
def search(self, template: str, first: bool = False) -> _Result:
    elements = [r for r in findall(template, self.xml)]
    return _get_first_or_list(elements, first)
Search the :class:`Element <Element>` for the given parse template. :param template: The Parse template to use.
def retrieve(self, thing, thing_type=None): thing_id = self._whatis(thing) if thing_type is None: thing_type = thing_id query_parameters = {} if thing_type == API_Constants.URL: query = API_Constants.CONST_API_URL + API_Constants.API...
Retrieve a report from VirusTotal based on a hash, IP, domain, file or URL or ScanID. NOTE: URLs must include the scheme (e.g. http://)\n :param thing: a file name on the local system, a URL or list of URLs, an IP or list of IPs, a domain or list of domains, a hash or list of ha...
def __allocate_clusters(self): self.__initialize(self.__sample_pointer) for optic_object in self.__optics_objects: if optic_object.processed is False: self.__expand_cluster_order(optic_object) self.__extract_clusters()
! @brief Performs cluster allocation and builds ordering diagram that is based on reachability-distances.
def _setup_images(directory, brightness, saturation, hue, preserve_transparency): for file_name in os.listdir(directory): with open(os.path.join(directory, file_name), "rb") as fi: image = Image.open(fi).convert("RGBA") if brightness != 1.0: ...
Apply modifiers to the images of a theme Modifies the images using the PIL.ImageEnhance module. Using this function, theme images are modified to given them a unique look and feel. Works best with PNG-based images.
def get_codon(seq, codon_no, start_offset):
    seq = seq.replace("-", "")
    codon_start_pos = int(codon_no - 1) * 3 - start_offset
    codon = seq[codon_start_pos:codon_start_pos + 3]
    return codon
This function takes a sequence and a codon number and returns the codon found in the sequence at that position
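A quick runnable demo of the codon lookup, with the function re-stated from above so the block is self-contained. Note that gaps (`-`) are stripped before codons are counted:

```python
def get_codon(seq, codon_no, start_offset):
    # Re-stated from above for a runnable demo: gaps are removed first,
    # then codon_no is treated as 1-based.
    seq = seq.replace("-", "")
    codon_start_pos = int(codon_no - 1) * 3 - start_offset
    return seq[codon_start_pos:codon_start_pos + 3]


second = get_codon("ATG-GCC-TGA", 2, 0)  # gaps removed -> "ATGGCCTGA", codon 2
```

Here `second` is `"GCC"`: after gap removal the sequence is `ATGGCCTGA`, and codon 2 starts at index 3.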
def actionAngleTorus_hessian_c(pot,jr,jphi,jz, tol=0.003,dJ=0.001): from galpy.orbit.integrateFullOrbit import _parse_pot npot, pot_type, pot_args= _parse_pot(pot,potfortorus=True) dOdJT= numpy.empty(9) Omegar= numpy.empty(1) Omegaphi= numpy.empty(1) ...
NAME: actionAngleTorus_hessian_c PURPOSE: compute dO/dJ on a single torus INPUT: pot - Potential object or list thereof jr - radial action (scalar) jphi - azimuthal action (scalar) jz - vertical action (scalar) tol= (0.003) goal for |dJ|/|J| along the torus ...
def contains(self, stimtype):
    for track in self._segments:
        for component in track:
            if component.__class__.__name__ == stimtype:
                return True
    return False
Returns whether the specified stimulus type is a component in this stimulus :param stimtype: :class:`AbstractStimulusComponent<sparkle.stim.abstract_component.AbstractStimulusComponent>` subclass class name to test for membership in the components of this stimulus :type stimtype: str :returns: b...
def iter_last_tour(tourfile, clm): row = open(tourfile).readlines()[-1] _tour, _tour_o = separate_tour_and_o(row) tour = [] tour_o = [] for tc, to in zip(_tour, _tour_o): if tc not in clm.contigs: logging.debug("Contig `{}` in file `{}` not found in `{}`" ...
Extract last tour from tourfile. The clm instance is also passed in to see if any contig is covered in the clm.
def stats(self): response = self.requester.get( , endpoint=self.endpoint, id=self.id ) return response.json()
Get the stats for the current :class:`Milestone`
def retrieve(self, order_id, id):
    _, _, line_item = self.http_client.get(
        "/orders/{order_id}/line_items/{id}".format(order_id=order_id, id=id))
    return line_item
Retrieve a single line item Returns a single line item of an order, according to the unique line item ID provided :calls: ``get /orders/{order_id}/line_items/{id}`` :param int order_id: Unique identifier of a Order. :param int id: Unique identifier of a LineItem. :return: Dicti...
def last_valid_index(self): def last_valid_index_builder(df): df.index = pandas.RangeIndex(len(df.index)) return df.apply(lambda df: df.last_valid_index()) func = self._build_mapreduce_func(last_valid_index_builder) first_result = sel...
Returns index of last non-NaN/NULL value. Return: Scalar of index name.
def set_light_state_raw(self, hue, saturation, brightness, kelvin, bulb=ALL_BULBS, timeout=None): with _blocking(self.lock, self.light_state, self.light_state_event, timeout): self.send(REQ_SET_LIGHT_STATE, bulb, , hue...
Sets the (low-level) light state of one or more bulbs.
def score(self, X, y=None, **kwargs):
    self.score_ = self.estimator.score(X, y, **kwargs)
    y_pred = self.predict(X)
    self.draw(y, y_pred)
    return self.score_
The score function is the hook for visual interaction. Pass in test data and the visualizer will create predictions on the data and evaluate them with respect to the test values. The evaluation will then be passed to draw() and the result of the estimator score will be returned. ...
def replace_random_tokens_bow(self, n_samples, replacement=, random_state=None, min_replace=1, max_replace=1.0, ...
Return a list of ``(text, replaced_words_count, mask)`` tuples with n_samples versions of text with some words replaced. If a word is replaced, all duplicate words are also replaced from the text. By default words are replaced with '', i.e. removed.
def write_data(self, data, response_required=None, timeout=5.0, raw=False): if self._transport is None: return if self._paused: return if self._waiting_for_response: LOG.debug("queueing write %s", data) self._queued_writes.append((data, ...
Write data on the asyncio Protocol
def pemp(stat, stat0): assert len(stat0) > 0 assert len(stat) > 0 stat = np.array(stat) stat0 = np.array(stat0) m = len(stat) m0 = len(stat0) statc = np.concatenate((stat, stat0)) v = np.array([True] * m + [False] * m0) perm = np.argsort(-statc, kind="mergesort") v = v...
Computes empirical p-values identically to bioconductor/qvalue empPvals
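The idea behind empirical p-values can be sketched without the truncated numpy machinery: for each observed statistic, take the fraction of null statistics at least as extreme. This is a hedged re-implementation under that reading; the exact tie and zero handling in bioconductor's `empPvals` may differ, and `pemp_sketch` is a hypothetical name.

```python
def pemp_sketch(stat, stat0):
    # For each observed statistic s, p = (# null stats >= s) / m0,
    # floored at 1/m0 so no p-value is exactly zero.
    m0 = len(stat0)
    assert m0 > 0
    pvals = []
    for s in stat:
        count = sum(1 for s0 in stat0 if s0 >= s)
        pvals.append(max(count, 1) / m0)
    return pvals
```

For example, with `stat0 = [1, 2, 3, 4]`, an observed statistic of 3 gets p = 2/4 = 0.5, while a statistic larger than every null value gets the floor of 1/4.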
def diff_result_to_cell(item): state = item[] if state == : new_cell = item[].data old_cell = item[].data new_cell[][] = state new_cell[][] = old_cell cell = new_cell else: cell = item[].data cell[][] = state return cell
diff.diff returns a dictionary with all the information we need, but we want to extract the cell and change its metadata.
def ligolw_add(xmldoc, urls, non_lsc_tables_ok = False, verbose = False, contenthandler = DefaultContentHandler): for n, url in enumerate(urls): if verbose: print >>sys.stderr, "%d/%d:" % (n + 1, len(urls)), utils.load_url(url, verbose = verbose, xmldoc = xmldoc, contenthandler = contenthandler) if not ...
An implementation of the LIGO LW add algorithm. urls is a list of URLs (or filenames) to load, xmldoc is the XML document tree to which they should be added.
def present(name, pipeline_objects=None, pipeline_objects_from_pillars=, parameter_objects=None, parameter_objects_from_pillars=, parameter_values=None, parameter_values_from_pillars=, region=None, key=None, keyid=None, profile=None): ...
Ensure the data pipeline exists with matching definition. name Name of the service to ensure a data pipeline exists for. pipeline_objects Pipeline objects to use. Will override objects read from pillars. pipeline_objects_from_pillars The pillar key to use for lookup. paramete...
def parse_swf (url_data): linkfinder = linkparse.swf_url_re.finditer for mo in linkfinder(url_data.get_content()): url = mo.group() url_data.add_url(url)
Parse a SWF file for URLs.
def fetch(self, bank, key): fun = .format(self.driver) return self.modules[fun](bank, key, **self._kwargs)
Fetch data using the specified module :param bank: The name of the location inside the cache which will hold the key and its associated data. :param key: The name of the key (or file inside a directory) which will hold the data. File extensions should no...
def process_raw_data(cls, raw_data): properties = raw_data.get("properties", {}) raw_content = properties.get("ipSecConfiguration", None) if raw_content is not None: ip_sec = IPSecConfiguration.from_raw_data(raw_content) properties["ipSecConfiguration"] = ip_sec...
Create a new model using raw API response.
def record_service_agreement(storage_path, service_agreement_id, did, service_definition_id, price, files, start_time, status=): conn = sqlite3.connect(storage_path) try: cursor = conn.cursor() cursor.execute( ) ...
Records the given pending service agreement. :param storage_path: storage path for the internal db, str :param service_agreement_id: :param did: DID, str :param service_definition_id: identifier of the service inside the asset DDO, str :param price: Asset price, int :param files: :param sta...
def get(self, *, kind: Type=None, tag: Hashable=None, **_) -> Iterator: if kind is None and tag is None: raise TypeError("get() takes at least one keyword-only argument. or .") kinds = self.all tags = self.all if kind is not None: kinds = self.kinds[kind...
Get an iterator of objects by kind or tag. kind: Any type. Pass to get a subset of contained items with the given type. tag: Any Hashable object. Pass to get a subset of contained items with the given tag. Pass both kind and tag to get objects that are both that type...
def subgroup(self, t, i): flags = self.get_flags(i, self.version == _regex.V0) if flags: self.flags(flags[2:-1]) return [flags] comments = self.get_comments(i) if comments: return [comments] verbose = self.verbose ...
Handle parenthesis.
def convert_to_btc(self, amount, currency): if isinstance(amount, Decimal): use_decimal = True else: use_decimal = self._force_decimal url = .format(currency) response = requests.get(url) if response.status_code == 200: data = respons...
Convert an amount of the given currency to Bitcoin
def _grabContentFromUrl(self, url): info = {} try: queryURL = "http://" + self.info["host"] + ":" + self.info["port"] + "/" + url response = urllib2.urlopen(queryURL) data = str(respons...
Function that abstracts capturing a URL. This method rewrites the one from Wrapper. :param url: The URL to be processed. :return: The response in a Json format.
def save_caption(self, filename: str, mtime: datetime, caption: str) -> None: def _elliptify(caption): pcaption = caption.replace(, ).strip() return + ((pcaption[:29] + u"\u2026") if len(pcaption) > 31 else pcaption) + filename += caption += pcaption ...
Updates picture caption / Post metadata info
def get_controller_value(self, index_or_name, value_type):
    if not isinstance(index_or_name, int):
        index = self.get_controller_index(index_or_name)
    else:
        index = index_or_name
    return self.dll.GetControllerValue(index, value_type)
Returns current/min/max value of controller at given index or name. It is much more efficient to query using an integer index rather than string name. Name is fine for seldom updates but it's not advised to be used every second or so. See `get_controller_list` for an example how to cache a dict...
def default_config(level=logging.INFO, auto_init=True, new_formatter=False, **kwargs): formatters = { : { : __name__ + , : } } if new_formatter: formatters = { : { : __name__ + , : } } ...
Returns the default config dictionary and inits the logging system if requested Keyword arguments: level -- loglevel of the console handler (Default: logging.INFO) auto_init -- initialize the logging system with the provided config (Default: True) **kwargs -- additional options for the logging syst...
def download_folder(project, destdir, folder="/", overwrite=False, chunksize=dxfile.DEFAULT_BUFFER_SIZE, show_progress=False, **kwargs): def ensure_local_dir(d): if not os.path.isdir(d): if os.path.exists(d): raise DXFileError("Destination location alre...
:param project: Project ID to use as context for this download. :type project: string :param destdir: Local destination location :type destdir: string :param folder: Path to the remote folder to download :type folder: string :param overwrite: Overwrite existing files :type overwrite: boolean...
def md5(self):
    md5 = self.meta.get("md5")
    if md5 is None:
        md5 = str(hashlib.md5(self.value).hexdigest())
    return md5
Return md5 from meta, or compute it if absent.
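The cache-or-compute pattern above can be demonstrated standalone. The `Blob` class and its write-back into `meta` are assumptions for the demo (the original only reads `meta`); only the `hashlib.md5(...).hexdigest()` computation is taken from the code.

```python
import hashlib


class Blob:
    # Hypothetical stand-in for the object: a plain dict plays the role
    # of the `meta` mapping.
    def __init__(self, value, meta=None):
        self.value = value
        self.meta = meta or {}

    def md5(self):
        digest = self.meta.get("md5")
        if digest is None:
            digest = hashlib.md5(self.value).hexdigest()
            self.meta["md5"] = digest  # write-back: an addition over the original
        return digest
```

Subsequent calls hit the cached entry instead of re-hashing the value.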
def read_header(filename, ext=0, extver=None, case_sensitive=False, **keys): dont_create = 0 try: hdunum = ext+1 except TypeError: hdunum = None _fits = _fitsio_wrap.FITS(filename, READONLY, dont_create) if hdunum is None: extname = mks(ext) if extver is None:...
Convenience function to read the header from the specified FITS HDU The FITSHDR allows access to the values and comments by name and number. parameters ---------- filename: string A filename. ext: number or string, optional The extension. Either the numerical extension from ze...
def partition_dict(items, key):
    def unmatched(pair):
        test_key, item = pair
        return test_key != key

    items_iter = iter(items.items())
    item = items.get(key)
    left = collections.OrderedDict(itertools.takewhile(unmatched, items_iter))
    right = collections.OrderedDict(items_iter)
    return left, item, right
Given an ordered dictionary of items and a key in that dict, return an ordered dict of items before, the keyed item, and an ordered dict of items after. >>> od = collections.OrderedDict(zip(range(5), 'abcde')) >>> before, item, after = partition_dict(od, 3) >>> before OrderedDict([(0, 'a'), (1, 'b'), (2, 'c')]) ...
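The doctest above can be run end to end; the function is re-stated here so the block is self-contained. The trick is that `itertools.takewhile` consumes (and discards) the first failing pair, i.e. the keyed item itself, so the remaining iterator holds exactly the items after the key.

```python
import collections
import itertools


def partition_dict(items, key):
    # Re-stated from above for a runnable demo.
    def unmatched(pair):
        test_key, _ = pair
        return test_key != key

    items_iter = iter(items.items())
    item = items.get(key)
    left = collections.OrderedDict(itertools.takewhile(unmatched, items_iter))
    right = collections.OrderedDict(items_iter)
    return left, item, right


od = collections.OrderedDict(zip(range(5), 'abcde'))
before, item, after = partition_dict(od, 3)
```

After this, `before` holds keys 0-2, `item` is `'d'`, and `after` holds key 4.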
def extend_hierarchy(levels, strength, aggregate, smooth, improve_candidates, diagonal_dominance=False, keep=True): def unpack_arg(v): if isinstance(v, tuple): return v[0], v[1] else: return v, {} A = levels[-1].A B = levels[-1].B if A.s...
Extend the multigrid hierarchy. Service routine to implement the strength of connection, aggregation, tentative prolongation construction, and prolongation smoothing. Called by smoothed_aggregation_solver.
def gof_plot( simdata, trueval, name=None, bins=None, format=, suffix=, path=, fontmap=None, verbose=0): if fontmap is None: fontmap = {1: 10, 2: 8, 3: 6, 4: 5, 5: 4} if not isinstance(simdata, ndarray):
Plots histogram of replicated data, indicating the location of the observed data :Arguments: simdata: array or PyMC object Trace of simulated data or the PyMC stochastic object containing trace. trueval: numeric True (observed) value of the data bins: int or string...
async def timeRangeAsync( start: datetime.time, end: datetime.time, step: float) -> AsyncIterator[datetime.datetime]: assert step > 0 start = _fillDate(start) end = _fillDate(end) delta = datetime.timedelta(seconds=step) t = start while t < datetime.datetime.now(): t...
Async version of :meth:`timeRange`.
def nonuniq(iterable):
    temp_dict = {}
    for e in iterable:
        if e in temp_dict:
            yield e
        temp_dict.setdefault(e, e)
Yield the non-unique items of an iterable, preserving order. If an item occurs N > 0 times in the input sequence, it will occur N-1 times in the output sequence. Example: >>> x = nonuniq([0, 0, 2, 6, 2, 0, 5]) >>> list(x) [0, 2, 0]
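The docstring's example runs as-is. This self-contained restatement swaps the original's dict for a set, which is behaviorally identical for hashable items:

```python
def nonuniq(iterable):
    # Yield each element on every occurrence after its first.
    seen = set()
    for e in iterable:
        if e in seen:
            yield e
        seen.add(e)


result = list(nonuniq([0, 0, 2, 6, 2, 0, 5]))  # matches the docstring example
```

Here `result` is `[0, 2, 0]`: each of 0, 2, 6, 5 is skipped once (its first occurrence) and yielded every time after.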
def stop_host(self, config_file): res = self.send_json_request(, data={: config_file}) if res.status_code != 200: raise UnexpectedResponse( .format( res_code=res.status_code, res_text=res.text, ) ) ...
Stops a managed host specified by `config_file`.
def create_station(name, latlonalt, parent_frame=WGS84, orientation=, mask=None): if isinstance(orientation, str): orient = {: np.pi, : 0., : np.pi / 2., : 3 * np.pi / 2.} heading = orient[orientation] else: heading = orientation latlonalt = list(latlonalt) latlonalt[:2] =...
Create a ground station instance Args: name (str): Name of the station latlonalt (tuple of float): coordinates of the station, as follow: * Latitude in degrees * Longitude in degrees * Altitude to sea level in meters parent_frame (Frame): Planetocentri...
def dispatch(self, receiver): super(SessionCallbackRemoved, self).dispatch(receiver) if hasattr(receiver, ): receiver._session_callback_removed(self)
Dispatch handling of this event to a receiver. This method will invoke ``receiver._session_callback_removed`` if it exists.
def get_volumes(self):
    volumes = []
    for v in self.volumes:
        volumes.extend(v.get_volumes())
    return volumes
Gets a list of all volumes in this disk, including volumes that are contained in other volumes.
def IsFile(self):
    if self._stat_object is None:
        self._stat_object = self._GetStat()
    if self._stat_object is not None:
        self.entry_type = self._stat_object.type
    return self.entry_type == definitions.FILE_ENTRY_TYPE_FILE
Determines if the file entry is a file. Returns: bool: True if the file entry is a file.
def run(self): if not os.path.exists(self.output): try: os.mkdir(self.output) except: print % self.output if not os.path.isdir(self.output): print % self.output sys.exit(1) vis...
Reads data from disk and generates CSV files.
def _StripCommonPathPrefix(paths):
    common_prefix = os.path.commonprefix(paths)
    common_prefix_len = len(common_prefix)
    return [path[common_prefix_len:] for path in paths]
Removes path common prefix from a list of path strings.
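A self-contained version of the prefix stripping, with the missing length binding filled in. One caveat worth a comment: `os.path.commonprefix` works character by character, not on path components.

```python
import os


def strip_common_path_prefix(paths):
    # os.path.commonprefix compares characters, not path components,
    # so e.g. "/a/bc" and "/a/bd" share the prefix "/a/b".
    common_prefix = os.path.commonprefix(paths)
    n = len(common_prefix)
    return [path[n:] for path in paths]


stripped = strip_common_path_prefix(["/a/b/c.py", "/a/b/d.py"])
```

Here `stripped` is `["c.py", "d.py"]`, since the shared character prefix is `/a/b/`.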
def brightness(self):
    if self.mode == "ww":
        return int(self.raw_state[9])
    else:
        _, _, v = colorsys.rgb_to_hsv(*self.getRgb())
        return v
Return current brightness 0-255. For warm white return current led level. For RGB calculate the HSV and return the 'value'.
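The RGB branch above leans on a handy property of `colorsys`: it is scale-agnostic, so feeding it 0-255 channel values yields the HSV 'value' on the same 0-255 scale. A small standalone demo (`brightness_of_rgb` is a hypothetical helper name):

```python
import colorsys


def brightness_of_rgb(r, g, b):
    # HSV 'value' is the maximum channel; colorsys preserves input scale.
    _, _, v = colorsys.rgb_to_hsv(r, g, b)
    return v
```

For instance, a half-bright orange `(128, 64, 0)` has brightness 128, i.e. its largest channel.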
def rmultivariate_hypergeometric(n, m, size=None): N = len(m) urn = np.repeat(np.arange(N), m) if size: draw = np.array([[urn[i] for i in np.random.permutation(len(urn))[:n]] for j in range(size)]) r = [[np.sum(draw[j] == i) for i in range(len(m))] ...
Random multivariate hypergeometric variates. Parameters: - `n` : Number of draws. - `m` : Number of items in each category.
def convert_tags_to_dict(item): if hasattr(item, ): tags = item.tags if isinstance(tags, list): tags_dict = {} for kv_dict in tags: if isinstance(kv_dict, dict) and in kv_dict and in kv_dict: tags_dict[kv_dict[]] = kv_dict[] ...
Convert AWS inconvenient tags model of a list of {"Key": <key>, "Value": <value>} pairs to a dict of {<key>: <value>} for easier querying. This returns a proxied object over given item to return a different tags format as the tags attribute is read-only and we cannot modify it directly.
def add_data_point_xy(self, x, y):
    self.x.append(x)
    self.y.append(y)
Add a new data point to the data set to be smoothed.
def _try_import(module_name): try: mod = importlib.import_module(module_name) return mod except ImportError: err_msg = ("Tried importing %s but failed. See setup.py extras_require. " "The dataset you are trying to use may have additional " "dependencies.") utils.rera...
Try importing a module, with an informative error message on failure.
def _load_data(self, band): d = bandpass_data_fits( + self._band_map[band] + )[1].data wmid = 0.5 * (d.WAVE_MIN + d.WAVE_MAX) df = pd.DataFrame({: wmid, : d.SPECRESP, : d.WAVE_MAX, : d.WAVE_MIN}) return df
In-flight effective areas for the Swift UVOT, as obtained from the CALDB. See Breeveld+ 2011. XXX: confirm that these are equal-energy, not quantum-efficiency.
def vdm_b(vdm, lat):
    rad = old_div(np.pi, 180.)
    fact = ((6.371e6) ** 3) * 1e7
    colat = (90. - lat) * rad
    return vdm * (np.sqrt(1 + 3 * (np.cos(colat) ** 2))) / fact
Converts a virtual dipole moment (VDM) or a virtual axial dipole moment (VADM; input in units of Am^2) to a local magnetic field value (output in units of tesla) Parameters ---------- vdm : V(A)DM in units of Am^2 lat: latitude of site in degrees Returns ------- B: local magnetic f...
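The formula above is the standard dipole relation B = VDM * sqrt(1 + 3 cos^2(colat)) / (R^3 * 1e7 factor). A stdlib-only restatement (using `math` instead of numpy/`old_div`) makes a well-known property easy to check: for a fixed VDM, the field at the pole is exactly twice the equatorial field.

```python
import math


def vdm_b(vdm, lat):
    # math-based restatement of the numpy formula above
    rad = math.pi / 180.0
    fact = (6.371e6 ** 3) * 1e7       # Earth radius cubed times unit factor
    colat = (90.0 - lat) * rad        # colatitude in radians
    return vdm * math.sqrt(1 + 3 * math.cos(colat) ** 2) / fact
```

At the pole (lat = 90) the colatitude is 0 and the sqrt term is 2; at the equator it is 1, giving the factor-of-two relation.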
def _rspiral(width, height): x0 = 0 y0 = 0 x1 = width - 1 y1 = height - 1 while x0 < x1 and y0 < y1: for x in range(x0, x1): yield x, y0 for y in range(y0, y1): yield x1, y for x in range(x1, x0, -1): ...
Reversed spiral generator. Parameters ---------- width : `int` Spiral width. height : `int` Spiral height. Returns ------- `generator` of (`int`, `int`) Points.
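The `_rspiral` body is truncated after the bottom-edge loop. Below is one plausible completion: walk the ring's four edges, shrink the bounds, and handle the degenerate final row or column. Whether this orientation matches the original's notion of "reversed" is an assumption.

```python
def rspiral(width, height):
    # Hypothetical completion of the truncated generator above.
    x0, y0 = 0, 0
    x1, y1 = width - 1, height - 1
    while x0 < x1 and y0 < y1:
        for x in range(x0, x1):
            yield x, y0              # top edge, left to right
        for y in range(y0, y1):
            yield x1, y              # right edge, top to bottom
        for x in range(x1, x0, -1):
            yield x, y1              # bottom edge, right to left
        for y in range(y1, y0, -1):
            yield x0, y              # left edge, bottom to top
        x0 += 1
        y0 += 1
        x1 -= 1
        y1 -= 1
    if x0 == x1:                     # single remaining column
        for y in range(y0, y1 + 1):
            yield x0, y
    elif y0 == y1:                   # single remaining row
        for x in range(x0, x1 + 1):
            yield x, y0
```

On a 3x3 grid this visits all nine cells exactly once, ending at the center `(1, 1)`.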
def outer_product(vec0: QubitVector, vec1: QubitVector) -> QubitVector: R = vec0.rank R1 = vec1.rank N0 = vec0.qubit_nb N1 = vec1.qubit_nb if R != R1: raise ValueError() if not set(vec0.qubits).isdisjoint(vec1.qubits): raise ValueError() qubits: Qubits = tuple(vec0.q...
Direct product of qubit vectors. The tensor ranks must match and the qubits must be disjoint.
def _get_kws_plt(self, usrgos, **kws_usr): kws_plt = kws_usr.copy() kws_dag = {} hdrgo = kws_plt.get(, None) objcolor = GrouperColors(self.grprobj) if not in kws_usr: kws_plt[] = objcolor.get_go2color_users() elif hdrgo is not None: ...
Add go2color and go2bordercolor relevant to this grouping into plot.
def safe_input(prompt): if sys.version_info < (3,0): if isinstance(prompt, compat.text_type): encoding = locale.getpreferredencoding() or prompt = prompt.encode(encoding) else: if not isinstance(prompt, compat.text_type): promp...
Prompts user for input. Correctly handles prompt message encoding.