Dataset Viewer

code (string, lengths 4–2.07k) | signature (string, lengths 8–2.61k) | docstring (string, lengths 1–5.02k) | loss_without_docstring (float64, 1.11–215k) | loss_with_docstring (float64, 1.22–904k) | factor (float64, 0.02–0.91)
---|---|---|---|---|---
    return True
else:
    try:
        if strict:
            self.textcontent(cls, correctionhandling) #will raise NoSuchText when not found
            return True
        else:
            #Check children
            for e in self:
                if e.PRINTABLE and not isinstance(e, TextContent):
                    if e.hastext(cls, strict, correctionhandling):
                        return True
            self.textcontent(cls, correctionhandling) #will raise NoSuchText when not found
            return True
    except NoSuchText:
        return False | def hastext(self, cls='current', strict=True, correctionhandling=CorrectionHandling.CURRENT): #pylint: disable=too-many-return-statements
    if not self.PRINTABLE: #only printable elements can hold text
        return False
    elif self.TEXTCONTAINER | Does this element have text (of the specified class)?
By default, and unlike :meth:`text`, this checks strictly, i.e. the element itself must have the text and it is not inherited from its children.
Parameters:
cls (str): The class of the text content to obtain, defaults to ``current``.
strict (bool): Set this if you are strictly interested in the text explicitly associated with the element, without recursing into children. Defaults to ``True``.
correctionhandling: Specifies what text to check for when corrections are encountered. The default is ``CorrectionHandling.CURRENT``, which will retrieve the corrected/current text. You can set this to ``CorrectionHandling.ORIGINAL`` if you want the text prior to correction, and ``CorrectionHandling.EITHER`` if you don't care.
Returns:
bool | 4.143237 | 4.585317 | 0.903588 |
self.replace(TextContent, value=text, cls=cls) | def settext(self, text, cls='current') | Set the text for this element.
Arguments:
text (str): The text
cls (str): The class of the text, defaults to ``current`` (leave this unless you know what you are doing). There may be only one text content element of each class associated with the element. | 17.233418 | 20.449476 | 0.842731 |
e = self
while e:
    if e.parent:
        e = e.parent
        if not Class or isinstance(e, Class):
            yield e
        elif isinstance(Class, tuple):
            for C in Class:
                if isinstance(e, C):
                    yield e
    else:
        break | def ancestors(self, Class=None) | Generator yielding all ancestors of this element, effectively back-tracing its path to the root element. A tuple of multiple classes may be specified.
Arguments:
Class: The class or classes (:class:`AbstractElement` or subclasses), or a tuple of multiple classes. Not instances!
Yields:
elements (instances derived from :class:`AbstractElement`) | 2.996542 | 3.448565 | 0.868924 |
for e in self.ancestors(tuple(Classes)):
    return e
raise NoSuchAnnotation | def ancestor(self, *Classes) | Find the most immediate ancestor of the specified type, multiple classes may be specified.
Arguments:
*Classes: The possible classes (:class:`AbstractElement` or subclasses) to select from. Not instances!
Example::
paragraph = word.ancestor(folia.Paragraph) | 15.977091 | 64.94973 | 0.245992 |
return sum(1 for i in self.select(Class, set, recursive, ignore, node)) | def count(self, Class, set=None, recursive=True, ignore=True, node=None) | Like :meth:`AbstractElement.select`, but instead of returning the elements, it merely counts them.
Returns:
int | 5.367037 | 6.376721 | 0.841661 |
return self.next(Class, scope, True) | def previous(self, Class=True, scope=True) | Returns the previous element, if it is of the specified type and if it does not cross the boundary of the defined scope. Returns None if no previous element is found. Non-authoritative elements are never returned.
Arguments:
* ``Class``: The class to select; any python class subclassed off `'AbstractElement``. Set to ``True`` to constrain to the same class as that of the current instance, set to ``None`` to not constrain at all
* ``scope``: A list of classes which are never crossed looking for a previous element. Set to ``True`` to constrain to a default list of structure elements (Sentence, Paragraph, Division, Event, ListItem, Caption), set to ``None`` to not constrain at all.
found = False
for e in self.select(Class, set, True, default_ignore_annotations):
    found = True
    yield e
if not found:
    raise NoSuchAnnotation() | def annotations(self, Class, set=None) | Obtain child elements (annotations) of the specified class.
A further restriction can be made based on set.
Arguments:
Class (class): The class to select; any python class (not instance) subclassed off :class:`AbstractElement`
Set (str): The set to match against, only elements pertaining to this set will be returned. If set to None (default), all elements regardless of set will be returned.
Yields:
Elements (instances derived from :class:`AbstractElement`)
Example::
for sense in text.annotations(folia.Sense, 'http://some/path/cornetto'):
..
See also:
:meth:`AbstractElement.select`
:meth:`AllowTokenAnnotation.annotations`
Raises:
:class:`NoSuchAnnotation` if no such annotation exists
for e in self.select(type, set, True, default_ignore_annotations):
    return e
raise NoSuchAnnotation() | def annotation(self, type, set=None) | Obtain a single annotation element.
A further restriction can be made based on set.
Arguments:
type (class): The class to select; any python class (not instance) subclassed off :class:`AbstractElement`
Set (str): The set to match against, only elements pertaining to this set will be returned. If set to None (default), all elements regardless of set will be returned.
Returns:
An element (instance derived from :class:`AbstractElement`)
Example::
sense = word.annotation(folia.Sense, 'http://some/path/cornetto').cls
See also:
:meth:`AllowTokenAnnotation.annotations`
:meth:`AbstractElement.select`
Raises:
:class:`NoSuchAnnotation` if no such annotation exists | 22.930361 | 25.307119 | 0.906083 |
if index is None:
    return self.select(Paragraph, None, True, default_ignore_structure)
else:
    if index < 0:
        index = self.count(Paragraph, None, True, default_ignore_structure) + index
    for i, e in enumerate(self.select(Paragraph, None, True, default_ignore_structure)):
        if i == index:
            return e
    raise IndexError | def paragraphs(self, index=None) | Returns a generator of Paragraph elements found (recursively) under this element.
Arguments:
index (int or None): If set to an integer, will retrieve and return the n'th element (starting at 0) instead of returning the generator of all | 3.556048 | 3.969901 | 0.895752 |
targets = []
self._helper_wrefs(targets, recurse)
if index is None:
    return targets
else:
    return targets[index] | def wrefs(self, index=None, recurse=True) | Returns a list of word references, these can be Words but also Morphemes or Phonemes.
Arguments:
index (int or None): If set to an integer, will retrieve and return the n'th element (starting at 0) instead of returning the list of all | 5.743971 | 11.124097 | 0.516354 |
for span in self.select(AbstractSpanAnnotation, None, True):
    if tuple(span.wrefs()) == words:
        return span
raise NoSuchAnnotation | def findspan(self, *words) | Returns the span element which spans over the specified words or morphemes.
See also:
:meth:`Word.findspans` | 18.486383 | 24.539494 | 0.753332 |
for e in self.select(Current, None, False, False):
    if not allowempty and len(e) == 0: continue
    return True
return False | def hascurrent(self, allowempty=False) | Does the correction record the current authoritative annotation (needed only in a structural context when suggestions are proposed) | 6.840613 | 7.672917 | 0.891527 |
if index is None:
    return self.select(Suggestion, None, False, False)
else:
    for i, e in enumerate(self.select(Suggestion, None, False, False)):
        if index == i:
            return e
    raise IndexError | def suggestions(self, index=None) | Get suggestions for correction.
Yields:
:class:`Suggestion` elements that encapsulate the suggested annotations (if index is ``None``, default)
Returns:
a :class:`Suggestion` element that encapsulates the suggested annotations (if index is set)
Raises:
:class:`IndexError` | 4.012935 | 5.396261 | 0.743651 |
if inspect.isclass(annotationtype): annotationtype = annotationtype.ANNOTATIONTYPE
return ( (annotationtype,set) in self.annotations) or (set in self.alias_set and self.alias_set[set] and (annotationtype, self.alias_set[set]) in self.annotations ) | def declared(self, annotationtype, set) | Checks if the annotation type is present (i.e. declared) in the document.
Arguments:
annotationtype: The type of annotation, this is conveyed by passing the corresponding annotation class (such as :class:`PosAnnotation` for example), or a member of :class:`AnnotationType`, such as ``AnnotationType.POS``.
set (str): the set, should formally be a URL pointing to the set definition (aliases are also supported)
Example::
if doc.declared(folia.PosAnnotation, 'http://some/path/brown-tag-set'):
..
Returns:
bool | 4.291375 | 4.896905 | 0.876344 |
begin = 0
for i, token in enumerate(tokens):
    if is_end_of_sentence(tokens, i):
        yield tokens[begin:i+1]
        begin = i+1
if begin <= len(tokens)-1:
    yield tokens[begin:] | def split_sentences(tokens) | Split sentences (based on tokenised data), yielding each sentence as a list of tokens | 2.250202 | 2.464463 | 0.91306 |
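A quick driver for the generator above; since `is_end_of_sentence` is not shown in this row, the sketch stubs it with a simple punctuation check (an assumption, not necessarily the library's real predicate):

    def is_end_of_sentence(tokens, i):
        # stub: treat sentence-final punctuation as a boundary
        return tokens[i] in ('.', '!', '?')

    tokens = ['Hello', 'world', '.', 'Bye', '.']
    for sentence in split_sentences(tokens):
        print(sentence)
    # ['Hello', 'world', '.']
    # ['Bye', '.']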
prompt_kwargs = prompt_kwargs or {}
defaults = {
    "history": InMemoryHistory(),
    "completer": ClickCompleter(group),
    "message": u"> ",
}
for key in defaults:
    default_value = defaults[key]
    if key not in prompt_kwargs:
        prompt_kwargs[key] = default_value
return prompt_kwargs | def bootstrap_prompt(prompt_kwargs, group) | Bootstrap prompt_toolkit kwargs or use user defined values.
:param prompt_kwargs: The user specified prompt kwargs. | 3.263362 | 3.729604 | 0.874989 |
logger.debug("started")
CmdStep(name=__name__, context=context).run_step(is_shell=True)
logger.debug("done") | def run_step(context) | Run shell command without shell interpolation.
Context is a dictionary or dictionary-like.
Context must contain the following keys:
cmd: <<cmd string>> (command + args to execute.)
OR, as a dict
cmd:
run: str. mandatory. <<cmd string>> command + args to execute.
save: bool. defaults False. save output to cmdOut.
Will execute command string in the shell as a sub-process.
The shell defaults to /bin/sh.
The context['cmd'] string must be formatted exactly as it would be when
typed at the shell prompt. This includes, for example, quoting or backslash
escaping filenames with spaces in them.
There is an exception to this: Escape curly braces: if you want a literal
curly brace, double it like {{ or }}.
If save is True, will save the output to context as follows:
cmdOut:
returncode: 0
stdout: 'stdout str here. None if empty.'
stderr: 'stderr str here. None if empty.'
cmdOut.returncode is the exit status of the called process. Typically 0
means OK. A negative value -N indicates that the child was terminated by
signal N (POSIX only).
context['cmd'] will interpolate anything in curly braces for values
found in context. So if your context looks like this:
key1: value1
key2: value2
cmd: mything --arg1 {key1}
The cmd passed to the shell will be "mything --arg1 value1" | 10.794502 | 17.113304 | 0.630767 |
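A minimal sketch of the contract described above, assuming pypyr's `Context` class and a plain `echo` command (the `greeting` key is illustrative):

    from pypyr.context import Context
    import pypyr.steps.shell

    context = Context({'greeting': 'hello',
                       'cmd': {'run': 'echo {greeting}', 'save': True}})
    pypyr.steps.shell.run_step(context)     # interpolates {greeting} from context
    print(context['cmdOut']['returncode'])  # 0
    print(context['cmdOut']['stdout'])      # hello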
logger.debug("started")
pypyr.steps.cmd.run_step(context)
logger.debug("done") | def run_step(context) | Run command, program or executable.
Context is a dictionary or dictionary-like.
Context must contain the following keys:
cmd: <<cmd string>> (command + args to execute.)
OR, as a dict
cmd:
run: str. mandatory. <<cmd string>> command + args to execute.
save: bool. defaults False. save output to cmdOut.
Will execute the command string in the shell as a sub-process.
Escape curly braces: if you want a literal curly brace, double it like
{{ or }}.
If save is True, will save the output to context as follows:
cmdOut:
returncode: 0
stdout: 'stdout str here. None if empty.'
stderr: 'stderr str here. None if empty.'
cmdOut.returncode is the exit status of the called process. Typically 0
means OK. A negative value -N indicates that the child was terminated by
signal N (POSIX only).
context['cmd'] will interpolate anything in curly braces for values
found in context. So if your context looks like this:
key1: value1
key2: value2
cmd: mything --arg1 {key1}
The cmd passed to the shell will be "mything --arg1 value1" | 9.634625 | 10.779205 | 0.893816 |
os.makedirs(os.path.abspath(os.path.dirname(path)), exist_ok=True) | def ensure_dir(path) | Create all parent directories of path if they don't exist.
Args:
path. Path-like object. Create parent dirs to this path.
Return:
None. | 2.768811 | 4.568908 | 0.606012 |
return (
path1 and path2
and os.path.isfile(path1) and os.path.isfile(path2)
and os.path.samefile(path1, path2)) | def is_same_file(path1, path2) | Return True if path1 is the same file as path2.
The reason for this dance is that samefile throws if either file doesn't
exist.
Args:
path1: str or path-like.
path2: str or path-like.
Returns:
bool. True if the same file, False if not. | 2.458771 | 3.191062 | 0.770518 |
try:
    os.replace(src, dest)
except Exception as ex_replace:
    logger.error(f"error moving file {src} to "
                 f"{dest}. {ex_replace}")
    raise | def move_file(src, dest) | Move source file to destination.
Overwrites dest.
Args:
src: str or path-like. source file
dest: str or path-like. destination file
Returns:
None.
Raises:
FileNotFoundError: out path parent doesn't exist.
OSError: if any IO operations go wrong. | 4.096595 | 4.759636 | 0.860695 |
json.dump(payload, file, indent=2, ensure_ascii=False) | def dump(self, file, payload) | Dump json object to open file output.
Writes json with 2 spaces indentation.
Args:
file: Open file-like object. Must be open for writing.
payload: The Json object to write to file.
Returns:
None. | 3.884516 | 6.811688 | 0.570272 |
error_type = type(error)
if error_type.__module__ in ['__main__', 'builtins']:
    return error_type.__name__
else:
    return f'{error_type.__module__}.{error_type.__name__}' | def get_error_name(error) | Return canonical error name as string.
For builtin errors like ValueError or Exception, will return the bare
name, like ValueError or Exception.
For all other exceptions, will return modulename.errorname, such as
arbpackage.mod.myerror
Args:
error: Exception object.
Returns:
str. Canonical error name. | 2.28425 | 3.079015 | 0.741877 |
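A quick check of both branches (builtin errors get a bare name, everything else is module-qualified):

    try:
        int('not a number')
    except Exception as err:
        print(get_error_name(err))   # ValueError

    import json
    try:
        json.loads('{bad')
    except Exception as err:
        print(get_error_name(err))   # json.decoder.JSONDecodeError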
assert keys, ("*keys parameter must be specified.")
for key in keys:
    self.assert_key_exists(key, caller) | def assert_keys_exist(self, caller, *keys) | Assert that context contains keys.
Args:
keys: validates that these keys exists in context
caller: string. calling function or module name - this used to
construct error messages
Raises:
KeyNotInContextError: When key doesn't exist in context. | 6.129691 | 9.427818 | 0.650171 |
for key in keys:
    self.assert_key_has_value(key, caller) | def assert_keys_have_values(self, caller, *keys) | Check that keys list are all in context and all have values.
Args:
*keys: Will check each of these keys in context
caller: string. Calling function name - just used for informational
messages
Raises:
KeyNotInContextError: Key doesn't exist
KeyInContextHasNoValueError: context[key] is None
AssertionError: if *keys is None | 3.606723 | 5.013564 | 0.719393 |
assert context_items, ("context_items parameter must be specified.")
for context_item in context_items:
    self.assert_key_type_value(context_item, caller, extra_error_text) | def assert_keys_type_value(self, caller, extra_error_text, *context_items) | Assert that keys exist, are of the right type and have a value.
Args:
caller: string. calling function name - this used to construct
error messages
extra_error_text: append to end of error message. This can happily
be None or ''.
*context_items: ContextItemInfo tuples
Raises:
AssertionError: if context_items None.
KeyNotInContextError: Key doesn't exist
KeyInContextHasNoValueError: context[key] is None or the wrong
type. | 3.581882 | 4.631426 | 0.773386 |
def function_iter_replace_strings(iterable_strings):
    for string in iterable_strings:
        yield reduce((lambda s, kv: s.replace(*kv)),
                     replacements.items(),
                     string)
return function_iter_replace_strings | def iter_replace_strings(replacements) | Create a function that uses replacement pairs to process a string.
The returned function takes an iterator and yields on each processed
line.
Args:
replacements: Dict containing 'find_string': 'replace_string' pairs
Returns:
function with signature: iterator of strings = function(iterable) | 4.760939 | 5.336993 | 0.892064 |
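A usage sketch of the returned closure over an iterable of lines (the replacement pairs are illustrative):

    replace = iter_replace_strings({'cat': 'dog', 'red': 'blue'})
    for line in replace(['red cat', 'big red hat']):
        print(line)
    # blue dog
    # big blue hat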
in_type = type(obj)
if out_type is in_type:
    # no need to cast.
    return obj
else:
    return out_type(obj) | def cast_to_type(obj, out_type) | Cast obj to out_type if it's not out_type already.
If the obj happens to be out_type already, it just returns obj as is.
Args:
obj: input object
out_type: type.
Returns:
obj cast to out_type. Usual python conversion / casting rules apply. | 3.559295 | 4.08812 | 0.870644 |
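A behaviour sketch: the value comes back untouched when it already has the requested type, otherwise normal Python casting applies:

    print(cast_to_type('42', int))     # 42 (cast from str)
    x = [1, 2]
    print(cast_to_type(x, list) is x)  # True (same object, no cast)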
yaml_writer = yamler.YAML(typ='rt', pure=True)
# if this isn't here the yaml doesn't format nicely indented for humans
yaml_writer.indent(mapping=2, sequence=4, offset=2)
return yaml_writer | def get_yaml_parser_roundtrip() | Create the yaml parser object with this factory method.
The round-trip parser preserves:
- comments
- block style and key ordering are kept, so you can diff the round-tripped
source
- flow style sequences ( ‘a: b, c, d’) (based on request and test by
Anthony Sottile)
- anchor names that are hand-crafted (i.e. not of the form``idNNN``)
- merges in dictionaries are preserved
Returns:
ruamel.yaml.YAML object with round-trip loader | 7.720961 | 8.909068 | 0.866641 |
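A short round-trip demonstration, assuming `yamler` in the row above is the imported `ruamel.yaml` package:

    import sys
    yaml = get_yaml_parser_roundtrip()
    doc = yaml.load('key: value  # comment survives round-trip')
    yaml.dump(doc, sys.stdout)  # emits the mapping with the comment intact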
if uid is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'USER_PLAY_LIST'
r.data = {'offset': offset, 'uid': uid, 'limit': limit, 'csrf_token': ''}
r.send()
return r.response | def user_play_list(uid, offset=0, limit=1000) | Get a user's playlists, including favourited playlists.
:param uid: The user ID, obtainable via login or other endpoints
:param offset: (optional) Start position of the page, defaults to 0
:param limit: (optional) Maximum number of rows to return, defaults to 1000 | 5.784021 | 6.747431 | 0.857218 |
if uid is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'USER_DJ'
r.data = {'offset': offset, 'limit': limit, "csrf_token": ""}
r.params = {'uid': uid}
r.send()
return r.response | def user_dj(uid, offset=0, limit=30) | Get a user's radio (DJ) programs.
:param uid: The user ID, obtainable via login or other endpoints
:param offset: (optional) Start position of the page, defaults to 0
:param limit: (optional) Maximum number of rows to return, defaults to 30 | 6.452674 | 7.678125 | 0.840397 |
if keyword is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'SEARCH'
r.data = {
    's': keyword,
    'limit': str(limit),
    'type': str(type),
    'offset': str(offset)
}
r.send()
return r.response | def search(keyword, type=1, offset=0, limit=30) | Search songs; also supports searching for artists, albums, etc.
:param keyword: The search keyword
:param type: (optional) Search type; 1: songs, 100: artists, 1000: playlists, 1002: users
:param offset: (optional) Start position of the page, defaults to 0
:param limit: (optional) Maximum number of rows to return, defaults to 30 | 4.862638 | 5.656178 | 0.859704 |
if uid is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'USER_FOLLOWS'
r.params = {'uid': uid}
r.data = {'offset': offset, 'limit': limit, 'order': True}
r.send()
return r.response | def user_follows(uid, offset='0', limit=30) | Get the list of users that a user follows.
:param uid: The user ID, obtainable via login or other endpoints
:param offset: (optional) Start position of the page, defaults to 0
:param limit: (optional) Maximum number of rows to return, defaults to 30 | 5.808166 | 6.995786 | 0.830238 |
if uid is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'USER_EVENT'
r.params = {'uid': uid}
r.data = {'time': -1, 'getcounts': True, "csrf_token": ""}
r.send()
return r.response | def user_event(uid) | Get a user's activity feed.
:param uid: The user ID, obtainable via login or other endpoints | 10.012704 | 11.090661 | 0.902805 |
if uid is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'USER_RECORD'
r.data = {'type': type, 'uid': uid, "csrf_token": ""}
r.send()
return r.response | def user_record(uid, type=0) | Get a user's listening history; login is required.
:param uid: The user ID, obtainable via login or other endpoints
:param type: (optional) Data type; 0: all records, 1: weekData | 8.505954 | 9.941768 | 0.855578 |
r = NCloudBot()
r.method = 'TOP_PLAYLIST_HIGHQUALITY'
r.data = {'cat': cat, 'offset': offset, 'limit': limit}
r.send()
return r.response | def top_playlist_highquality(cat='全部', offset=0, limit=20) | Get NetEase Cloud Music's curated (high-quality) playlists.
:param cat: (optional) Playlist category, defaults to '全部' (all); e.g. Mandarin, Western, etc.
:param offset: (optional) Start position of the page, defaults to 0
:param limit: (optional) Maximum number of rows to return, defaults to 20 | 5.045775 | 6.227322 | 0.810264 |
if id is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'PLAY_LIST_DETAIL'
r.data = {'id': id, 'limit': limit, "csrf_token": ""}
r.send()
return r.response | def play_list_detail(id, limit=20) | Get all tracks in a playlist. The curated-playlist endpoint only exposes playlist names and IDs, not their tracks, so this endpoint takes a playlist ID and returns all of its tracks.
:param id: The playlist ID
:param limit: (optional) Maximum number of rows to return, defaults to 20 | 7.085087 | 7.985322 | 0.887264 |
if id is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'LYRIC'
r.params = {'id': id}
r.send()
return r.response | def lyric(id) | Get the lyrics of a song by its ID.
:param id: The song ID | 7.959171 | 9.683273 | 0.82195 |
if id is None:
    raise ParamsError()
r = NCloudBot()
r.method = 'MUSIC_COMMENT'
r.params = {'id': id}
r.data = {'offset': offset, 'limit': limit, 'rid': id, "csrf_token": ""}
r.send()
return r.response | def music_comment(id, offset=0, limit=20) | Get the comment list of a song.
:param id: The song ID
:param offset: (optional) Start position of the page, defaults to 0
:param limit: (optional) Maximum number of rows to return, defaults to 20 | 5.906661 | 6.879433 | 0.858597 |
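A hypothetical end-to-end sketch chaining the wrappers above; the module name `api` is illustrative and the response layout is assumed from the public NetEase API:

    from api import search, lyric, music_comment  # module name is an assumption

    result = search('Hello', type=1, limit=5)      # type 1: songs
    song_id = result['result']['songs'][0]['id']   # response layout assumed
    print(lyric(song_id))
    print(music_comment(song_id, limit=10))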
old = get_option(name)
globals()[name] = value
return old | def set_option(name, value) | Set plydata option
Parameters
----------
name : str
Name of the option
value : object
New value of the option
Returns
-------
old : object
Old value of the option
See also
--------
:class:`options` | 5.131089 | 10.648097 | 0.481878 |
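A save-and-restore sketch built on the returned old value; `modify_input_data` is assumed to be a valid plydata option name:

    old = set_option('modify_input_data', True)  # returns the previous value
    try:
        ...  # work with the temporary setting
    finally:
        set_option('modify_input_data', old)     # restore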
n = len(data)
if isinstance(gdf, GroupedDataFrame):
    for i, col in enumerate(gdf.plydata_groups):
        if col not in data:
            group_values = [gdf[col].iloc[0]] * n
            # Need to be careful and maintain the dtypes
            # of the group columns
            if pdtypes.is_categorical_dtype(gdf[col]):
                col_values = pd.Categorical(
                    group_values,
                    categories=gdf[col].cat.categories,
                    ordered=gdf[col].cat.ordered
                )
            else:
                col_values = pd.Series(
                    group_values,
                    index=data.index,
                    dtype=gdf[col].dtype
                )
            # Group columns come first
            data.insert(i, col, col_values)
return data | def _add_group_columns(data, gdf) | Add group columns to data with a value from the grouped dataframe
It is assumed that the grouped dataframe contains a single group
>>> data = pd.DataFrame({
... 'x': [5, 6, 7]})
>>> gdf = GroupedDataFrame({
... 'g': list('aaa'),
... 'x': range(3)}, groups=['g'])
>>> _add_group_columns(data, gdf)
g x
0 a 5
1 a 6
2 a 7 | 3.214729 | 3.555518 | 0.904152 |
gdf._is_copy = None
result_index = gdf.index if self.keep_index else []
data = pd.DataFrame(index=result_index)
for expr in self.expressions:
    value = expr.evaluate(gdf, self.env)
    if isinstance(value, pd.DataFrame):
        data = value
        break
    else:
        _create_column(data, expr.column, value)
data = _add_group_columns(data, gdf)
return data | def _evaluate_group_dataframe(self, gdf) | Evaluate a single group dataframe
Parameters
----------
gdf : pandas.DataFrame
Input group dataframe
Returns
-------
out : pandas.DataFrame
Result data | 4.171311 | 4.832239 | 0.863225 |
code = compile(expr, source_name, "eval", self.flags, False)
return eval(code, {}, VarLookupDict([inner_namespace]
+ self._namespaces)) | def eval(self, expr, source_name="<string>", inner_namespace={}) | Evaluate some Python code in the encapsulated environment.
:arg expr: A string containing a Python expression.
:arg source_name: A name for this string, for use in tracebacks.
:arg inner_namespace: A dict-like object that will be checked first
when `expr` attempts to access any variables.
:returns: The value of `expr`. | 8.80812 | 12.233155 | 0.72002 |
d[key] = value
try:
    yield d
finally:
    del d[key] | def temporary_key(d, key, value) | Context manager that removes key from dictionary on closing
The dictionary will hold the key for the duration of
the context.
Parameters
----------
d : dict-like
Dictionary in which to insert a temporary key.
key : hashable
Location at which to insert ``value``.
value : object
Value to insert in ``d`` at location ``key``. | 3.285757 | 5.072637 | 0.647741 |
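A short demonstration of the lifetime guarantee:

    d = {'a': 1}
    with temporary_key(d, 'b', 2) as d2:
        print(d2['b'])   # 2, present inside the context
    print('b' in d)      # False, removed on exit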
setattr(obj, name, value)
try:
    yield obj
finally:
    delattr(obj, name) | def temporary_attr(obj, name, value) | Context manager that removes an attribute from an object on closing
The object will hold the attribute for the duration of
the context.
Parameters
----------
obj : object
Object onto which to add a temporary attribute.
name : str
Name of attribute to add to ``obj``.
value : object
Value of ``attr``. | 2.818514 | 3.793048 | 0.743074 |
env = EvalEnvironment.capture(1)
try:
    return env.namespace[name]
except KeyError:
    raise NameError("No data named {!r} found".format(name)) | def Q(name) | Quote a variable name
A way to 'quote' variable names, especially ones that do not otherwise
meet Python's variable name rules.
Parameters
----------
name : str
Name of variable
Returns
-------
value : object
Value of variable
Examples
--------
>>> import pandas as pd
>>> from plydata import define
>>> df = pd.DataFrame({'class': [10, 20, 30]})
Since ``class`` is a reserved python keyword it cannot be a variable
name, and therefore cannot be used in an expression without quoting it.
>>> df >> define(y='class+1')
Traceback (most recent call last):
File "<string>", line 1
class+1
^
SyntaxError: invalid syntax
>>> df >> define(y='Q("class")+1')
class y
0 10 11
1 20 21
2 30 31
Note that it is ``'Q("some name")'`` and not ``'Q(some name)'``.
As in the above example, you do not need to ``import`` ``Q`` before
you can use it. | 8.118253 | 12.806493 | 0.633917 |
original_index = [df.index for df in dfs]
have_bad_index = [not isinstance(df.index, pd.RangeIndex)
                  for df in dfs]
for df, bad in zip(dfs, have_bad_index):
    if bad:
        df.reset_index(drop=True, inplace=True)
try:
    yield dfs
finally:
    for df, bad, idx in zip(dfs, have_bad_index, original_index):
        if bad and len(df.index) == len(idx):
            df.index = idx | def regular_index(*dfs) | Change & restore the indices of dataframes
Dataframe with duplicate values can be hard to work with.
When split and recombined, you cannot restore the row order.
This can be the case even if the index has unique but
irregular/unordered. This contextmanager resets the unordered
indices of any dataframe passed to it, on exit it restores
the original index.
A regular index is of the form::
RangeIndex(start=0, stop=n, step=1)
Parameters
----------
dfs : tuple
Dataframes
Yields
------
dfs : tuple
Dataframe
Examples
--------
Create dataframes with different indices
>>> df1 = pd.DataFrame([4, 3, 2, 1])
>>> df2 = pd.DataFrame([3, 2, 1], index=[3, 0, 0])
>>> df3 = pd.DataFrame([11, 12, 13], index=[11, 12, 13])
Within the contexmanager all frames have nice range indices
>>> with regular_index(df1, df2, df3):
... print(df1.index)
... print(df2.index)
... print(df3.index)
RangeIndex(start=0, stop=4, step=1)
RangeIndex(start=0, stop=3, step=1)
RangeIndex(start=0, stop=3, step=1)
Indices restored
>>> df1.index
RangeIndex(start=0, stop=4, step=1)
>>> df2.index
Int64Index([3, 0, 0], dtype='int64')
>>> df3.index
Int64Index([11, 12, 13], dtype='int64') | 2.488878 | 2.839272 | 0.87659 |
seen = set()
def make_seen(x):
    seen.add(x)
    return x
return [make_seen(x) for x in lst if x not in seen] | def unique(lst) | Return unique elements
:class:`pandas.unique` and :class:`numpy.unique` cast
mixed type lists to the same type. They are faster, but
some times we want to maintain the type.
Parameters
----------
lst : list-like
List of items
Returns
-------
out : list
Unique items in the order that they appear in the
input.
Examples
--------
>>> import pandas as pd
>>> import numpy as np
>>> lst = ['one', 'two', 123, 'three']
>>> pd.unique(lst)
array(['one', 'two', '123', 'three'], dtype=object)
>>> np.unique(lst)
array(['123', 'one', 'three', 'two'],
dtype='<U5')
>>> unique(lst)
['one', 'two', 123, 'three']
pandas and numpy cast 123 to a string, and numpy does not
even maintain the order. | 3.058544 | 4.655399 | 0.656989 |
h, m, s, frac = map(int, groups)
ms = frac * 10**(3 - len(groups[-1]))
ms += s * 1000
ms += m * 60000
ms += h * 3600000
return ms | def timestamp_to_ms(groups) | Convert groups from :data:`pysubs2.time.TIMESTAMP` match to milliseconds.
Example:
>>> timestamp_to_ms(TIMESTAMP.match("0:00:00.42").groups())
420 | 2.483495 | 2.911049 | 0.853127 |
ms += s * 1000
ms += m * 60000
ms += h * 3600000
return int(round(ms)) | def times_to_ms(h=0, m=0, s=0, ms=0) | Convert hours, minutes, seconds to milliseconds.
Arguments may be positive or negative, int or float,
need not be normalized (``s=120`` is okay).
Returns:
Number of milliseconds (rounded to int). | 2.146832 | 2.737198 | 0.784317 |
sgn = "-" if ms < 0 else ""
h, m, s, ms = ms_to_times(abs(ms))
if fractions:
return sgn + "{:01d}:{:02d}:{:02d}.{:03d}".format(h, m, s, ms)
else:
return sgn + "{:01d}:{:02d}:{:02d}".format(h, m, s) | def ms_to_str(ms, fractions=False) | Prettyprint milliseconds to [-]H:MM:SS[.mmm]
Handles huge and/or negative times. Non-negative times with ``fractions=True``
are matched by :data:`pysubs2.time.TIMESTAMP`.
Arguments:
ms: Number of milliseconds (int, float or other numeric class).
fractions: Whether to print up to millisecond precision.
Returns:
str | 1.984356 | 2.231961 | 0.889064 |
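A few sample conversions (values worked out by hand from the format strings above):

    print(ms_to_str(61120))                  # 0:01:01
    print(ms_to_str(61120, fractions=True))  # 0:01:01.120
    print(ms_to_str(-5000))                  # -0:00:05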
delta = make_time(h=h, m=m, s=s, ms=ms, frames=frames, fps=fps)
self.start += delta
self.end += delta | def shift(self, h=0, m=0, s=0, ms=0, frames=None, fps=None) | Shift start and end times.
See :meth:`SSAFile.shift()` for full description. | 2.891431 | 3.192194 | 0.905782 |
with open(path, encoding=encoding) as fp:
    return cls.from_file(fp, format_, fps=fps, **kwargs) | def load(cls, path, encoding="utf-8", format_=None, fps=None, **kwargs) | Load subtitle file from given path.
Arguments:
path (str): Path to subtitle file.
encoding (str): Character encoding of input file.
Defaults to UTF-8, you may need to change this.
format_ (str): Optional, forces use of specific parser
(eg. `"srt"`, `"ass"`). Otherwise, format is detected
automatically from file contents. This argument should
be rarely needed.
fps (float): Framerate for frame-based formats (MicroDVD),
for other formats this argument is ignored. Framerate might
be detected from the file, in which case you don't need
to specify it here (when given, this argument overrides
autodetection).
kwargs: Extra options for the parser.
Returns:
SSAFile
Raises:
IOError
UnicodeDecodeError
pysubs2.exceptions.UnknownFPSError
pysubs2.exceptions.UnknownFormatIdentifierError
pysubs2.exceptions.FormatAutodetectionError
Note:
pysubs2 may autodetect subtitle format and/or framerate. These
values are set as :attr:`SSAFile.format` and :attr:`SSAFile.fps`
attributes.
Example:
>>> subs1 = pysubs2.load("subrip-subtitles.srt")
>>> subs2 = pysubs2.load("microdvd-subtitles.sub", fps=23.976) | 2.821504 | 6.601945 | 0.427375 |
fp = io.StringIO(string)
return cls.from_file(fp, format_, fps=fps, **kwargs) | def from_string(cls, string, format_=None, fps=None, **kwargs) | Load subtitle file from string.
See :meth:`SSAFile.load()` for full description.
Arguments:
string (str): Subtitle file in a string. Note that the string
must be Unicode (in Python 2).
Returns:
SSAFile
Example:
>>> text = '''
... 1
... 00:00:00,000 --> 00:00:05,000
... An example SubRip file.
... '''
>>> subs = SSAFile.from_string(text) | 3.452757 | 5.655931 | 0.610467 |
fp = io.StringIO()
self.to_file(fp, format_, fps=fps, **kwargs)
return fp.getvalue() | def to_string(self, format_, fps=None, **kwargs) | Get subtitle file as a string.
See :meth:`SSAFile.save()` for full description.
Returns:
str | 2.903417 | 4.49524 | 0.645887 |
impl = get_format_class(format_)
impl.to_file(self, fp, format_, fps=fps, **kwargs) | def to_file(self, fp, format_, fps=None, **kwargs) | Write subtitle file to file object.
See :meth:`SSAFile.save()` for full description.
Note:
This is a low-level method. Usually, one of :meth:`SSAFile.save()`
or :meth:`SSAFile.to_string()` is preferable.
Arguments:
fp (file object): A file object, ie. :class:`io.TextIOBase` instance.
Note that the file must be opened in text mode (as opposed to binary). | 4.253644 | 6.884232 | 0.617882 |
if in_fps <= 0 or out_fps <= 0:
    raise ValueError("Framerates must be positive, cannot transform %f -> %f" % (in_fps, out_fps))
ratio = in_fps / out_fps
for line in self:
    line.start = int(round(line.start * ratio))
    line.end = int(round(line.end * ratio)) | def transform_framerate(self, in_fps, out_fps) | Rescale all timestamps by ratio of in_fps/out_fps.
Can be used to fix files converted from frame-based to time-based
with wrongly assumed framerate.
Arguments:
in_fps (float)
out_fps (float)
Raises:
ValueError: Non-positive framerate given. | 2.525565 | 2.763763 | 0.913814 |
def _scenario(func, *args, **kw):
    _check_coroutine(func)
    if weight > 0:
        sname = name or func.__name__
        data = {'name': sname,
                'weight': weight, 'delay': delay,
                'func': func, 'args': args, 'kw': kw}
        _SCENARIO[sname] = data
    @functools.wraps(func)
    def __scenario(*args, **kw):
        return func(*args, **kw)
    return __scenario
return _scenario | def scenario(weight=1, delay=0.0, name=None) | Decorator to register a function as a Molotov test.
Options:
- **weight** used by Molotov when the scenarii are randomly picked.
The functions with the highest values are more likely to be picked.
Integer, defaults to 1. This value is ignored when the
*scenario_picker* decorator is used.
- **delay** once the scenario is done, the worker will sleep
*delay* seconds. Float, defaults to 0.
The general --delay argument you can pass to Molotov
will be summed with this delay.
- **name** name of the scenario. If not provided, will use the
function __name___ attribute.
The decorated function receives an :class:`aiohttp.ClientSession` instance. | 2.643283 | 3.377786 | 0.782549 |
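A minimal molotov test in the shape the docstring describes (the endpoint URL is illustrative):

    from molotov import scenario

    @scenario(weight=40, delay=0.5)
    async def hit_homepage(session):
        async with session.get('http://localhost:8080') as resp:
            assert resp.status == 200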
req = functools.partial(_request, endpoint, verb, session_options,
**options)
return _run_in_fresh_loop(req) | def request(endpoint, verb='GET', session_options=None, **options) | Performs a synchronous request.
Uses a dedicated event loop and aiohttp.ClientSession object.
Options:
- endpoint: the endpoint to call
- verb: the HTTP verb to use (defaults: GET)
- session_options: a dict containing options to initialize the session
(defaults: None)
- options: extra options for the request (defaults: None)
Returns a dict object with the following keys:
- content: the content of the response
- status: the status
- headers: a dict with all the response headers | 7.09925 | 11.811403 | 0.601051 |
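A synchronous call sketch using the dict contract described above (the endpoint is illustrative):

    res = request('http://localhost:8080/api', verb='GET')
    print(res['status'])   # e.g. 200
    print(res['content'])  # response body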
if name not in _VARS and factory is not None:
    _VARS[name] = factory()
return _VARS.get(name) | def get_var(name, factory=None) | Gets a global variable given its name.
If factory is not None and the variable is not set, factory
is a callable that will set the variable.
If not set, returns None. | 3.204886 | 4.024689 | 0.796307 |
return write_config_file(os.path.join(repo_directory, config_file), config) | def write_config(repo_directory, config) | Writes the specified configuration to the presentation repository. | 3.62955 | 4.567416 | 0.794662 |
if len(value) != 2:
    raise ValueError('viewport must have 2 dimensions')
for v in value:
    _assert_is_type('viewport dimension', v, int)
    if v < 0:
        raise ValueError('viewport dimensions cannot be negative') | def viewport(value) | 2-element list of ints : Dimensions of the viewport
The viewport is a bounding box containing the visualization. If the
dimensions of the visualization are larger than the viewport, then
the visualization will be scrollable.
If undefined, then the full visualization is shown. | 3.679985 | 4.034875 | 0.912044 |
for i, entry in enumerate(value):
    _assert_is_type('data[{0}]'.format(i), entry, Data) | def data(value) | list or KeyedList of ``Data`` : Data definitions
This defines the data being visualized. See the :class:`Data` class
for details. | 8.582577 | 9.572338 | 0.896602 |
for i, entry in enumerate(value):
    _assert_is_type('scales[{0}]'.format(i), entry, Scale) | def scales(value) | list or KeyedList of ``Scale`` : Scale definitions
Scales map the data from the domain of the data to some
visualization space (such as an x-axis). See the :class:`Scale`
class for details. | 7.340902 | 8.663894 | 0.847298 |
for i, entry in enumerate(value):
    _assert_is_type('axes[{0}]'.format(i), entry, Axis) | def axes(value) | list or KeyedList of ``Axis`` : Axis definitions
Axes define the locations of the data being mapped by the scales.
See the :class:`Axis` class for details. | 7.187729 | 8.182513 | 0.878426 |
for i, entry in enumerate(value):
    _assert_is_type('marks[{0}]'.format(i), entry, Mark) | def marks(value) | list or KeyedList of ``Mark`` : Mark definitions
Marks are the visual objects (such as lines, bars, etc.) that
represent the data in the visualization space. See the :class:`Mark`
class for details. | 7.105653 | 8.001483 | 0.888042 |
keys = self.axes.get_keys()
if keys:
    for key in keys:
        if key == 'x':
            self.axes[key].title = x
        elif key == 'y':
            self.axes[key].title = y
else:
    self.axes.extend([Axis(type='x', title=x),
                      Axis(type='y', title=y)])
return self | def axis_titles(self, x=None, y=None) | Apply axis titles to the figure.
This is a convenience method for manually modifying the "Axes" mark.
Parameters
----------
x: string, default 'null'
X-axis title
y: string, default 'null'
Y-axis title
Example
-------
>>> vis.axis_titles(y="Data 1", x="Data 2") | 2.938781 | 3.798805 | 0.773607 |
if self.axes:
    for axis in self.axes:
        self._set_axis_properties(axis)
        self._set_all_axis_color(axis, color)
        if title_size:
            ref = ValueRef(value=title_size)
            axis.properties.title.font_size = ref
else:
    raise ValueError('This Visualization has no axes!')
return self | def common_axis_properties(self, color=None, title_size=None) | Set common axis properties such as color
Parameters
----------
color: str, default None
Hex color str, etc | 4.884216 | 5.870546 | 0.831987 |
self._axis_properties('x', title_size, title_offset, label_angle,
label_align, color)
return self | def x_axis_properties(self, title_size=None, title_offset=None,
label_angle=None, label_align=None, color=None) | Change x-axis title font size and label angle
Parameters
----------
title_size: int, default None
Title size, in px
title_offset: int, default None
Pixel offset from given axis
label_angle: int, default None
label angle in degrees
label_align: str, default None
Label alignment
color: str, default None
Hex color | 2.551095 | 4.358789 | 0.585276 |
self._axis_properties('y', title_size, title_offset, label_angle,
label_align, color)
return self | def y_axis_properties(self, title_size=None, title_offset=None,
label_angle=None, label_align=None, color=None) | Change y-axis title font size and label angle
Parameters
----------
title_size: int, default None
Title size, in px
title_offset: int, default None
Pixel offset from given axis
label_angle: int, default None
label angle in degrees
label_align: str, default None
Label alignment
color: str, default None
Hex color | 2.531944 | 4.292426 | 0.589863 |
self.legends.append(Legend(title=title, fill=scale, offset=0,
                           properties=LegendProperties()))
if text_color:
    color_props = PropertySet(fill=ValueRef(value=text_color))
    self.legends[0].properties.labels = color_props
    self.legends[0].properties.title = color_props
return self | def legend(self, title=None, scale='color', text_color=None) | Convenience method for adding a legend to the figure.
Important: This defaults to the color scale that is generated with
Line, Area, Stacked Line, etc charts. For bar charts, the scale ref is
usually 'y'.
Parameters
----------
title: string, default None
Legend Title
scale: string, default 'color'
Scale reference for legend
text_color: str, default None
Title and label color | 4.888274 | 5.362185 | 0.91162 |
# TODO: support writing to separate file
return super(self.__class__, self).to_json(validate=validate,
pretty_print=pretty_print) | def to_json(self, validate=False, pretty_print=True, data_path=None) | Convert data to JSON
Parameters
----------
data_path : string
If not None, then data is written to a separate file at the
specified path. Note that the ``url`` attribute if the data must
be set independently for the data to load correctly.
Returns
-------
string
Valid Vega JSON. | 6.386598 | 7.143742 | 0.894013 |
kwargs.setdefault('headers', DEFAULT_HEADERS)
try:
    res = requests.get(url, **kwargs)
    res.raise_for_status()
except requests.RequestException as e:
    print(e)
else:
    html = res.text
    tree = Selector(text=html)
    return tree | def fetch(url: str, **kwargs) -> Selector | Send HTTP request and parse it as a DOM tree.
Args:
url (str): The url of the site.
Returns:
Selector: allows you to select parts of HTML text using CSS or XPath expressions. | 2.534437 | 2.823326 | 0.897678 |
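A usage sketch; `Selector` is parsel-style, so CSS and XPath selection both apply (the URL is illustrative):

    tree = fetch('https://example.com')
    if tree is not None:                      # fetch returns None on request errors
        print(tree.css('title::text').get())  # page title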
kwargs.setdefault('headers', DEFAULT_HEADERS)
async with aiohttp.ClientSession(**kwargs) as ses:
    async with ses.get(url, **kwargs) as res:
        html = await res.text()
        tree = Selector(text=html)
        return tree | async def async_fetch(url: str, **kwargs) -> Selector | Do the fetch in an async style.
Args:
url (str): The url of the site.
Returns:
Selector: allows you to select parts of HTML text using CSS or XPath expressions. | 2.701533 | 3.326574 | 0.812107 |
if sort_by:
    reverse = order == 'desc'
    total = sorted(total, key=itemgetter(sort_by), reverse=reverse)
if no_duplicate:
    total = [key for key, _ in groupby(total)]
data = json.dumps(total, ensure_ascii=False)
Path(name).write_text(data, encoding='utf-8') | def save_as_json(total: list, name='data.json', sort_by: str = None, no_duplicate=False, order='asc') | Save what you crawled as a json file.
Args:
total (list): Total of data you crawled.
name (str, optional): Defaults to 'data.json'. The name of the file.
sort_by (str, optional): Defaults to None. Sort items by a specific key.
no_duplicate (bool, optional): Defaults to False. If True, it will remove duplicated data.
order (str, optional): Defaults to 'asc'. The opposite option is 'desc'. | 2.191902 | 2.520806 | 0.869524 |
observer = str(observer)
if observer not in color_constants.OBSERVERS:
    raise InvalidObserverError(self)
self.observer = observer | def set_observer(self, observer) | Validates and sets the color's observer angle.
.. note:: This only changes the observer angle value. It does no conversion
of the color's coordinates.
:param str observer: One of '2' or '10'. | 6.7325 | 11.376806 | 0.591774 |
illuminant = illuminant.lower()
if illuminant not in color_constants.ILLUMINANTS[self.observer]:
    raise InvalidIlluminantError(illuminant)
self.illuminant = illuminant | def set_illuminant(self, illuminant) | Validates and sets the color's illuminant.
.. note:: This only changes the illuminant. It does no conversion
of the color's coordinates. For this, you'll want to refer to
:py:meth:`XYZColor.apply_adaptation <colormath.color_objects.XYZColor.apply_adaptation>`.
.. tip:: Call this after setting your observer.
:param str illuminant: One of the various illuminants. | 3.507856 | 3.961984 | 0.885379 |
return numpy.sqrt(
numpy.sum(numpy.power(lab_color_vector - lab_color_matrix, 2), axis=1)) | def delta_e_cie1976(lab_color_vector, lab_color_matrix) | Calculates the Delta E (CIE1976) between `lab_color_vector` and all
colors in `lab_color_matrix`. | 2.597642 | 3.04161 | 0.854035 |
color1_vector = _get_lab_color1_vector(color1)
color2_matrix = _get_lab_color2_matrix(color2)
delta_e = color_diff_matrix.delta_e_cmc(
color1_vector, color2_matrix, pl=pl, pc=pc)[0]
return numpy.asscalar(delta_e) | def delta_e_cmc(color1, color2, pl=2, pc=1) | Calculates the Delta E (CMC) of two colors.
CMC values
Acceptability: pl=2, pc=1
Perceptability: pl=1, pc=1 | 3.141172 | 3.611605 | 0.869744 |
return compile(script, vars, library_paths).first(_get_value(value, url, opener), default) | def first(script, value=None, default=None, vars={}, url=None, opener=default_opener, library_paths=[]) | Transform object by jq script, returning the first result.
Return default if result is empty. | 7.959636 | 9.199601 | 0.865215 |
self._property_handlers[name].append(handler)
_mpv_observe_property(self._event_handle, hash(name)&0xffffffffffffffff, name.encode('utf-8'), MpvFormat.NODE) | def observe_property(self, name, handler) | Register an observer on the named property. An observer is a function that is called with the new property
value every time the property's value is changed. The basic function signature is ``fun(property_name,
new_value)`` with new_value being the decoded property value as a python object. This function can be used as a
function decorator if no handler is given.
To unregister the observer, call either of ``mpv.unobserve_property(name, handler)``,
``mpv.unobserve_all_properties(handler)`` or the handler's ``unregister_mpv_properties`` attribute::
@player.observe_property('volume')
def my_handler(property_name, new_volume):
    print("It's loud!", new_volume)
my_handler.unregister_mpv_properties() | 9.93284 | 11.346714 | 0.875394 |
if name in self._default_serialization_methods:
    raise ValueError("Can't replace original %s serialization method" % name)
self._serialization_methods[name] = serialize_func | def register_serialization_method(self, name, serialize_func) | Register a custom serialization method that can be
used via schema configuration | 4.583811 | 5.113591 | 0.896398 |
return [c for c in self.connections.values()
if c.busy and not c.closed] | def busy_connections(self) | Return a list of active/busy connections
:rtype: list | 4.929905 | 5.680995 | 0.867789 |
return [c for c in self.connections.values()
if not c.busy and not c.closed] | def idle_connections(self) | Return a list of idle connections
:rtype: list | 5.079988 | 5.847279 | 0.868778 |
cid = id(connection)
try:
    self.connection_handle(connection).lock(session)
except KeyError:
    raise ConnectionNotFoundError(self.id, cid)
else:
    if self.idle_start:
        with self._lock:
            self.idle_start = None
LOGGER.debug('Pool %s locked connection %s', self.id, cid) | def lock(self, connection, session) | Explicitly lock the specified connection
:type connection: psycopg2.extensions.connection
:param connection: The connection to lock
:param queries.Session session: The session to hold the lock | 5.750189 | 6.523287 | 0.881486 |
with cls._lock:
    cls._ensure_pool_exists(pid)
    cls._pools[pid].add(connection) | def add(cls, pid, connection) | Add a new connection and session to a pool.
:param str pid: The pool id
:type connection: psycopg2.extensions.connection
:param connection: The connection to add to the pool | 5.563025 | 6.372959 | 0.872911 |
with cls._lock:
    try:
        cls._ensure_pool_exists(pid)
    except KeyError:
        LOGGER.debug('Pool clean invoked against missing pool %s', pid)
        return
    cls._pools[pid].clean()
    cls._maybe_remove_pool(pid) | def clean(cls, pid) | Clean the specified pool, removing any closed connections or
stale locks.
:param str pid: The pool id to clean | 5.338863 | 5.922016 | 0.901528 |
with cls._lock:
    cls._ensure_pool_exists(pid)
    return cls._pools[pid].get(session) | def get(cls, pid, session) | Get an idle, unused connection from the pool. Once a connection has
been retrieved, it will be marked as in-use until it is freed.
:param str pid: The pool ID
:param queries.Session session: The session to assign to the connection
:rtype: psycopg2.extensions.connection | 5.511879 | 7.309681 | 0.754052 |
with cls._lock:
    return cls._pools[pid].connection_handle(connection) | def get_connection(cls, pid, connection) | Return the specified :class:`~queries.pool.Connection` from the
pool.
:param str pid: The pool ID
:param connection: The connection to return for
:type connection: psycopg2.extensions.connection
:rtype: queries.pool.Connection | 11.869633 | 16.281431 | 0.729029 |
with cls._lock:
    cls._ensure_pool_exists(pid)
    return connection in cls._pools[pid] | def has_connection(cls, pid, connection) | Check to see if a pool has the specified connection
:param str pid: The pool ID
:param connection: The connection to check for
:type connection: psycopg2.extensions.connection
:rtype: bool | 6.414604 | 7.117787 | 0.901208 |
with cls._lock:
    cls._ensure_pool_exists(pid)
    cls._pools[pid].lock(connection, session) | def lock(cls, pid, connection, session) | Explicitly lock the specified connection in the pool
:param str pid: The pool id
:type connection: psycopg2.extensions.connection
:param connection: The connection to add to the pool
:param queries.Session session: The session to hold the lock | 5.319861 | 8.255533 | 0.644399 |
with cls._lock:
    cls._ensure_pool_exists(pid)
    cls._pools[pid].close()
    del cls._pools[pid] | def remove(cls, pid) | Remove a pool, closing all connections
:param str pid: The pool ID | 4.157514 | 4.755346 | 0.874282 |
with cls._lock:
    cls._ensure_pool_exists(pid)
    cls._pools[pid].set_idle_ttl(ttl) | def set_idle_ttl(cls, pid, ttl) | Set the idle TTL for a pool, after which it will be destroyed.
:param str pid: The pool id
:param int ttl: The TTL for an idle pool | 4.12344 | 5.411238 | 0.762014 |
with cls._lock:
    cls._ensure_pool_exists(pid)
    cls._pools[pid].set_max_size(size) | def set_max_size(cls, pid, size) | Set the maximum number of connections for the specified pool
:param str pid: The pool to set the size for
:param int size: The maximum number of connections | 4.33354 | 6.610533 | 0.655551 |
if not len(cls._pools[pid]):
    del cls._pools[pid] | def _maybe_remove_pool(cls, pid) | If the pool has no open connections, remove it
:param str pid: The pool id to clean | 5.482966 | 8.059567 | 0.680305 |
if self._conn.encoding != value:
    self._conn.set_client_encoding(value) | def set_encoding(self, value=DEFAULT_ENCODING) | Set the client encoding for the session if the value specified
is different than the current client encoding.
:param str value: The encoding value to use | 5.017162 | 6.387079 | 0.785518 |
cursor = connection.cursor(name=name,
                           cursor_factory=self._cursor_factory)
if name is not None:
    cursor.scrollable = True
    cursor.withhold = True
return cursor | def _get_cursor(self, connection, name=None) | Return a cursor for the given cursor_factory. Specify a name to
use server-side cursors.
:param connection: The connection to create a cursor on
:type connection: psycopg2.extensions.connection
:param str name: A cursor name for a server side cursor
:rtype: psycopg2.extensions.cursor | 4.010459 | 4.650095 | 0.862447 |
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE,
connection)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY,
connection) | def _register_unicode(connection) | Register the cursor to be able to receive Unicode string.
:type connection: psycopg2.extensions.connection
:param connection: Where to register things | 2.01908 | 2.349557 | 0.859345 |
if port:
    host = '%s:%s' % (host, port)
if password:
    return 'postgresql://%s:%s@%s/%s' % (user, password, host, dbname)
return 'postgresql://%s@%s/%s' % (user, host, dbname) | def uri(host='localhost', port=5432, dbname='postgres', user='postgres', password=None) | Return a PostgreSQL connection URI for the specified values.
:param str host: Host to connect to
:param int port: Port to connect on
:param str dbname: The database name
:param str user: User to connect as
:param str password: The password to use, None for no password
:return str: The PostgreSQL connection URI | 1.733076 | 2.025685 | 0.855551 |
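Sample outputs of the URI builder, traced through the branches above:

    print(uri())
    # postgresql://postgres@localhost:5432/postgres
    print(uri('db.example.com', 5433, 'mydb', 'alice', 's3cr3t'))
    # postgresql://alice:s3cr3t@db.example.com:5433/mydb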
value = 'http%s' % url[5:] if url[:5] == 'postgresql' else url
parsed = _urlparse.urlparse(value)
path, query = parsed.path, parsed.query
hostname = parsed.hostname if parsed.hostname else ''
return PARSED(parsed.scheme.replace('http', 'postgresql'),
parsed.netloc,
path,
parsed.params,
query,
parsed.fragment,
parsed.username,
parsed.password,
hostname.replace('%2f', '/'),
parsed.port) | def urlparse(url) | Parse the URL in a Python2/3 independent fashion.
:param str url: The URL to parse
:rtype: Parsed | 3.465995 | 3.988016 | 0.869103 |