Unnamed: 0 (int64, 0–2.93k) | code (string, lengths 101–62.2k) | docs (string, lengths 51–10.7k) | doc_len (int64, 4–1.74k) | words (int64, 4–4.82k) | lang (string, 1 class) | prompt (string, lengths 320–71.2k)
---|---|---|---|---|---|---
0 | def rolling(self, *args, **kwargs) -> RollingGroupby:
from pandas.core.window import RollingGroupby
return RollingGroupby(
self._selected_obj,
*args,
_grouper=self.grouper,
_as_index=self.as_index,
**kwargs,
)
|
Return a rolling grouper, providing rolling functionality per group.
| 9 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def rolling(self, *args, **kwargs) -> RollingGroupby:
from pandas.core.window import RollingGroupby
return RollingGroupby(
self._selected_obj,
*args,
_grouper=self.grouper,
_as_index=self.as_index,
**kwargs,
)
```
###Assistant :
Return a rolling grouper, providing rolling functionality per group.
|
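The `rolling` docstring above describes pandas' groupby-rolling API. A minimal usage sketch (the frame, column, and group names are illustrative):
```Python
import pandas as pd

# Build a small frame and compute a per-group rolling mean.
df = pd.DataFrame({"grp": ["a", "a", "a", "b", "b"],
                   "val": [1.0, 2.0, 3.0, 4.0, 5.0]})
# .rolling() on a groupby returns a RollingGroupby; windows never cross groups.
rolled = df.groupby("grp")["val"].rolling(window=2).mean()
print(rolled)
```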
1 | def expected_degree_graph(w, seed=None, selfloops=True):
r
n = len(w)
G = nx.empty_graph(n)
# If there are no nodes or no edges in the graph, return the empty graph.
if n == 0 or max(w) == 0:
return G
rho = 1 / sum(w)
# Sort the weights in decreasing order. The original order of the
# weights dictates the order of the (integer) node labels, so we
# need to remember the permutation applied in the sorting.
order = sorted(enumerate(w), key=itemgetter(1), reverse=True)
mapping = {c: u for c, (u, v) in enumerate(order)}
seq = [v for u, v in order]
last = n
if not selfloops:
last -= 1
for u in range(last):
v = u
if not selfloops:
v += 1
factor = seq[u] * rho
p = min(seq[v] * factor, 1)
while v < n and p > 0:
if p != 1:
r = seed.random()
v += math.floor(math.log(r, 1 - p))
if v < n:
q = min(seq[v] * factor, 1)
if seed.random() < q / p:
G.add_edge(mapping[u], mapping[v])
v += 1
p = q
return G
| Returns a random graph with given expected degrees.
Given a sequence of expected degrees $W=(w_0,w_1,\ldots,w_{n-1})$
of length $n$ this algorithm assigns an edge between node $u$ and
node $v$ with probability
.. math::
p_{uv} = \frac{w_u w_v}{\sum_k w_k} .
Parameters
----------
w : list
The list of expected degrees.
selfloops: bool (default=True)
Set to False to remove the possibility of self-loop edges.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
Graph
Examples
--------
>>> z = [10 for i in range(100)]
>>> G = nx.expected_degree_graph(z)
Notes
-----
The nodes have integer labels corresponding to the index of the expected
degrees input sequence.
The complexity of this algorithm is $\mathcal{O}(n+m)$ where $n$ is the
number of nodes and $m$ is the expected number of edges.
The model in [1]_ includes the possibility of self-loop edges.
Set selfloops=False to produce a graph without self loops.
For finite graphs this model doesn't produce exactly the given
expected degree sequence. Instead the expected degrees are as
follows.
For the case without self loops (selfloops=False),
.. math::
E[deg(u)] = \sum_{v \ne u} p_{uv}
= w_u \left( 1 - \frac{w_u}{\sum_k w_k} \right) .
NetworkX uses the standard convention that a self-loop edge counts 2
in the degree of a node, so with self loops (selfloops=True),
.. math::
E[deg(u)] = \sum_{v \ne u} p_{uv} + 2 p_{uu}
= w_u \left( 1 + \frac{w_u}{\sum_k w_k} \right) .
References
----------
.. [1] Fan Chung and L. Lu, Connected components in random graphs with
given expected degree sequences, Ann. Combinatorics, 6,
pp. 125-145, 2002.
.. [2] Joel Miller and Aric Hagberg,
Efficient generation of networks with given expected degrees,
in Algorithms and Models for the Web-Graph (WAW 2011),
Alan Frieze, Paul Horn, and Paweł Prałat (Eds), LNCS 6732,
pp. 115-126, 2011.
| 298 | 179 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def expected_degree_graph(w, seed=None, selfloops=True):
r
n = len(w)
G = nx.empty_graph(n)
# If there are no nodes or no edges in the graph, return the empty graph.
if n == 0 or max(w) == 0:
return G
rho = 1 / sum(w)
# Sort the weights in decreasing order. The original order of the
# weights dictates the order of the (integer) node labels, so we
# need to remember the permutation applied in the sorting.
order = sorted(enumerate(w), key=itemgetter(1), reverse=True)
mapping = {c: u for c, (u, v) in enumerate(order)}
seq = [v for u, v in order]
last = n
if not selfloops:
last -= 1
for u in range(last):
v = u
if not selfloops:
v += 1
factor = seq[u] * rho
p = min(seq[v] * factor, 1)
while v < n and p > 0:
if p != 1:
r = seed.random()
v += math.floor(math.log(r, 1 - p))
if v < n:
q = min(seq[v] * factor, 1)
if seed.random() < q / p:
G.add_edge(mapping[u], mapping[v])
v += 1
p = q
return G
```
###Assistant : Returns a random graph with given expected degrees.
Given a sequence of expected degrees $W=(w_0,w_1,\ldots,w_{n-1})$
of length $n$ this algorithm assigns an edge between node $u$ and
node $v$ with probability
.. math::
p_{uv} = \frac{w_u w_v}{\sum_k w_k} .
Parameters
----------
w : list
The list of expected degrees.
selfloops: bool (default=True)
Set to False to remove the possibility of self-loop edges.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
Graph
Examples
--------
>>> z = [10 for i in range(100)]
>>> G = nx.expected_degree_graph(z)
Notes
-----
The nodes have integer labels corresponding to the index of the expected
degrees input sequence.
The complexity of this algorithm is $\mathcal{O}(n+m)$ where $n$ is the
number of nodes and $m$ is the expected number of edges.
The model in [1]_ includes the possibility of self-loop edges.
Set selfloops=False to produce a graph without self loops.
For finite graphs this model doesn't produce exactly the given
expected degree sequence. Instead the expected degrees are as
follows.
For the case without self loops (selfloops=False),
.. math::
E[deg(u)] = \sum_{v \ne u} p_{uv}
= w_u \left( 1 - \frac{w_u}{\sum_k w_k} \right) .
NetworkX uses the standard convention that a self-loop edge counts 2
in the degree of a node, so with self loops (selfloops=True),
.. math::
E[deg(u)] = \sum_{v \ne u} p_{uv} + 2 p_{uu}
= w_u \left( 1 + \frac{w_u}{\sum_k w_k} \right) .
References
----------
.. [1] Fan Chung and L. Lu, Connected components in random graphs with
given expected degree sequences, Ann. Combinatorics, 6,
pp. 125-145, 2002.
.. [2] Joel Miller and Aric Hagberg,
Efficient generation of networks with given expected degrees,
in Algorithms and Models for the Web-Graph (WAW 2011),
Alan Frieze, Paul Horn, and Paweł Prałat (Eds), LNCS 6732,
pp. 115-126, 2011.
|
2 | def save(self, path):
os.makedirs(path, exist_ok=True)
with open(os.path.join(path, "metrics.json"), "w") as fp:
json.dump(self.metrics, fp)
artifacts_metadata = {
artifact_name: {
"uri": artifact.uri,
"class_name": _get_fully_qualified_class_name(artifact),
}
for artifact_name, artifact in self.artifacts.items()
}
with open(os.path.join(path, "artifacts_metadata.json"), "w") as fp:
json.dump(artifacts_metadata, fp)
artifacts_dir = os.path.join(path, "artifacts")
os.mkdir(artifacts_dir)
for artifact_name, artifact in self.artifacts.items():
artifact._save(os.path.join(artifacts_dir, artifact_name))
| Write the evaluation results to the specified local filesystem path | 10 | 49 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def save(self, path):
os.makedirs(path, exist_ok=True)
with open(os.path.join(path, "metrics.json"), "w") as fp:
json.dump(self.metrics, fp)
artifacts_metadata = {
artifact_name: {
"uri": artifact.uri,
"class_name": _get_fully_qualified_class_name(artifact),
}
for artifact_name, artifact in self.artifacts.items()
}
with open(os.path.join(path, "artifacts_metadata.json"), "w") as fp:
json.dump(artifacts_metadata, fp)
artifacts_dir = os.path.join(path, "artifacts")
os.mkdir(artifacts_dir)
for artifact_name, artifact in self.artifacts.items():
artifact._save(os.path.join(artifacts_dir, artifact_name))
```
###Assistant : Write the evaluation results to the specified local filesystem path
|
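The `save` method above writes `metrics.json`, `artifacts_metadata.json`, and an `artifacts/` directory under `path`. A sketch of reading the results back, assuming an earlier call such as `result.save("eval_output")`:
```Python
import json
import os

path = "eval_output"  # assumed output directory from an earlier save() call
with open(os.path.join(path, "metrics.json")) as fp:
    metrics = json.load(fp)
with open(os.path.join(path, "artifacts_metadata.json")) as fp:
    artifacts_metadata = json.load(fp)

print(metrics)
# Each entry records the artifact's URI and fully qualified class name.
print({name: meta["class_name"] for name, meta in artifacts_metadata.items()})
```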
3 | def test_build_group_generic_issue_attachment(self):
event = self.store_event(
data={"message": "Hello world", "level": "error"}, project_id=self.project.id
)
event = event.for_group(event.groups[0])
occurrence = self.build_occurrence(level="info")
occurrence.save(project_id=self.project.id)
event.occurrence = occurrence
event.group.type = GroupType.PROFILE_BLOCKED_THREAD
attachments = SlackIssuesMessageBuilder(group=event.group, event=event).build()
assert attachments["title"] == occurrence.issue_title
assert attachments["text"] == occurrence.evidence_display[0].value
assert attachments["fallback"] == f"[{self.project.slug}] {occurrence.issue_title}"
assert attachments["color"] == "#2788CE" # blue for info level
| Test that a generic issue type's Slack alert contains the expected values | 12 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_build_group_generic_issue_attachment(self):
event = self.store_event(
data={"message": "Hello world", "level": "error"}, project_id=self.project.id
)
event = event.for_group(event.groups[0])
occurrence = self.build_occurrence(level="info")
occurrence.save(project_id=self.project.id)
event.occurrence = occurrence
event.group.type = GroupType.PROFILE_BLOCKED_THREAD
attachments = SlackIssuesMessageBuilder(group=event.group, event=event).build()
assert attachments["title"] == occurrence.issue_title
assert attachments["text"] == occurrence.evidence_display[0].value
assert attachments["fallback"] == f"[{self.project.slug}] {occurrence.issue_title}"
assert attachments["color"] == "#2788CE" # blue for info level
```
###Assistant : Test that a generic issue type's Slack alert contains the expected values
|
4 | def apply(self, func, mask=None) -> 'ImageProcessor':
img = orig_img = self._img
img = func(img).astype(orig_img.dtype)
if img.ndim != 4:
raise Exception('func used in ImageProcessor.apply changed format of image')
if mask is not None:
mask = self._check_normalize_mask(mask)
img = ne.evaluate('orig_img*(1-mask) + img*mask').astype(orig_img.dtype)
self._img = img
return self
|
Apply your own function on the internal image.
The image has NHWC format. Do not change the format, but dims can be changed.
func callable (img) -> img
example:
.apply( lambda img: img-[102,127,63] )
| 31 | 45 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def apply(self, func, mask=None) -> 'ImageProcessor':
img = orig_img = self._img
img = func(img).astype(orig_img.dtype)
if img.ndim != 4:
raise Exception('func used in ImageProcessor.apply changed format of image')
if mask is not None:
mask = self._check_normalize_mask(mask)
img = ne.evaluate('orig_img*(1-mask) + img*mask').astype(orig_img.dtype)
self._img = img
return self
```
###Assistant :
Apply your own function on the internal image.
The image has NHWC format. Do not change the format, but dims can be changed.
func callable (img) -> img
example:
.apply( lambda img: img-[102,127,63] )
|
5 | def predict(self, x):
# start the timer
self.timer.start()
v_, _ = self.eval_out() # evaluate the ratings and the associated probabilities
vp = self.sess.run(v_, feed_dict={self.vu: x})
# stop the timer
self.timer.stop()
log.info("Done inference, time %f2" % self.timer.interval)
return vp
| Returns the inferred ratings. This method is similar to recommend_k_items() with the
exception that it returns all the inferred ratings
Basic mechanics:
The method samples new ratings from the learned joint distribution, together with
their probabilities. The input x must have the same number of columns as the one used
for training the model, i.e. the same number of items, but it can have an arbitrary number
of rows (users).
Args:
x (numpy.ndarray, int32): Input user/affinity matrix. Note that this can be a single vector, i.e.
the ratings of a single user.
Returns:
numpy.ndarray, float:
- A matrix with the inferred ratings.
- The elapsed time for prediction.
| 108 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def predict(self, x):
# start the timer
self.timer.start()
v_, _ = self.eval_out() # evaluate the ratings and the associated probabilities
vp = self.sess.run(v_, feed_dict={self.vu: x})
# stop the timer
self.timer.stop()
log.info("Done inference, time %f2" % self.timer.interval)
return vp
```
###Assistant : Returns the inferred ratings. This method is similar to recommend_k_items() with the
exception that it returns all the inferred ratings
Basic mechanics:
The method samples new ratings from the learned joint distribution, together with
their probabilities. The input x must have the same number of columns as the one used
for training the model, i.e. the same number of items, but it can have an arbitrary number
of rows (users).
Args:
x (numpy.ndarray, int32): Input user/affinity matrix. Note that this can be a single vector, i.e.
the ratings of a single user.
Returns:
numpy.ndarray, float:
- A matrix with the inferred ratings.
- The elapsed time for prediction.
|
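A hypothetical call sketch for `predict`: `model` stands for a trained instance of the class above and `n_items` for the number of item columns used at training time (both names are assumptions):
```Python
import numpy as np

n_items = 1682                               # assumed item count from training
x = np.zeros((1, n_items), dtype=np.int32)   # a single user: one row, all items
x[0, [10, 42, 77]] = [5, 3, 4]               # that user's known ratings
inferred = model.predict(x)                  # hypothetical trained model instance
print(inferred.shape)                        # (1, n_items): inferred ratings
```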
6 | def raw_decode(self, s, idx=0):
try:
obj, end = self.scan_once(s, idx)
except StopIteration as err:
raise JSONDecodeError("Expecting value", s, err.value) from None
return obj, end
| Decode a JSON document from ``s`` (a ``str`` beginning with
a JSON document) and return a 2-tuple of the Python
representation and the index in ``s`` where the document ended.
This can be used to decode a JSON document from a string that may
have extraneous data at the end.
| 50 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def raw_decode(self, s, idx=0):
try:
obj, end = self.scan_once(s, idx)
except StopIteration as err:
raise JSONDecodeError("Expecting value", s, err.value) from None
return obj, end
```
###Assistant : Decode a JSON document from ``s`` (a ``str`` beginning with
a JSON document) and return a 2-tuple of the Python
representation and the index in ``s`` where the document ended.
This can be used to decode a JSON document from a string that may
have extraneous data at the end.
|
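A usage sketch of `raw_decode` through the public `json.JSONDecoder`, showing the returned end index and the trailing data it leaves untouched:
```Python
import json

decoder = json.JSONDecoder()
s = '{"a": 1} trailing text'
obj, end = decoder.raw_decode(s)
print(obj)      # {'a': 1}
print(end)      # index in s where the JSON document ended
print(s[end:])  # ' trailing text'
```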
7 | def has_bridges(G, root=None):
try:
next(bridges(G))
except StopIteration:
return False
else:
return True
@not_implemented_for("multigraph")
@not_implemented_for("directed") | Decide whether a graph has any bridges.
A *bridge* in a graph is an edge whose removal causes the number of
connected components of the graph to increase.
Parameters
----------
G : undirected graph
root : node (optional)
A node in the graph `G`. If specified, only the bridges in the
connected component containing this node will be considered.
Returns
-------
bool
Whether the graph (or the connected component containing `root`)
has any bridges.
Raises
------
NodeNotFound
If `root` is not in the graph `G`.
NetworkXNotImplemented
If `G` is a directed graph.
Examples
--------
The barbell graph with parameter zero has a single bridge::
>>> G = nx.barbell_graph(10, 0)
>>> nx.has_bridges(G)
True
On the other hand, the cycle graph has no bridges::
>>> G = nx.cycle_graph(5)
>>> nx.has_bridges(G)
False
Notes
-----
This implementation uses the :func:`networkx.bridges` function, so
it shares its worst-case time complexity, $O(m + n)$, ignoring
polylogarithmic factors, where $n$ is the number of nodes in the
graph and $m$ is the number of edges.
| 167 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def has_bridges(G, root=None):
try:
next(bridges(G))
except StopIteration:
return False
else:
return True
@not_implemented_for("multigraph")
@not_implemented_for("directed")
```
###Assistant : Decide whether a graph has any bridges.
A *bridge* in a graph is an edge whose removal causes the number of
connected components of the graph to increase.
Parameters
----------
G : undirected graph
root : node (optional)
A node in the graph `G`. If specified, only the bridges in the
connected component containing this node will be considered.
Returns
-------
bool
Whether the graph (or the connected component containing `root`)
has any bridges.
Raises
------
NodeNotFound
If `root` is not in the graph `G`.
NetworkXNotImplemented
If `G` is a directed graph.
Examples
--------
The barbell graph with parameter zero has a single bridge::
>>> G = nx.barbell_graph(10, 0)
>>> nx.has_bridges(G)
True
On the other hand, the cycle graph has no bridges::
>>> G = nx.cycle_graph(5)
>>> nx.has_bridges(G)
False
Notes
-----
This implementation uses the :func:`networkx.bridges` function, so
it shares its worst-case time complexity, $O(m + n)$, ignoring
polylogarithmic factors, where $n$ is the number of nodes in the
graph and $m$ is the number of edges.
|
8 | def wheel_metadata(source, dist_info_dir):
# type: (ZipFile, str) -> Message
path = f"{dist_info_dir}/WHEEL"
# Zip file path separators must be /
wheel_contents = read_wheel_metadata_file(source, path)
try:
wheel_text = wheel_contents.decode()
except UnicodeDecodeError as e:
raise UnsupportedWheel(f"error decoding {path!r}: {e!r}")
# FeedParser (used by Parser) does not raise any exceptions. The returned
# message may have .defects populated, but for backwards-compatibility we
# currently ignore them.
return Parser().parsestr(wheel_text)
| Return the WHEEL metadata of an extracted wheel, if possible.
Otherwise, raise UnsupportedWheel.
| 13 | 65 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def wheel_metadata(source, dist_info_dir):
# type: (ZipFile, str) -> Message
path = f"{dist_info_dir}/WHEEL"
# Zip file path separators must be /
wheel_contents = read_wheel_metadata_file(source, path)
try:
wheel_text = wheel_contents.decode()
except UnicodeDecodeError as e:
raise UnsupportedWheel(f"error decoding {path!r}: {e!r}")
# FeedParser (used by Parser) does not raise any exceptions. The returned
# message may have .defects populated, but for backwards-compatibility we
# currently ignore them.
return Parser().parsestr(wheel_text)
```
###Assistant : Return the WHEEL metadata of an extracted wheel, if possible.
Otherwise, raise UnsupportedWheel.
|
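The same idea without pip internals, as a sketch: read the `WHEEL` file from a wheel archive and parse it as an email-style message (the wheel filename and dist-info directory below are assumptions):
```Python
import zipfile
from email.parser import Parser

wheel_path = "example-1.0-py3-none-any.whl"   # assumed wheel on disk
dist_info_dir = "example-1.0.dist-info"       # assumed dist-info directory
with zipfile.ZipFile(wheel_path) as source:
    # Zip file path separators must be "/", as noted in the code above.
    wheel_text = source.read(f"{dist_info_dir}/WHEEL").decode()
msg = Parser().parsestr(wheel_text)
print(msg["Wheel-Version"], msg["Root-Is-Purelib"])
```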
9 | def remove_column(self, i, *args, **kwargs):
table = self.table.remove_column(i, *args, **kwargs)
name = self.table.column_names[i]
blocks = []
for tables in self.blocks:
blocks.append(
[
t.remove_column(t.column_names.index(name), *args, **kwargs) if name in t.column_names else t
for t in tables
]
)
return ConcatenationTable(table, blocks)
|
Create new Table with the indicated column removed.
Args:
i (:obj:`int`):
Index of column to remove.
Returns:
:class:`datasets.table.Table`:
New table without the column.
| 23 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def remove_column(self, i, *args, **kwargs):
table = self.table.remove_column(i, *args, **kwargs)
name = self.table.column_names[i]
blocks = []
for tables in self.blocks:
blocks.append(
[
t.remove_column(t.column_names.index(name), *args, **kwargs) if name in t.column_names else t
for t in tables
]
)
return ConcatenationTable(table, blocks)
```
###Assistant :
Create new Table with the indicated column removed.
Args:
i (:obj:`int`):
Index of column to remove.
Returns:
:class:`datasets.table.Table`:
New table without the column.
|
10 | def test_cable_cannot_terminate_to_a_wireless_interface(self):
wireless_interface = Interface(device=self.device1, name="W1", type=InterfaceTypeChoices.TYPE_80211A)
cable = Cable(a_terminations=[self.interface2], b_terminations=[wireless_interface])
with self.assertRaises(ValidationError):
cable.clean()
|
A cable cannot terminate to a wireless interface
| 8 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_cable_cannot_terminate_to_a_wireless_interface(self):
wireless_interface = Interface(device=self.device1, name="W1", type=InterfaceTypeChoices.TYPE_80211A)
cable = Cable(a_terminations=[self.interface2], b_terminations=[wireless_interface])
with self.assertRaises(ValidationError):
cable.clean()
```
###Assistant :
A cable cannot terminate to a wireless interface
|
11 | def get_test_db_clone_settings(self, suffix):
# When this function is called, the test database has been created
# already and its name has been copied to settings_dict['NAME'] so
# we don't need to call _get_test_db_name.
orig_settings_dict = self.connection.settings_dict
return {
**orig_settings_dict,
"NAME": "{}_{}".format(orig_settings_dict["NAME"], suffix),
}
|
Return a modified connection settings dict for the n-th clone of a DB.
| 13 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_test_db_clone_settings(self, suffix):
# When this function is called, the test database has been created
# already and its name has been copied to settings_dict['NAME'] so
# we don't need to call _get_test_db_name.
orig_settings_dict = self.connection.settings_dict
return {
**orig_settings_dict,
"NAME": "{}_{}".format(orig_settings_dict["NAME"], suffix),
}
```
###Assistant :
Return a modified connection settings dict for the n-th clone of a DB.
|
12 | def open(self, host='', port=IMAP4_PORT, timeout=None):
self.host = host
self.port = port
self.sock = self._create_socket(timeout)
self.file = self.sock.makefile('rb')
| Set up a connection to the remote server on "host:port"
(default: localhost:standard IMAP4 port).
This connection will be used by the routines:
read, readline, send, shutdown.
| 23 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def open(self, host='', port=IMAP4_PORT, timeout=None):
self.host = host
self.port = port
self.sock = self._create_socket(timeout)
self.file = self.sock.makefile('rb')
```
###Assistant : Set up a connection to the remote server on "host:port"
(default: localhost:standard IMAP4 port).
This connection will be used by the routines:
read, readline, send, shutdown.
|
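In practice `open` is called for you by the `imaplib.IMAP4` constructor; a minimal sketch (the host name is an assumption and a reachable IMAP server is required):
```Python
import imaplib

# Constructing IMAP4 calls open(host, port, timeout) internally and sets up
# the socket and file used by read/readline/send/shutdown.
conn = imaplib.IMAP4("imap.example.com", 143, timeout=10)
print(conn.welcome)  # server greeting read over the new connection
conn.logout()
```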
13 | def synchronized_output_end_sequence(self) -> str:
if self.synchronised_output:
return TERMINAL_MODES_ANSI_SEQUENCES[Mode.SynchronizedOutput]["end_sync"]
return ""
|
Returns the ANSI sequence that we should send to the terminal to tell it that
it should stop buffering the content we're about to send.
If the terminal doesn't seem to support synchronised updates the string will be empty.
Returns:
str: the "synchronised output stop" ANSI sequence. It will be ab empty string
if the terminal emulator doesn't seem to support the "synchronised updates" mode.
| 65 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def synchronized_output_end_sequence(self) -> str:
if self.synchronised_output:
return TERMINAL_MODES_ANSI_SEQUENCES[Mode.SynchronizedOutput]["end_sync"]
return ""
```
###Assistant :
Returns the ANSI sequence that we should send to the terminal to tell it that
it should stop buffering the content we're about to send.
If the terminal doesn't seem to support synchronised updates the string will be empty.
Returns:
str: the "synchronised output stop" ANSI sequence. It will be ab empty string
if the terminal emulator doesn't seem to support the "synchronised updates" mode.
|
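A sketch of how such begin/end sequences are typically used, assuming the widely adopted DEC private mode 2026 escape codes (the exact strings stored in `TERMINAL_MODES_ANSI_SEQUENCES` are an assumption here):
```Python
import sys

# Assumed "synchronised update" sequences (DEC private mode 2026).
BEGIN_SYNC = "\x1b[?2026h"
END_SYNC = "\x1b[?2026l"

def write_frame(lines):
    # Ask the terminal to buffer the whole frame, write it, then release it
    # with the end-sync sequence so the repaint happens atomically.
    sys.stdout.write(BEGIN_SYNC)
    sys.stdout.write("\n".join(lines) + "\n")
    sys.stdout.write(END_SYNC)
    sys.stdout.flush()

write_frame(["rendering a frame", "without visible tearing"])
```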
14 | def _band_penalty_coefficients(self, fc, q, gain, filter_frs):
ref_frs = biquad.digital_coeffs(self.frequency, 192e3, *biquad.peaking(fc, q, gain, fs=192e3))
est_sums = np.sum(filter_frs, axis=1)
ref_sums = np.sum(ref_frs, axis=1)
penalties = np.zeros((len(fc),))
mask = np.squeeze(ref_sums) != 0.0
penalties[mask] = est_sums[mask] / ref_sums[mask]
return 10 * (1 - np.expand_dims(penalties, 1))
| Calculates penalty coefficients for filters if their transition bands extend beyond Nyquist frequency
The calculation is based on ratio of frequency response integrals between 44.1 kHz and 192 kHz
Args:
fc: Filter center frequencies, 1-D array
q: Filter qualities, 1-D array
gain: Filter gains, 1-D array
filter_frs: Filter frequency responses, 2-D array, one fr per row
Returns:
Column array of penalty coefficients, one per filter
| 65 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _band_penalty_coefficients(self, fc, q, gain, filter_frs):
ref_frs = biquad.digital_coeffs(self.frequency, 192e3, *biquad.peaking(fc, q, gain, fs=192e3))
est_sums = np.sum(filter_frs, axis=1)
ref_sums = np.sum(ref_frs, axis=1)
penalties = np.zeros((len(fc),))
mask = np.squeeze(ref_sums) != 0.0
penalties[mask] = est_sums[mask] / ref_sums[mask]
return 10 * (1 - np.expand_dims(penalties, 1))
```
###Assistant : Calculates penalty coefficients for filters if their transition bands extend beyond Nyquist frequency
The calculation is based on ratio of frequency response integrals between 44.1 kHz and 192 kHz
Args:
fc: Filter center frequencies, 1-D array
q: Filter qualities, 1-D array
gain: Filter gains, 1-D array
filter_frs: Filter frequency responses, 2-D array, one fr per row
Returns:
Column array of penalty coefficients, one per filter
|
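Written out as a formula, the coefficient the code above returns for filter $i$ is (a reconstruction from the code, with $H_i$ the supplied response at the working sample rate per the docstring and $H_i^{192}$ the 192 kHz reference response, both summed over the evaluated frequencies):
```latex
c_i \;=\; 10\left(1 - \frac{\sum_f H_i(f)}{\sum_f H_i^{192}(f)}\right),
\qquad
c_i = 10 \quad \text{when } \sum_f H_i^{192}(f) = 0 .
```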
15 | def test_predict_on_toy_problem(global_random_seed):
clf1 = LogisticRegression(random_state=global_random_seed)
clf2 = RandomForestClassifier(n_estimators=10, random_state=global_random_seed)
clf3 = GaussianNB()
X = np.array(
[[-1.1, -1.5], [-1.2, -1.4], [-3.4, -2.2], [1.1, 1.2], [2.1, 1.4], [3.1, 2.3]]
)
y = np.array([1, 1, 1, 2, 2, 2])
assert_array_equal(clf1.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
assert_array_equal(clf2.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
assert_array_equal(clf3.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
eclf = VotingClassifier(
estimators=[("lr", clf1), ("rf", clf2), ("gnb", clf3)],
voting="hard",
weights=[1, 1, 1],
)
assert_array_equal(eclf.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
eclf = VotingClassifier(
estimators=[("lr", clf1), ("rf", clf2), ("gnb", clf3)],
voting="soft",
weights=[1, 1, 1],
)
assert_array_equal(eclf.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
| Manually check predicted class labels for toy dataset. | 8 | 104 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_predict_on_toy_problem(global_random_seed):
clf1 = LogisticRegression(random_state=global_random_seed)
clf2 = RandomForestClassifier(n_estimators=10, random_state=global_random_seed)
clf3 = GaussianNB()
X = np.array(
[[-1.1, -1.5], [-1.2, -1.4], [-3.4, -2.2], [1.1, 1.2], [2.1, 1.4], [3.1, 2.3]]
)
y = np.array([1, 1, 1, 2, 2, 2])
assert_array_equal(clf1.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
assert_array_equal(clf2.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
assert_array_equal(clf3.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
eclf = VotingClassifier(
estimators=[("lr", clf1), ("rf", clf2), ("gnb", clf3)],
voting="hard",
weights=[1, 1, 1],
)
assert_array_equal(eclf.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
eclf = VotingClassifier(
estimators=[("lr", clf1), ("rf", clf2), ("gnb", clf3)],
voting="soft",
weights=[1, 1, 1],
)
assert_array_equal(eclf.fit(X, y).predict(X), [1, 1, 1, 2, 2, 2])
```
###Assistant : Manually check predicted class labels for toy dataset.
|
16 | def fit_transform(self, X, y=None):
self._validate_params()
return self._transform(X, fitting=True)
| Learn a list of feature name -> indices mappings and transform X.
Like fit(X) followed by transform(X), but does not require
materializing X in memory.
Parameters
----------
X : Mapping or iterable over Mappings
Dict(s) or Mapping(s) from feature names (arbitrary Python
objects) to feature values (strings or convertible to dtype).
.. versionchanged:: 0.24
Accepts multiple string values for one categorical feature.
y : (ignored)
Ignored parameter.
Returns
-------
Xa : {array, sparse matrix}
Feature vectors; always 2-d.
| 78 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def fit_transform(self, X, y=None):
self._validate_params()
return self._transform(X, fitting=True)
```
###Assistant : Learn a list of feature name -> indices mappings and transform X.
Like fit(X) followed by transform(X), but does not require
materializing X in memory.
Parameters
----------
X : Mapping or iterable over Mappings
Dict(s) or Mapping(s) from feature names (arbitrary Python
objects) to feature values (strings or convertible to dtype).
.. versionchanged:: 0.24
Accepts multiple string values for one categorical feature.
y : (ignored)
Ignored parameter.
Returns
-------
Xa : {array, sparse matrix}
Feature vectors; always 2-d.
|
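A usage sketch of `fit_transform` on scikit-learn's `DictVectorizer`, the class this method belongs to:
```Python
from sklearn.feature_extraction import DictVectorizer

measurements = [
    {"city": "Dubai", "temperature": 33.0},
    {"city": "London", "temperature": 12.0},
    {"city": "San Francisco", "temperature": 18.0},
]
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(measurements)   # learn the feature -> index mapping and transform
print(vec.get_feature_names_out())    # one column per categorical value / numeric feature
print(X.shape)                        # (3, 4)
```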
17 | def _on_feature_permission_requested(self, url, feature):
page = self._widget.page()
grant_permission = functools.partial(
page.setFeaturePermission, url, feature,
QWebEnginePage.PermissionPolicy.PermissionGrantedByUser)
deny_permission = functools.partial(
page.setFeaturePermission, url, feature,
QWebEnginePage.PermissionPolicy.PermissionDeniedByUser)
permission_str = debug.qenum_key(QWebEnginePage, feature)
if not url.isValid():
# WORKAROUND for https://bugreports.qt.io/browse/QTBUG-85116
is_qtbug = (qtutils.version_check('5.15.0',
compiled=False,
exact=True) and
self._tab.is_private and
feature == QWebEnginePage.Feature.Notifications)
logger = log.webview.debug if is_qtbug else log.webview.warning
logger("Ignoring feature permission {} for invalid URL {}".format(
permission_str, url))
deny_permission()
return
if feature not in self._options:
log.webview.error("Unhandled feature permission {}".format(
permission_str))
deny_permission()
return
if (
feature in [QWebEnginePage.Feature.DesktopVideoCapture,
QWebEnginePage.Feature.DesktopAudioVideoCapture] and
qtutils.version_check('5.13', compiled=False) and
not qtutils.version_check('5.13.2', compiled=False)
):
# WORKAROUND for https://bugreports.qt.io/browse/QTBUG-78016
log.webview.warning("Ignoring desktop sharing request due to "
"crashes in Qt < 5.13.2")
deny_permission()
return
question = shared.feature_permission(
url=url.adjusted(QUrl.UrlFormattingOption.RemovePath),
option=self._options[feature], msg=self._messages[feature],
yes_action=grant_permission, no_action=deny_permission,
abort_on=[self._tab.abort_questions])
if question is not None:
page.featurePermissionRequestCanceled.connect(
functools.partial(self._on_feature_permission_cancelled,
question, url, feature))
| Ask the user for approval for geolocation/media/etc.. | 7 | 125 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _on_feature_permission_requested(self, url, feature):
page = self._widget.page()
grant_permission = functools.partial(
page.setFeaturePermission, url, feature,
QWebEnginePage.PermissionPolicy.PermissionGrantedByUser)
deny_permission = functools.partial(
page.setFeaturePermission, url, feature,
QWebEnginePage.PermissionPolicy.PermissionDeniedByUser)
permission_str = debug.qenum_key(QWebEnginePage, feature)
if not url.isValid():
# WORKAROUND for https://bugreports.qt.io/browse/QTBUG-85116
is_qtbug = (qtutils.version_check('5.15.0',
compiled=False,
exact=True) and
self._tab.is_private and
feature == QWebEnginePage.Feature.Notifications)
logger = log.webview.debug if is_qtbug else log.webview.warning
logger("Ignoring feature permission {} for invalid URL {}".format(
permission_str, url))
deny_permission()
return
if feature not in self._options:
log.webview.error("Unhandled feature permission {}".format(
permission_str))
deny_permission()
return
if (
feature in [QWebEnginePage.Feature.DesktopVideoCapture,
QWebEnginePage.Feature.DesktopAudioVideoCapture] and
qtutils.version_check('5.13', compiled=False) and
not qtutils.version_check('5.13.2', compiled=False)
):
# WORKAROUND for https://bugreports.qt.io/browse/QTBUG-78016
log.webview.warning("Ignoring desktop sharing request due to "
"crashes in Qt < 5.13.2")
deny_permission()
return
question = shared.feature_permission(
url=url.adjusted(QUrl.UrlFormattingOption.RemovePath),
option=self._options[feature], msg=self._messages[feature],
yes_action=grant_permission, no_action=deny_permission,
abort_on=[self._tab.abort_questions])
if question is not None:
page.featurePermissionRequestCanceled.connect(
functools.partial(self._on_feature_permission_cancelled,
question, url, feature))
```
###Assistant : Ask the user for approval for geolocation/media/etc..
|
18 | def add_find_python(self):
start = 402
for ver in self.versions:
install_path = r"SOFTWARE\Python\PythonCore\%s\InstallPath" % ver
machine_reg = "python.machine." + ver
user_reg = "python.user." + ver
machine_prop = "PYTHON.MACHINE." + ver
user_prop = "PYTHON.USER." + ver
machine_action = "PythonFromMachine" + ver
user_action = "PythonFromUser" + ver
exe_action = "PythonExe" + ver
target_dir_prop = "TARGETDIR" + ver
exe_prop = "PYTHON" + ver
if msilib.Win64:
# type: msidbLocatorTypeRawValue + msidbLocatorType64bit
Type = 2+16
else:
Type = 2
add_data(self.db, "RegLocator",
[(machine_reg, 2, install_path, None, Type),
(user_reg, 1, install_path, None, Type)])
add_data(self.db, "AppSearch",
[(machine_prop, machine_reg),
(user_prop, user_reg)])
add_data(self.db, "CustomAction",
[(machine_action, 51+256, target_dir_prop, "[" + machine_prop + "]"),
(user_action, 51+256, target_dir_prop, "[" + user_prop + "]"),
(exe_action, 51+256, exe_prop, "[" + target_dir_prop + "]\\python.exe"),
])
add_data(self.db, "InstallExecuteSequence",
[(machine_action, machine_prop, start),
(user_action, user_prop, start + 1),
(exe_action, None, start + 2),
])
add_data(self.db, "InstallUISequence",
[(machine_action, machine_prop, start),
(user_action, user_prop, start + 1),
(exe_action, None, start + 2),
])
add_data(self.db, "Condition",
[("Python" + ver, 0, "NOT TARGETDIR" + ver)])
start += 4
assert start < 500
| Adds code to the installer to compute the location of Python.
Properties PYTHON.MACHINE.X.Y and PYTHON.USER.X.Y will be set from the
registry for each version of Python.
Properties TARGETDIRX.Y will be set from PYTHON.USER.X.Y if defined,
else from PYTHON.MACHINE.X.Y.
Properties PYTHONX.Y will be set to TARGETDIRX.Y\\python.exe | 45 | 167 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def add_find_python(self):
start = 402
for ver in self.versions:
install_path = r"SOFTWARE\Python\PythonCore\%s\InstallPath" % ver
machine_reg = "python.machine." + ver
user_reg = "python.user." + ver
machine_prop = "PYTHON.MACHINE." + ver
user_prop = "PYTHON.USER." + ver
machine_action = "PythonFromMachine" + ver
user_action = "PythonFromUser" + ver
exe_action = "PythonExe" + ver
target_dir_prop = "TARGETDIR" + ver
exe_prop = "PYTHON" + ver
if msilib.Win64:
# type: msidbLocatorTypeRawValue + msidbLocatorType64bit
Type = 2+16
else:
Type = 2
add_data(self.db, "RegLocator",
[(machine_reg, 2, install_path, None, Type),
(user_reg, 1, install_path, None, Type)])
add_data(self.db, "AppSearch",
[(machine_prop, machine_reg),
(user_prop, user_reg)])
add_data(self.db, "CustomAction",
[(machine_action, 51+256, target_dir_prop, "[" + machine_prop + "]"),
(user_action, 51+256, target_dir_prop, "[" + user_prop + "]"),
(exe_action, 51+256, exe_prop, "[" + target_dir_prop + "]\\python.exe"),
])
add_data(self.db, "InstallExecuteSequence",
[(machine_action, machine_prop, start),
(user_action, user_prop, start + 1),
(exe_action, None, start + 2),
])
add_data(self.db, "InstallUISequence",
[(machine_action, machine_prop, start),
(user_action, user_prop, start + 1),
(exe_action, None, start + 2),
])
add_data(self.db, "Condition",
[("Python" + ver, 0, "NOT TARGETDIR" + ver)])
start += 4
assert start < 500
```
###Assistant : Adds code to the installer to compute the location of Python.
Properties PYTHON.MACHINE.X.Y and PYTHON.USER.X.Y will be set from the
registry for each version of Python.
Properties TARGETDIRX.Y will be set from PYTHON.USER.X.Y if defined,
else from PYTHON.MACHINE.X.Y.
Properties PYTHONX.Y will be set to TARGETDIRX.Y\\python.exe
|
19 | def write_exports(self, exports):
rf = self.get_distinfo_file(EXPORTS_FILENAME)
with open(rf, 'w') as f:
write_exports(exports, f)
|
Write a dictionary of exports to a file in .ini format.
:param exports: A dictionary of exports, mapping an export category to
a list of :class:`ExportEntry` instances describing the
individual export entries.
| 32 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def write_exports(self, exports):
rf = self.get_distinfo_file(EXPORTS_FILENAME)
with open(rf, 'w') as f:
write_exports(exports, f)
```
###Assistant :
Write a dictionary of exports to a file in .ini format.
:param exports: A dictionary of exports, mapping an export category to
a list of :class:`ExportEntry` instances describing the
individual export entries.
|
20 | def _get_action_handler_with_module_context(self, connection, templar):
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# Check if the module has specified an action handler
module = self._shared_loader_obj.module_loader.find_plugin_with_context(
self._task.action, collection_list=collections
)
if not module.resolved or not module.action_plugin:
module = None
if module is not None:
handler_name = module.action_plugin
# let action plugin override module, fallback to 'normal' action plugin otherwise
elif self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler, module
|
Returns the correct action plugin to handle the requested task action and the module context
| 15 | 191 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_action_handler_with_module_context(self, connection, templar):
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# Check if the module has specified an action handler
module = self._shared_loader_obj.module_loader.find_plugin_with_context(
self._task.action, collection_list=collections
)
if not module.resolved or not module.action_plugin:
module = None
if module is not None:
handler_name = module.action_plugin
# let action plugin override module, fallback to 'normal' action plugin otherwise
elif self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler, module
```
###Assistant :
Returns the correct action plugin to handle the requested task action and the module context
|
21 | def forward(self, y_hat, y, length):
mask = sequence_mask(sequence_length=length, max_len=y.size(1)).unsqueeze(2)
y_norm = sample_wise_min_max(y, mask)
y_hat_norm = sample_wise_min_max(y_hat, mask)
ssim_loss = self.loss_func((y_norm * mask).unsqueeze(1), (y_hat_norm * mask).unsqueeze(1))
if ssim_loss.item() > 1.0:
print(f" > SSIM loss is out-of-range {ssim_loss.item()}, setting it 1.0")
ssim_loss = 1.0
if ssim_loss.item() < 0.0:
print(f" > SSIM loss is out-of-range {ssim_loss.item()}, setting it 0.0")
ssim_loss = 0.0
return ssim_loss
|
Args:
y_hat (tensor): model prediction values.
y (tensor): target values.
length (tensor): length of each sample in a batch for masking.
Shapes:
y_hat: B x T x D
y: B x T x D
length: B
Returns:
loss: An average loss value in range [0, 1] masked by the length.
| 50 | 61 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def forward(self, y_hat, y, length):
mask = sequence_mask(sequence_length=length, max_len=y.size(1)).unsqueeze(2)
y_norm = sample_wise_min_max(y, mask)
y_hat_norm = sample_wise_min_max(y_hat, mask)
ssim_loss = self.loss_func((y_norm * mask).unsqueeze(1), (y_hat_norm * mask).unsqueeze(1))
if ssim_loss.item() > 1.0:
print(f" > SSIM loss is out-of-range {ssim_loss.item()}, setting it 1.0")
ssim_loss = 1.0
if ssim_loss.item() < 0.0:
print(f" > SSIM loss is out-of-range {ssim_loss.item()}, setting it 0.0")
ssim_loss = 0.0
return ssim_loss
```
###Assistant :
Args:
y_hat (tensor): model prediction values.
y (tensor): target values.
length (tensor): length of each sample in a batch for masking.
Shapes:
y_hat: B x T x D
y: B x T x D
length: B
Returns:
loss: An average loss value in range [0, 1] masked by the length.
|
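A hypothetical call sketch: `SSIMLoss` stands in for the class that owns the `forward` above (the class name and its import are assumptions), with shapes matching the docstring:
```Python
import torch

criterion = SSIMLoss()                      # hypothetical: the class owning forward() above
B, T, D = 4, 120, 80                        # batch, frames, feature dim
y_hat = torch.rand(B, T, D)                 # model prediction
y = torch.rand(B, T, D)                     # target
length = torch.tensor([120, 100, 90, 60])   # valid frames per sample
loss = criterion(y_hat, y, length)          # scalar in [0, 1], masked by length
```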
22 | def get_commands():
commands = {name: 'django.core' for name in find_commands(__path__[0])}
if not settings.configured:
return commands
for app_config in reversed(apps.get_app_configs()):
path = os.path.join(app_config.path, 'management')
commands.update({name: app_config.name for name in find_commands(path)})
return commands
|
Return a dictionary mapping command names to their callback applications.
Look for a management.commands package in django.core, and in each
installed application -- if a commands package exists, register all
commands in that package.
Core commands are always included. If a settings module has been
specified, also include user-defined commands.
The dictionary is in the format {command_name: app_name}. Key-value
pairs from this dictionary can then be used in calls to
load_command_class(app_name, command_name)
If a specific version of a command must be loaded (e.g., with the
startapp command), the instantiated module can be placed in the
dictionary in place of the application name.
The dictionary is cached on the first call and reused on subsequent
calls.
| 115 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_commands():
commands = {name: 'django.core' for name in find_commands(__path__[0])}
if not settings.configured:
return commands
for app_config in reversed(apps.get_app_configs()):
path = os.path.join(app_config.path, 'management')
commands.update({name: app_config.name for name in find_commands(path)})
return commands
```
###Assistant :
Return a dictionary mapping command names to their callback applications.
Look for a management.commands package in django.core, and in each
installed application -- if a commands package exists, register all
commands in that package.
Core commands are always included. If a settings module has been
specified, also include user-defined commands.
The dictionary is in the format {command_name: app_name}. Key-value
pairs from this dictionary can then be used in calls to
load_command_class(app_name, command_name)
If a specific version of a command must be loaded (e.g., with the
startapp command), the instantiated module can be placed in the
dictionary in place of the application name.
The dictionary is cached on the first call and reused on subsequent
calls.
|
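A usage sketch, assuming a configured Django project; `get_commands` is importable from `django.core.management`:
```Python
from django.core.management import get_commands

commands = get_commands()      # {command_name: app_name}
print(commands["migrate"])     # 'django.core'
# Commands provided by installed apps map to the app that defines them,
# e.g. {"mycommand": "myapp"} for myapp/management/commands/mycommand.py
```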
23 | def getphraselist(self):
plist = []
while self.pos < len(self.field):
if self.field[self.pos] in self.FWS:
self.pos += 1
elif self.field[self.pos] == '"':
plist.append(self.getquote())
elif self.field[self.pos] == '(':
self.commentlist.append(self.getcomment())
elif self.field[self.pos] in self.phraseends:
break
else:
plist.append(self.getatom(self.phraseends))
return plist
| Parse a sequence of RFC 2822 phrases.
A phrase is a sequence of words, which are in turn either RFC 2822
atoms or quoted-strings. Phrases are canonicalized by squeezing all
runs of continuous whitespace into one space.
| 37 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def getphraselist(self):
plist = []
while self.pos < len(self.field):
if self.field[self.pos] in self.FWS:
self.pos += 1
elif self.field[self.pos] == '"':
plist.append(self.getquote())
elif self.field[self.pos] == '(':
self.commentlist.append(self.getcomment())
elif self.field[self.pos] in self.phraseends:
break
else:
plist.append(self.getatom(self.phraseends))
return plist
```
###Assistant : Parse a sequence of RFC 2822 phrases.
A phrase is a sequence of words, which are in turn either RFC 2822
atoms or quoted-strings. Phrases are canonicalized by squeezing all
runs of continuous whitespace into one space.
|
24 | def set_location(self, location):
# This puts the rectangle into figure-relative coordinates.
if isinstance(location, str):
_api.check_in_list(self._locstrings, location=location)
self._pos = 1. if location in ('top', 'right') else 0.
elif isinstance(location, numbers.Real):
self._pos = location
else:
raise ValueError(
f"location must be {self._locstrings[0]!r}, "
f"{self._locstrings[1]!r}, or a float, not {location!r}")
self._loc = location
if self._orientation == 'x':
# An x-secondary axes is like an inset axes from x = 0 to x = 1 and
# from y = pos to y = pos + eps, in the parent's transAxes coords.
bounds = [0, self._pos, 1., 1e-10]
else: # 'y'
bounds = [self._pos, 0, 1e-10, 1]
# this locator lets the axes move in the parent axes coordinates.
# so it never needs to know where the parent is explicitly in
# figure coordinates.
# it gets called in ax.apply_aspect() (of all places)
self.set_axes_locator(
_TransformedBoundsLocator(bounds, self._parent.transAxes))
|
Set the vertical or horizontal location of the axes in
parent-normalized coordinates.
Parameters
----------
location : {'top', 'bottom', 'left', 'right'} or float
The position to put the secondary axis. Strings can be 'top' or
'bottom' for orientation='x' and 'right' or 'left' for
orientation='y'. A float indicates the relative position on the
parent axes to put the new axes, 0.0 being the bottom (or left)
and 1.0 being the top (or right).
| 71 | 142 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_location(self, location):
# This puts the rectangle into figure-relative coordinates.
if isinstance(location, str):
_api.check_in_list(self._locstrings, location=location)
self._pos = 1. if location in ('top', 'right') else 0.
elif isinstance(location, numbers.Real):
self._pos = location
else:
raise ValueError(
f"location must be {self._locstrings[0]!r}, "
f"{self._locstrings[1]!r}, or a float, not {location!r}")
self._loc = location
if self._orientation == 'x':
# An x-secondary axes is like an inset axes from x = 0 to x = 1 and
# from y = pos to y = pos + eps, in the parent's transAxes coords.
bounds = [0, self._pos, 1., 1e-10]
else: # 'y'
bounds = [self._pos, 0, 1e-10, 1]
# this locator lets the axes move in the parent axes coordinates.
# so it never needs to know where the parent is explicitly in
# figure coordinates.
# it gets called in ax.apply_aspect() (of all places)
self.set_axes_locator(
_TransformedBoundsLocator(bounds, self._parent.transAxes))
```
###Assistant :
Set the vertical or horizontal location of the axes in
parent-normalized coordinates.
Parameters
----------
location : {'top', 'bottom', 'left', 'right'} or float
The position to put the secondary axis. Strings can be 'top' or
'bottom' for orientation='x' and 'right' or 'left' for
orientation='y'. A float indicates the relative position on the
parent axes to put the new axes, 0.0 being the bottom (or left)
and 1.0 being the top (or right).
|
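A usage sketch with Matplotlib's secondary axes, which is where this `set_location` lives; the availability of `secondary_xaxis` in recent Matplotlib versions is assumed:
```Python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
secax = ax.secondary_xaxis('top')   # secondary x-axis at the top of the parent axes
secax.set_location(0.5)             # move it to the vertical middle (parent-normalized)
plt.show()
```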
25 | def length(self):
if self._length_cache is None:
if len(self.call_queue):
self.drain_call_queue()
else:
self._length_cache, self._width_cache = _get_index_and_columns.remote(
self.oid
)
if isinstance(self._length_cache, ObjectIDType):
self._length_cache = ray.get(self._length_cache)
return self._length_cache
|
Get the length of the object wrapped by this partition.
Returns
-------
int
The length of the object.
| 18 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def length(self):
if self._length_cache is None:
if len(self.call_queue):
self.drain_call_queue()
else:
self._length_cache, self._width_cache = _get_index_and_columns.remote(
self.oid
)
if isinstance(self._length_cache, ObjectIDType):
self._length_cache = ray.get(self._length_cache)
return self._length_cache
```
###Assistant :
Get the length of the object wrapped by this partition.
Returns
-------
int
The length of the object.
|
26 | def dmp_l2_norm_squared(f, u, K):
if not u:
return dup_l2_norm_squared(f, K)
v = u - 1
return sum([ dmp_l2_norm_squared(c, v, K) for c in f ])
|
Returns squared l2 norm of a polynomial in ``K[X]``.
Examples
========
>>> from sympy.polys import ring, ZZ
>>> R, x,y = ring("x,y", ZZ)
>>> R.dmp_l2_norm_squared(2*x*y - x - 3)
14
| 30 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def dmp_l2_norm_squared(f, u, K):
if not u:
return dup_l2_norm_squared(f, K)
v = u - 1
return sum([ dmp_l2_norm_squared(c, v, K) for c in f ])
```
###Assistant :
Returns squared l2 norm of a polynomial in ``K[X]``.
Examples
========
>>> from sympy.polys import ring, ZZ
>>> R, x,y = ring("x,y", ZZ)
>>> R.dmp_l2_norm_squared(2*x*y - x - 3)
14
|
27 | def cloud_filter(args, targets): # type: (IntegrationConfig, t.Tuple[IntegrationTarget, ...]) -> t.List[str]
if args.metadata.cloud_config is not None:
return [] # cloud filter already performed prior to delegation
exclude = [] # type: t.List[str]
for provider in get_cloud_providers(args, targets):
provider.filter(targets, exclude)
return exclude
| Return a list of target names to exclude based on the given targets. | 13 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cloud_filter(args, targets): # type: (IntegrationConfig, t.Tuple[IntegrationTarget, ...]) -> t.List[str]
if args.metadata.cloud_config is not None:
return [] # cloud filter already performed prior to delegation
exclude = [] # type: t.List[str]
for provider in get_cloud_providers(args, targets):
provider.filter(targets, exclude)
return exclude
```
###Assistant : Return a list of target names to exclude based on the given targets.
|
28 | def test_upgrade_available_none():
chk_upgrade_out = (
"Last metadata expiration check: 22:5:48 ago on Mon Dec 6 19:26:36 EST 2021."
)
dnf_call = MagicMock(return_value={"retcode": 100, "stdout": chk_upgrade_out})
version_mock = MagicMock(return_value="6.6-2")
with patch("pathlib.Path.is_file", return_value=True):
with patch.dict(
aixpkg.__salt__,
{"cmd.run_all": dnf_call, "config.get": MagicMock(return_value=False)},
), patch.object(aixpkg, "version", version_mock):
result = aixpkg.upgrade_available("info")
assert dnf_call.call_count == 1
libpath_env = {"LIBPATH": "/opt/freeware/lib:/usr/lib"}
dnf_call.assert_any_call(
"/opt/freeware/bin/dnf check-update info",
env=libpath_env,
ignore_retcode=True,
python_shell=False,
)
assert not result
|
test upgrade available where a valid upgrade is not available
| 10 | 64 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_upgrade_available_none():
chk_upgrade_out = (
"Last metadata expiration check: 22:5:48 ago on Mon Dec 6 19:26:36 EST 2021."
)
dnf_call = MagicMock(return_value={"retcode": 100, "stdout": chk_upgrade_out})
version_mock = MagicMock(return_value="6.6-2")
with patch("pathlib.Path.is_file", return_value=True):
with patch.dict(
aixpkg.__salt__,
{"cmd.run_all": dnf_call, "config.get": MagicMock(return_value=False)},
), patch.object(aixpkg, "version", version_mock):
result = aixpkg.upgrade_available("info")
assert dnf_call.call_count == 1
libpath_env = {"LIBPATH": "/opt/freeware/lib:/usr/lib"}
dnf_call.assert_any_call(
"/opt/freeware/bin/dnf check-update info",
env=libpath_env,
ignore_retcode=True,
python_shell=False,
)
assert not result
```
###Assistant :
test upgrade available where a valid upgrade is not available
|
29 | def test_too_many_boosted_releases_do_not_boost_anymore(self):
release_2 = Release.get_or_create(self.project, "2.0")
release_3 = Release.get_or_create(self.project, "3.0")
for release_id in (self.release.id, release_2.id):
self.redis_client.set(f"ds::p:{self.project.id}:r:{release_id}", 1, 60 * 60 * 24)
self.redis_client.hset(
f"ds::p:{self.project.id}:boosted_releases",
release_id,
time(),
)
with self.options(
{
"dynamic-sampling:boost-latest-release": True,
}
):
self.make_release_transaction(
release_version=release_3.version,
environment_name=self.environment1.name,
project_id=self.project.id,
checksum="b" * 32,
timestamp=self.timestamp,
)
assert self.redis_client.hgetall(f"ds::p:{self.project.id}:boosted_releases") == {
str(self.release.id): str(time()),
str(release_2.id): str(time()),
}
assert self.redis_client.get(f"ds::p:{self.project.id}:r:{release_3.id}") is None
|
This test covers the case when we already have too many boosted releases; in this case we want to skip
boosting any more releases
| 25 | 56 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_too_many_boosted_releases_do_not_boost_anymore(self):
release_2 = Release.get_or_create(self.project, "2.0")
release_3 = Release.get_or_create(self.project, "3.0")
for release_id in (self.release.id, release_2.id):
self.redis_client.set(f"ds::p:{self.project.id}:r:{release_id}", 1, 60 * 60 * 24)
self.redis_client.hset(
f"ds::p:{self.project.id}:boosted_releases",
release_id,
time(),
)
with self.options(
{
"dynamic-sampling:boost-latest-release": True,
}
):
self.make_release_transaction(
release_version=release_3.version,
environment_name=self.environment1.name,
project_id=self.project.id,
checksum="b" * 32,
timestamp=self.timestamp,
)
assert self.redis_client.hgetall(f"ds::p:{self.project.id}:boosted_releases") == {
str(self.release.id): str(time()),
str(release_2.id): str(time()),
}
assert self.redis_client.get(f"ds::p:{self.project.id}:r:{release_3.id}") is None
```
###Assistant :
This test covers the case when we already have too many boosted releases; in this case we want to skip
boosting any more releases
|
30 | def hits(G, max_iter=100, tol=1.0e-8, nstart=None, normalized=True):
import numpy as np
import scipy as sp
import scipy.sparse.linalg # call as sp.sparse.linalg
if len(G) == 0:
return {}, {}
A = nx.adjacency_matrix(G, nodelist=list(G), dtype=float)
if nstart is None:
u, s, vt = sp.sparse.linalg.svds(A, k=1, maxiter=max_iter, tol=tol)
else:
nstart = np.array(list(nstart.values()))
u, s, vt = sp.sparse.linalg.svds(A, k=1, v0=nstart, maxiter=max_iter, tol=tol)
a = vt.flatten().real
h = A @ a
if normalized:
h = h / h.sum()
a = a / a.sum()
hubs = dict(zip(G, map(float, h)))
authorities = dict(zip(G, map(float, a)))
return hubs, authorities
| Returns HITS hubs and authorities values for nodes.
The HITS algorithm computes two numbers for a node.
Authorities estimates the node value based on the incoming links.
Hubs estimates the node value based on outgoing links.
Parameters
----------
G : graph
A NetworkX graph
max_iter : integer, optional
Maximum number of iterations in power method.
tol : float, optional
Error tolerance used to check convergence in power method iteration.
nstart : dictionary, optional
Starting value of each node for power method iteration.
normalized : bool (default=True)
Normalize results by the sum of all of the values.
Returns
-------
(hubs,authorities) : two-tuple of dictionaries
Two dictionaries keyed by node containing the hub and authority
values.
Raises
------
PowerIterationFailedConvergence
If the algorithm fails to converge to the specified tolerance
within the specified number of iterations of the power iteration
method.
Examples
--------
>>> G = nx.path_graph(4)
>>> h, a = nx.hits(G)
Notes
-----
The eigenvector calculation is done by the power iteration method
and has no guarantee of convergence. The iteration will stop
after max_iter iterations or an error tolerance of
number_of_nodes(G)*tol has been reached.
The HITS algorithm was designed for directed graphs but this
algorithm does not check if the input graph is directed and will
execute on undirected graphs.
References
----------
.. [1] A. Langville and C. Meyer,
"A survey of eigenvector methods of web information retrieval."
http://citeseer.ist.psu.edu/713792.html
.. [2] Jon Kleinberg,
Authoritative sources in a hyperlinked environment
Journal of the ACM 46 (5): 604-32, 1999.
doi:10.1145/324133.324140.
http://www.cs.cornell.edu/home/kleinber/auth.pdf.
| 248 | 90 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def hits(G, max_iter=100, tol=1.0e-8, nstart=None, normalized=True):
import numpy as np
import scipy as sp
import scipy.sparse.linalg # call as sp.sparse.linalg
if len(G) == 0:
return {}, {}
A = nx.adjacency_matrix(G, nodelist=list(G), dtype=float)
if nstart is None:
u, s, vt = sp.sparse.linalg.svds(A, k=1, maxiter=max_iter, tol=tol)
else:
nstart = np.array(list(nstart.values()))
u, s, vt = sp.sparse.linalg.svds(A, k=1, v0=nstart, maxiter=max_iter, tol=tol)
a = vt.flatten().real
h = A @ a
if normalized:
h = h / h.sum()
a = a / a.sum()
hubs = dict(zip(G, map(float, h)))
authorities = dict(zip(G, map(float, a)))
return hubs, authorities
```
###Assistant : Returns HITS hubs and authorities values for nodes.
The HITS algorithm computes two numbers for a node.
Authorities estimates the node value based on the incoming links.
Hubs estimates the node value based on outgoing links.
Parameters
----------
G : graph
A NetworkX graph
max_iter : integer, optional
Maximum number of iterations in power method.
tol : float, optional
Error tolerance used to check convergence in power method iteration.
nstart : dictionary, optional
Starting value of each node for power method iteration.
normalized : bool (default=True)
Normalize results by the sum of all of the values.
Returns
-------
(hubs,authorities) : two-tuple of dictionaries
Two dictionaries keyed by node containing the hub and authority
values.
Raises
------
PowerIterationFailedConvergence
If the algorithm fails to converge to the specified tolerance
within the specified number of iterations of the power iteration
method.
Examples
--------
>>> G = nx.path_graph(4)
>>> h, a = nx.hits(G)
Notes
-----
The eigenvector calculation is done by the power iteration method
and has no guarantee of convergence. The iteration will stop
after max_iter iterations or an error tolerance of
number_of_nodes(G)*tol has been reached.
The HITS algorithm was designed for directed graphs but this
algorithm does not check if the input graph is directed and will
execute on undirected graphs.
References
----------
.. [1] A. Langville and C. Meyer,
"A survey of eigenvector methods of web information retrieval."
http://citeseer.ist.psu.edu/713792.html
.. [2] Jon Kleinberg,
Authoritative sources in a hyperlinked environment
Journal of the ACM 46 (5): 604-32, 1999.
doi:10.1145/324133.324140.
http://www.cs.cornell.edu/home/kleinber/auth.pdf.
|
31 | def test_connection(self) -> Tuple[bool, str]:
try:
conn = self.get_conn()
conn.pwd
return True, "Connection successfully tested"
except Exception as e:
return False, str(e)
| Test the FTP connection by calling path with directory | 9 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_connection(self) -> Tuple[bool, str]:
try:
conn = self.get_conn()
conn.pwd
return True, "Connection successfully tested"
except Exception as e:
return False, str(e)
```
###Assistant : Test the FTP connection by calling path with directory
|
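For readers unfamiliar with the hook, the same probe-and-report pattern can be sketched with plain ftplib; the host and credentials below are placeholders, not values taken from any real connection.
```Python
# Hedged sketch of the test-connection pattern: run one cheap command and
# report (success, message). Placeholders only — not the Airflow hook itself.
import ftplib
from typing import Tuple

def test_ftp_connection(host: str, user: str, password: str) -> Tuple[bool, str]:
    try:
        conn = ftplib.FTP(host)
        conn.login(user, password)
        conn.pwd()  # any trivial command proves the session is usable
        conn.quit()
        return True, "Connection successfully tested"
    except Exception as e:
        return False, str(e)
```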
32 | def call_price(self, other_args):
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="price",
description=,
)
parser.add_argument(
"-s",
"--symbol",
required="-h" not in other_args,
type=str,
dest="symbol",
help="Symbol of coin to load data for, ~100 symbols are available",
)
if other_args and "-" not in other_args[0][0]:
other_args.insert(0, "-s")
ns_parser = self.parse_known_args_and_warn(parser, other_args)
if ns_parser:
if ns_parser.symbol in pyth_model.ASSETS.keys():
console.print(
"[param]If it takes too long, you can use 'Ctrl + C' to cancel.\n[/param]"
)
pyth_view.display_price(ns_parser.symbol)
else:
console.print("[red]The symbol selected does not exist.[/red]\n")
| Process price command. Display price and interval of confidence in real-time. [Source: Pyth] | 12 | 74 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def call_price(self, other_args):
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="price",
description=,
)
parser.add_argument(
"-s",
"--symbol",
required="-h" not in other_args,
type=str,
dest="symbol",
help="Symbol of coin to load data for, ~100 symbols are available",
)
if other_args and "-" not in other_args[0][0]:
other_args.insert(0, "-s")
ns_parser = self.parse_known_args_and_warn(parser, other_args)
if ns_parser:
if ns_parser.symbol in pyth_model.ASSETS.keys():
console.print(
"[param]If it takes too long, you can use 'Ctrl + C' to cancel.\n[/param]"
)
pyth_view.display_price(ns_parser.symbol)
else:
console.print("[red]The symbol selected does not exist.[/red]\n")
```
###Assistant : Process price command. Display price and interval of confidence in real-time. [Source: Pyth]
|
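The only non-obvious step above is promoting a bare first token to the value of `-s`; a minimal, self-contained sketch of that argparse convenience (the symbol value is made up):
```Python
# If the first token is not a flag, treat it as the value of -s/--symbol.
import argparse

parser = argparse.ArgumentParser(prog="price", add_help=False)
parser.add_argument("-s", "--symbol", dest="symbol", type=str)

other_args = ["btc"]                    # user typed only the symbol
if other_args and "-" not in other_args[0][0]:
    other_args.insert(0, "-s")          # promote it to the -s value
ns, _ = parser.parse_known_args(other_args)
print(ns.symbol)                        # -> btc
```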
33 | def _single_map_nested(args):
function, data_struct, types, rank, disable_tqdm, desc = args
# Singleton first to spare some computation
if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
return function(data_struct)
# Reduce logging to keep things readable in multiprocessing with tqdm
if rank is not None and logging.get_verbosity() < logging.WARNING:
logging.set_verbosity_warning()
# Print at least one thing to fix tqdm in notebooks in multiprocessing
# see https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308
if rank is not None and not disable_tqdm and any("notebook" in tqdm_cls.__name__ for tqdm_cls in tqdm.__mro__):
print(" ", end="", flush=True)
# Loop over single examples or batches and write to buffer/file if examples are to be updated
pbar_iterable = data_struct.items() if isinstance(data_struct, dict) else data_struct
pbar_desc = (desc + " " if desc is not None else "") + "#" + str(rank) if rank is not None else desc
pbar = utils.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
if isinstance(data_struct, dict):
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
else:
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
if isinstance(data_struct, list):
return mapped
elif isinstance(data_struct, tuple):
return tuple(mapped)
else:
return np.array(mapped)
| Apply a function recursively to each element of a nested data struct. | 12 | 182 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _single_map_nested(args):
function, data_struct, types, rank, disable_tqdm, desc = args
# Singleton first to spare some computation
if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
return function(data_struct)
# Reduce logging to keep things readable in multiprocessing with tqdm
if rank is not None and logging.get_verbosity() < logging.WARNING:
logging.set_verbosity_warning()
# Print at least one thing to fix tqdm in notebooks in multiprocessing
# see https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308
if rank is not None and not disable_tqdm and any("notebook" in tqdm_cls.__name__ for tqdm_cls in tqdm.__mro__):
print(" ", end="", flush=True)
# Loop over single examples or batches and write to buffer/file if examples are to be updated
pbar_iterable = data_struct.items() if isinstance(data_struct, dict) else data_struct
pbar_desc = (desc + " " if desc is not None else "") + "#" + str(rank) if rank is not None else desc
pbar = utils.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
if isinstance(data_struct, dict):
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
else:
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
if isinstance(data_struct, list):
return mapped
elif isinstance(data_struct, tuple):
return tuple(mapped)
else:
return np.array(mapped)
```
###Assistant : Apply a function recursively to each element of a nested data struct.
|
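Stripped of the tqdm and multiprocessing plumbing, the traversal above is a plain recursive map over dicts, lists and tuples; the sketch below illustrates only that core idea, not the datasets implementation.
```Python
# Tiny recursive map over nested containers (dict / list / tuple).
def map_nested(fn, data):
    if isinstance(data, dict):
        return {k: map_nested(fn, v) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        mapped = [map_nested(fn, v) for v in data]
        return mapped if isinstance(data, list) else tuple(mapped)
    return fn(data)

print(map_nested(lambda x: x * 2, {"a": [1, 2], "b": (3, {"c": 4})}))
# -> {'a': [2, 4], 'b': (6, {'c': 8})}
```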
34 | def test_unified(self):
self.write_settings("settings_to_diff.py", sdict={"FOO": '"bar"'})
args = ["diffsettings", "--settings=settings_to_diff", "--output=unified"]
out, err = self.run_manage(args)
self.assertNoOutput(err)
self.assertOutput(out, "+ FOO = 'bar'")
self.assertOutput(out, "- SECRET_KEY = ''")
self.assertOutput(out, "+ SECRET_KEY = 'django_tests_secret_key'")
self.assertNotInOutput(out, " APPEND_SLASH = True")
| --output=unified emits settings diff in unified mode. | 7 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_unified(self):
self.write_settings("settings_to_diff.py", sdict={"FOO": '"bar"'})
args = ["diffsettings", "--settings=settings_to_diff", "--output=unified"]
out, err = self.run_manage(args)
self.assertNoOutput(err)
self.assertOutput(out, "+ FOO = 'bar'")
self.assertOutput(out, "- SECRET_KEY = ''")
self.assertOutput(out, "+ SECRET_KEY = 'django_tests_secret_key'")
self.assertNotInOutput(out, " APPEND_SLASH = True")
```
###Assistant : --output=unified emits settings diff in unified mode.
|
35 | def runtime_env(self):
return RuntimeEnv.deserialize(self._get_runtime_env_string())
| Get the runtime env of the current job/worker.
If this API is called in driver or ray client, returns the job level runtime
env.
If this API is called in workers/actors, returns the worker level runtime env.
Returns:
A new ray.runtime_env.RuntimeEnv instance.
To merge from the current runtime env in some specific cases, you can get the
current runtime env by this API and modify it by yourself.
Example:
>>> # Inherit current runtime env, except `env_vars`
>>> Actor.options( # doctest: +SKIP
... runtime_env=ray.get_runtime_context().runtime_env.update(
... {"env_vars": {"A": "a", "B": "b"}})
... ) # doctest: +SKIP
| 95 | 4 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def runtime_env(self):
return RuntimeEnv.deserialize(self._get_runtime_env_string())
```
###Assistant : Get the runtime env of the current job/worker.
If this API is called in driver or ray client, returns the job level runtime
env.
If this API is called in workers/actors, returns the worker level runtime env.
Returns:
A new ray.runtime_env.RuntimeEnv instance.
To merge from the current runtime env in some specific cases, you can get the
current runtime env by this API and modify it by yourself.
Example:
>>> # Inherit current runtime env, except `env_vars`
>>> Actor.options( # doctest: +SKIP
... runtime_env=ray.get_runtime_context().runtime_env.update(
... {"env_vars": {"A": "a", "B": "b"}})
... ) # doctest: +SKIP
|
36 | def sleeper(self, duration):
s = time()
yield
time_to_sleep = duration - (time() - s)
if time_to_sleep > 0:
self.wait(time_to_sleep)
|
Do something and then wait for a given duration minus the time it took doing something
| 16 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def sleeper(self, duration):
s = time()
yield
time_to_sleep = duration - (time() - s)
if time_to_sleep > 0:
self.wait(time_to_sleep)
```
###Assistant :
Do something and then wait for a given duration minus the time it took doing something
|
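The same idea works as a standalone context manager: time the body, then sleep only for whatever is left of the requested duration. A generic sketch, not the class method documented above:
```Python
# Run the body, then sleep for the remainder of `duration` (if any).
import time
from contextlib import contextmanager

@contextmanager
def sleeper(duration: float):
    start = time.time()
    yield
    remaining = duration - (time.time() - start)
    if remaining > 0:
        time.sleep(remaining)

with sleeper(1.0):
    time.sleep(0.3)   # body takes ~0.3 s, so roughly 0.7 s of extra sleep follows
```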
37 | def test_BoundaryNorm():
boundaries = [0, 1.1, 2.2]
vals = [-1, 0, 1, 2, 2.2, 4]
# Without interpolation
expected = [-1, 0, 0, 1, 2, 2]
ncolors = len(boundaries) - 1
bn = mcolors.BoundaryNorm(boundaries, ncolors)
assert_array_equal(bn(vals), expected)
# ncolors != len(boundaries) - 1 triggers interpolation
expected = [-1, 0, 0, 2, 3, 3]
ncolors = len(boundaries)
bn = mcolors.BoundaryNorm(boundaries, ncolors)
assert_array_equal(bn(vals), expected)
# with a single region and interpolation
expected = [-1, 1, 1, 1, 3, 3]
bn = mcolors.BoundaryNorm([0, 2.2], ncolors)
assert_array_equal(bn(vals), expected)
# more boundaries for a third color
boundaries = [0, 1, 2, 3]
vals = [-1, 0.1, 1.1, 2.2, 4]
ncolors = 5
expected = [-1, 0, 2, 4, 5]
bn = mcolors.BoundaryNorm(boundaries, ncolors)
assert_array_equal(bn(vals), expected)
# a scalar as input should not trigger an error and should return a scalar
boundaries = [0, 1, 2]
vals = [-1, 0.1, 1.1, 2.2]
bn = mcolors.BoundaryNorm(boundaries, 2)
expected = [-1, 0, 1, 2]
for v, ex in zip(vals, expected):
ret = bn(v)
assert isinstance(ret, int)
assert_array_equal(ret, ex)
assert_array_equal(bn([v]), ex)
# same with interp
bn = mcolors.BoundaryNorm(boundaries, 3)
expected = [-1, 0, 2, 3]
for v, ex in zip(vals, expected):
ret = bn(v)
assert isinstance(ret, int)
assert_array_equal(ret, ex)
assert_array_equal(bn([v]), ex)
# Clipping
bn = mcolors.BoundaryNorm(boundaries, 3, clip=True)
expected = [0, 0, 2, 2]
for v, ex in zip(vals, expected):
ret = bn(v)
assert isinstance(ret, int)
assert_array_equal(ret, ex)
assert_array_equal(bn([v]), ex)
# Masked arrays
boundaries = [0, 1.1, 2.2]
vals = np.ma.masked_invalid([-1., np.NaN, 0, 1.4, 9])
# Without interpolation
ncolors = len(boundaries) - 1
bn = mcolors.BoundaryNorm(boundaries, ncolors)
expected = np.ma.masked_array([-1, -99, 0, 1, 2], mask=[0, 1, 0, 0, 0])
assert_array_equal(bn(vals), expected)
# With interpolation
bn = mcolors.BoundaryNorm(boundaries, len(boundaries))
expected = np.ma.masked_array([-1, -99, 0, 2, 3], mask=[0, 1, 0, 0, 0])
assert_array_equal(bn(vals), expected)
# Non-trivial masked arrays
vals = np.ma.masked_invalid([np.Inf, np.NaN])
assert np.all(bn(vals).mask)
vals = np.ma.masked_invalid([np.Inf])
assert np.all(bn(vals).mask)
# Incompatible extend and clip
with pytest.raises(ValueError, match="not compatible"):
mcolors.BoundaryNorm(np.arange(4), 5, extend='both', clip=True)
# Too small ncolors argument
with pytest.raises(ValueError, match="ncolors must equal or exceed"):
mcolors.BoundaryNorm(np.arange(4), 2)
with pytest.raises(ValueError, match="ncolors must equal or exceed"):
mcolors.BoundaryNorm(np.arange(4), 3, extend='min')
with pytest.raises(ValueError, match="ncolors must equal or exceed"):
mcolors.BoundaryNorm(np.arange(4), 4, extend='both')
# Testing extend keyword, with interpolation (large cmap)
bounds = [1, 2, 3]
cmap = mpl.colormaps['viridis']
mynorm = mcolors.BoundaryNorm(bounds, cmap.N, extend='both')
refnorm = mcolors.BoundaryNorm([0] + bounds + [4], cmap.N)
x = np.random.randn(100) * 10 + 2
ref = refnorm(x)
ref[ref == 0] = -1
ref[ref == cmap.N - 1] = cmap.N
assert_array_equal(mynorm(x), ref)
# Without interpolation
cmref = mcolors.ListedColormap(['blue', 'red'])
cmref.set_over('black')
cmref.set_under('white')
cmshould = mcolors.ListedColormap(['white', 'blue', 'red', 'black'])
assert mcolors.same_color(cmref.get_over(), 'black')
assert mcolors.same_color(cmref.get_under(), 'white')
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
mynorm = mcolors.BoundaryNorm(bounds, cmshould.N, extend='both')
assert mynorm.vmin == refnorm.vmin
assert mynorm.vmax == refnorm.vmax
assert mynorm(bounds[0] - 0.1) == -1 # under
assert mynorm(bounds[0] + 0.1) == 1 # first bin -> second color
assert mynorm(bounds[-1] - 0.1) == cmshould.N - 2 # next-to-last color
assert mynorm(bounds[-1] + 0.1) == cmshould.N # over
x = [-1, 1.2, 2.3, 9.6]
assert_array_equal(cmshould(mynorm(x)), cmshould([0, 1, 2, 3]))
x = np.random.randn(100) * 10 + 2
assert_array_equal(cmshould(mynorm(x)), cmref(refnorm(x)))
# Just min
cmref = mcolors.ListedColormap(['blue', 'red'])
cmref.set_under('white')
cmshould = mcolors.ListedColormap(['white', 'blue', 'red'])
assert mcolors.same_color(cmref.get_under(), 'white')
assert cmref.N == 2
assert cmshould.N == 3
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
mynorm = mcolors.BoundaryNorm(bounds, cmshould.N, extend='min')
assert mynorm.vmin == refnorm.vmin
assert mynorm.vmax == refnorm.vmax
x = [-1, 1.2, 2.3]
assert_array_equal(cmshould(mynorm(x)), cmshould([0, 1, 2]))
x = np.random.randn(100) * 10 + 2
assert_array_equal(cmshould(mynorm(x)), cmref(refnorm(x)))
# Just max
cmref = mcolors.ListedColormap(['blue', 'red'])
cmref.set_over('black')
cmshould = mcolors.ListedColormap(['blue', 'red', 'black'])
assert mcolors.same_color(cmref.get_over(), 'black')
assert cmref.N == 2
assert cmshould.N == 3
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
mynorm = mcolors.BoundaryNorm(bounds, cmshould.N, extend='max')
assert mynorm.vmin == refnorm.vmin
assert mynorm.vmax == refnorm.vmax
x = [1.2, 2.3, 4]
assert_array_equal(cmshould(mynorm(x)), cmshould([0, 1, 2]))
x = np.random.randn(100) * 10 + 2
assert_array_equal(cmshould(mynorm(x)), cmref(refnorm(x)))
|
GitHub issue #1258: interpolation was failing with numpy
1.7 pre-release.
| 10 | 623 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_BoundaryNorm():
boundaries = [0, 1.1, 2.2]
vals = [-1, 0, 1, 2, 2.2, 4]
# Without interpolation
expected = [-1, 0, 0, 1, 2, 2]
ncolors = len(boundaries) - 1
bn = mcolors.BoundaryNorm(boundaries, ncolors)
assert_array_equal(bn(vals), expected)
# ncolors != len(boundaries) - 1 triggers interpolation
expected = [-1, 0, 0, 2, 3, 3]
ncolors = len(boundaries)
bn = mcolors.BoundaryNorm(boundaries, ncolors)
assert_array_equal(bn(vals), expected)
# with a single region and interpolation
expected = [-1, 1, 1, 1, 3, 3]
bn = mcolors.BoundaryNorm([0, 2.2], ncolors)
assert_array_equal(bn(vals), expected)
# more boundaries for a third color
boundaries = [0, 1, 2, 3]
vals = [-1, 0.1, 1.1, 2.2, 4]
ncolors = 5
expected = [-1, 0, 2, 4, 5]
bn = mcolors.BoundaryNorm(boundaries, ncolors)
assert_array_equal(bn(vals), expected)
# a scalar as input should not trigger an error and should return a scalar
boundaries = [0, 1, 2]
vals = [-1, 0.1, 1.1, 2.2]
bn = mcolors.BoundaryNorm(boundaries, 2)
expected = [-1, 0, 1, 2]
for v, ex in zip(vals, expected):
ret = bn(v)
assert isinstance(ret, int)
assert_array_equal(ret, ex)
assert_array_equal(bn([v]), ex)
# same with interp
bn = mcolors.BoundaryNorm(boundaries, 3)
expected = [-1, 0, 2, 3]
for v, ex in zip(vals, expected):
ret = bn(v)
assert isinstance(ret, int)
assert_array_equal(ret, ex)
assert_array_equal(bn([v]), ex)
# Clipping
bn = mcolors.BoundaryNorm(boundaries, 3, clip=True)
expected = [0, 0, 2, 2]
for v, ex in zip(vals, expected):
ret = bn(v)
assert isinstance(ret, int)
assert_array_equal(ret, ex)
assert_array_equal(bn([v]), ex)
# Masked arrays
boundaries = [0, 1.1, 2.2]
vals = np.ma.masked_invalid([-1., np.NaN, 0, 1.4, 9])
# Without interpolation
ncolors = len(boundaries) - 1
bn = mcolors.BoundaryNorm(boundaries, ncolors)
expected = np.ma.masked_array([-1, -99, 0, 1, 2], mask=[0, 1, 0, 0, 0])
assert_array_equal(bn(vals), expected)
# With interpolation
bn = mcolors.BoundaryNorm(boundaries, len(boundaries))
expected = np.ma.masked_array([-1, -99, 0, 2, 3], mask=[0, 1, 0, 0, 0])
assert_array_equal(bn(vals), expected)
# Non-trivial masked arrays
vals = np.ma.masked_invalid([np.Inf, np.NaN])
assert np.all(bn(vals).mask)
vals = np.ma.masked_invalid([np.Inf])
assert np.all(bn(vals).mask)
# Incompatible extend and clip
with pytest.raises(ValueError, match="not compatible"):
mcolors.BoundaryNorm(np.arange(4), 5, extend='both', clip=True)
# Too small ncolors argument
with pytest.raises(ValueError, match="ncolors must equal or exceed"):
mcolors.BoundaryNorm(np.arange(4), 2)
with pytest.raises(ValueError, match="ncolors must equal or exceed"):
mcolors.BoundaryNorm(np.arange(4), 3, extend='min')
with pytest.raises(ValueError, match="ncolors must equal or exceed"):
mcolors.BoundaryNorm(np.arange(4), 4, extend='both')
# Testing extend keyword, with interpolation (large cmap)
bounds = [1, 2, 3]
cmap = mpl.colormaps['viridis']
mynorm = mcolors.BoundaryNorm(bounds, cmap.N, extend='both')
refnorm = mcolors.BoundaryNorm([0] + bounds + [4], cmap.N)
x = np.random.randn(100) * 10 + 2
ref = refnorm(x)
ref[ref == 0] = -1
ref[ref == cmap.N - 1] = cmap.N
assert_array_equal(mynorm(x), ref)
# Without interpolation
cmref = mcolors.ListedColormap(['blue', 'red'])
cmref.set_over('black')
cmref.set_under('white')
cmshould = mcolors.ListedColormap(['white', 'blue', 'red', 'black'])
assert mcolors.same_color(cmref.get_over(), 'black')
assert mcolors.same_color(cmref.get_under(), 'white')
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
mynorm = mcolors.BoundaryNorm(bounds, cmshould.N, extend='both')
assert mynorm.vmin == refnorm.vmin
assert mynorm.vmax == refnorm.vmax
assert mynorm(bounds[0] - 0.1) == -1 # under
assert mynorm(bounds[0] + 0.1) == 1 # first bin -> second color
assert mynorm(bounds[-1] - 0.1) == cmshould.N - 2 # next-to-last color
assert mynorm(bounds[-1] + 0.1) == cmshould.N # over
x = [-1, 1.2, 2.3, 9.6]
assert_array_equal(cmshould(mynorm(x)), cmshould([0, 1, 2, 3]))
x = np.random.randn(100) * 10 + 2
assert_array_equal(cmshould(mynorm(x)), cmref(refnorm(x)))
# Just min
cmref = mcolors.ListedColormap(['blue', 'red'])
cmref.set_under('white')
cmshould = mcolors.ListedColormap(['white', 'blue', 'red'])
assert mcolors.same_color(cmref.get_under(), 'white')
assert cmref.N == 2
assert cmshould.N == 3
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
mynorm = mcolors.BoundaryNorm(bounds, cmshould.N, extend='min')
assert mynorm.vmin == refnorm.vmin
assert mynorm.vmax == refnorm.vmax
x = [-1, 1.2, 2.3]
assert_array_equal(cmshould(mynorm(x)), cmshould([0, 1, 2]))
x = np.random.randn(100) * 10 + 2
assert_array_equal(cmshould(mynorm(x)), cmref(refnorm(x)))
# Just max
cmref = mcolors.ListedColormap(['blue', 'red'])
cmref.set_over('black')
cmshould = mcolors.ListedColormap(['blue', 'red', 'black'])
assert mcolors.same_color(cmref.get_over(), 'black')
assert cmref.N == 2
assert cmshould.N == 3
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
mynorm = mcolors.BoundaryNorm(bounds, cmshould.N, extend='max')
assert mynorm.vmin == refnorm.vmin
assert mynorm.vmax == refnorm.vmax
x = [1.2, 2.3, 4]
assert_array_equal(cmshould(mynorm(x)), cmshould([0, 1, 2]))
x = np.random.randn(100) * 10 + 2
assert_array_equal(cmshould(mynorm(x)), cmref(refnorm(x)))
```
###Assistant :
GitHub issue #1258: interpolation was failing with numpy
1.7 pre-release.
|
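For orientation, a minimal example of the non-interpolating case exercised above: with `ncolors` equal to the number of bins, each value maps to the index of the bin it falls into.
```Python
# Three bins over [0, 3): each value is mapped to its bin index.
import matplotlib.colors as mcolors

bounds = [0, 1, 2, 3]
norm = mcolors.BoundaryNorm(bounds, ncolors=len(bounds) - 1)
print(norm([0.5, 1.5, 2.5]))   # -> [0 1 2]
```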
38 | def _join_by_index(self, other_modin_frames, how, sort, ignore_index):
if how == "outer":
raise NotImplementedError("outer join is not supported in HDK engine")
lhs = self._maybe_materialize_rowid()
reset_index_names = False
for rhs in other_modin_frames:
rhs = rhs._maybe_materialize_rowid()
if len(lhs._index_cols) != len(rhs._index_cols):
raise NotImplementedError(
"join by indexes with different sizes is not supported"
)
reset_index_names = reset_index_names or lhs._index_cols != rhs._index_cols
condition = lhs._build_equi_join_condition(
rhs, lhs._index_cols, rhs._index_cols
)
exprs = lhs._index_exprs()
new_columns = lhs.columns.to_list()
for col in lhs.columns:
exprs[col] = lhs.ref(col)
for col in rhs.columns:
# Handle duplicating column names here. When user specifies
# suffixes to make a join, actual renaming is done in front-end.
new_col_name = col
rename_idx = 0
while new_col_name in exprs:
new_col_name = f"{col}{rename_idx}"
rename_idx += 1
exprs[new_col_name] = rhs.ref(col)
new_columns.append(new_col_name)
op = JoinNode(
lhs,
rhs,
how=how,
exprs=exprs,
condition=condition,
)
new_columns = Index.__new__(
Index, data=new_columns, dtype=self.columns.dtype
)
lhs = lhs.__constructor__(
dtypes=lhs._dtypes_for_exprs(exprs),
columns=new_columns,
index_cols=lhs._index_cols,
op=op,
force_execution_mode=self._force_execution_mode,
)
if sort:
lhs = lhs.sort_rows(
lhs._index_cols,
ascending=True,
ignore_index=False,
na_position="last",
)
if reset_index_names:
lhs = lhs._reset_index_names()
if ignore_index:
new_columns = Index.__new__(RangeIndex, data=range(len(lhs.columns)))
lhs = lhs._set_columns(new_columns)
return lhs
|
Perform equi-join operation for multiple frames by index columns.
Parameters
----------
other_modin_frames : list of HdkOnNativeDataframe
Frames to join with.
how : str
A type of join.
sort : bool
Sort the result by join keys.
ignore_index : bool
If True then reset column index for the resulting frame.
Returns
-------
HdkOnNativeDataframe
The new frame.
| 55 | 171 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _join_by_index(self, other_modin_frames, how, sort, ignore_index):
if how == "outer":
raise NotImplementedError("outer join is not supported in HDK engine")
lhs = self._maybe_materialize_rowid()
reset_index_names = False
for rhs in other_modin_frames:
rhs = rhs._maybe_materialize_rowid()
if len(lhs._index_cols) != len(rhs._index_cols):
raise NotImplementedError(
"join by indexes with different sizes is not supported"
)
reset_index_names = reset_index_names or lhs._index_cols != rhs._index_cols
condition = lhs._build_equi_join_condition(
rhs, lhs._index_cols, rhs._index_cols
)
exprs = lhs._index_exprs()
new_columns = lhs.columns.to_list()
for col in lhs.columns:
exprs[col] = lhs.ref(col)
for col in rhs.columns:
# Handle duplicating column names here. When user specifies
# suffixes to make a join, actual renaming is done in front-end.
new_col_name = col
rename_idx = 0
while new_col_name in exprs:
new_col_name = f"{col}{rename_idx}"
rename_idx += 1
exprs[new_col_name] = rhs.ref(col)
new_columns.append(new_col_name)
op = JoinNode(
lhs,
rhs,
how=how,
exprs=exprs,
condition=condition,
)
new_columns = Index.__new__(
Index, data=new_columns, dtype=self.columns.dtype
)
lhs = lhs.__constructor__(
dtypes=lhs._dtypes_for_exprs(exprs),
columns=new_columns,
index_cols=lhs._index_cols,
op=op,
force_execution_mode=self._force_execution_mode,
)
if sort:
lhs = lhs.sort_rows(
lhs._index_cols,
ascending=True,
ignore_index=False,
na_position="last",
)
if reset_index_names:
lhs = lhs._reset_index_names()
if ignore_index:
new_columns = Index.__new__(RangeIndex, data=range(len(lhs.columns)))
lhs = lhs._set_columns(new_columns)
return lhs
```
###Assistant :
Perform equi-join operation for multiple frames by index columns.
Parameters
----------
other_modin_frames : list of HdkOnNativeDataframe
Frames to join with.
how : str
A type of join.
sort : bool
Sort the result by join keys.
ignore_index : bool
If True then reset column index for the resulting frame.
Returns
-------
HdkOnNativeDataframe
The new frame.
|
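At the pandas API level, the operation described above corresponds to an index-based equi-join in which clashing column names are disambiguated by renaming; a small illustration on made-up frames:
```Python
# Index-based inner join; the suffix plays the role of the rename loop above.
import pandas as pd

left = pd.DataFrame({"x": [1, 2]}, index=["a", "b"])
right = pd.DataFrame({"x": [10, 20]}, index=["a", "b"])
joined = pd.merge(left, right, left_index=True, right_index=True, suffixes=("", "0"))
print(joined)   # columns: x, x0
```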
39 | def _object2proto(self) -> RunFunctionOrConstructorAction_PB:
return RunFunctionOrConstructorAction_PB(
path=self.path,
args=[serialize(x, to_bytes=True) for x in self.args],
kwargs={k: serialize(v, to_bytes=True) for k, v in self.kwargs.items()},
id_at_location=serialize(self.id_at_location),
address=serialize(self.address),
msg_id=serialize(self.id),
)
| Returns a protobuf serialization of self.
As a requirement of all objects which inherit from Serializable,
this method transforms the current object into the corresponding
Protobuf object so that it can be further serialized.
:return: returns a protobuf object
:rtype: RunFunctionOrConstructorAction_PB
.. note::
This method is purely an internal method. Please use serialize(object) or one of
the other public serialization methods if you wish to serialize an
object.
| 68 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _object2proto(self) -> RunFunctionOrConstructorAction_PB:
return RunFunctionOrConstructorAction_PB(
path=self.path,
args=[serialize(x, to_bytes=True) for x in self.args],
kwargs={k: serialize(v, to_bytes=True) for k, v in self.kwargs.items()},
id_at_location=serialize(self.id_at_location),
address=serialize(self.address),
msg_id=serialize(self.id),
)
```
###Assistant : Returns a protobuf serialization of self.
As a requirement of all objects which inherit from Serializable,
this method transforms the current object into the corresponding
Protobuf object so that it can be further serialized.
:return: returns a protobuf object
:rtype: RunFunctionOrConstructorAction_PB
.. note::
This method is purely an internal method. Please use serialize(object) or one of
the other public serialization methods if you wish to serialize an
object.
|
40 | def truncated_cube_graph(create_using=None):
description = [
"adjacencylist",
"Truncated Cube Graph",
24,
[
[2, 3, 5],
[12, 15],
[4, 5],
[7, 9],
[6],
[17, 19],
[8, 9],
[11, 13],
[10],
[18, 21],
[12, 13],
[15],
[14],
[22, 23],
[16],
[20, 24],
[18, 19],
[21],
[20],
[24],
[22],
[23],
[24],
[],
],
]
G = make_small_undirected_graph(description, create_using)
return G
|
Returns the skeleton of the truncated cube.
The truncated cube is an Archimedean solid with 14 regular
faces (6 octagonal and 8 triangular), 36 edges and 24 nodes [1]_.
The truncated cube is created by truncating (cutting off) the tips
of the cube one third of the way into each edge [2]_.
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Skeleton of the truncated cube
References
----------
.. [1] https://en.wikipedia.org/wiki/Truncated_cube
.. [2] https://www.coolmath.com/reference/polyhedra-truncated-cube
| 91 | 56 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def truncated_cube_graph(create_using=None):
description = [
"adjacencylist",
"Truncated Cube Graph",
24,
[
[2, 3, 5],
[12, 15],
[4, 5],
[7, 9],
[6],
[17, 19],
[8, 9],
[11, 13],
[10],
[18, 21],
[12, 13],
[15],
[14],
[22, 23],
[16],
[20, 24],
[18, 19],
[21],
[20],
[24],
[22],
[23],
[24],
[],
],
]
G = make_small_undirected_graph(description, create_using)
return G
```
###Assistant :
Returns the skeleton of the truncated cube.
The truncated cube is an Archimedean solid with 14 regular
faces (6 octagonal and 8 triangular), 36 edges and 24 nodes [1]_.
The truncated cube is created by truncating (cutting off) the tips
of the cube one third of the way into each edge [2]_.
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Skeleton of the truncated cube
References
----------
.. [1] https://en.wikipedia.org/wiki/Truncated_cube
.. [2] https://www.coolmath.com/reference/polyhedra-truncated-cube
|
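The documented counts are easy to sanity-check through the public NetworkX constructor:
```Python
# 24 nodes and 36 edges, as stated in the docstring.
import networkx as nx

G = nx.truncated_cube_graph()
print(G.number_of_nodes(), G.number_of_edges())   # 24 36
```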
41 | def get_admin_urls_for_registration(self):
urls = ()
for instance in self.modeladmin_instances:
urls += instance.get_admin_urls_for_registration()
return urls
|
Utilised by Wagtail's 'register_admin_urls' hook to register urls for
use by any associated ModelAdmin instances
| 15 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_admin_urls_for_registration(self):
urls = ()
for instance in self.modeladmin_instances:
urls += instance.get_admin_urls_for_registration()
return urls
```
###Assistant :
Utilised by Wagtail's 'register_admin_urls' hook to register urls for
use by any associated ModelAdmin instances
|
42 | def setName(self, name):
self.name = name
self.errmsg = "Expected " + self.name
if __diag__.enable_debug_on_named_expressions:
self.setDebug()
return self
|
Define name for this expression, makes debugging and exception messages clearer.
Example::
Word(nums).parseString("ABC") # -> Exception: Expected W:(0123...) (at char 0), (line:1, col:1)
Word(nums).setName("integer").parseString("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1)
| 34 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def setName(self, name):
self.name = name
self.errmsg = "Expected " + self.name
if __diag__.enable_debug_on_named_expressions:
self.setDebug()
return self
```
###Assistant :
Define name for this expression, makes debugging and exception messages clearer.
Example::
Word(nums).parseString("ABC") # -> Exception: Expected W:(0123...) (at char 0), (line:1, col:1)
Word(nums).setName("integer").parseString("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1)
|
43 | def get_func(cls, key, **kwargs):
if "agg_func" in kwargs:
return cls.inplace_applyier_builder(key, kwargs["agg_func"])
elif "func_dict" in kwargs:
return cls.inplace_applyier_builder(key, kwargs["func_dict"])
else:
return cls.inplace_applyier_builder(key)
|
Extract aggregation function from groupby arguments.
Parameters
----------
key : callable or str
Default aggregation function. If aggregation function is not specified
via groupby arguments, then `key` function is used.
**kwargs : dict
GroupBy arguments that may contain aggregation function.
Returns
-------
callable
Aggregation function.
Notes
-----
There are two ways in which groupby aggregation can be invoked:
1. Explicitly with query compiler method: `qc.groupby_sum()`.
2. By passing aggregation function as an argument: `qc.groupby_agg("sum")`.
Both are going to produce the same result, however in the first case actual aggregation
function can be extracted from the method name, while for the second only from the method arguments.
| 106 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_func(cls, key, **kwargs):
if "agg_func" in kwargs:
return cls.inplace_applyier_builder(key, kwargs["agg_func"])
elif "func_dict" in kwargs:
return cls.inplace_applyier_builder(key, kwargs["func_dict"])
else:
return cls.inplace_applyier_builder(key)
```
###Assistant :
Extract aggregation function from groupby arguments.
Parameters
----------
key : callable or str
Default aggregation function. If aggregation function is not specified
via groupby arguments, then `key` function is used.
**kwargs : dict
GroupBy arguments that may contain aggregation function.
Returns
-------
callable
Aggregation function.
Notes
-----
There are two ways in which groupby aggregation can be invoked:
1. Explicitly with query compiler method: `qc.groupby_sum()`.
2. By passing aggregation function as an argument: `qc.groupby_agg("sum")`.
Both are going to produce the same result, however in the first case actual aggregation
function can be extracted from the method name, while for the second only from the method arguments.
|
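The two invocation styles mentioned in the notes can be seen with plain pandas; both lines below produce the same grouped result.
```Python
# Named method vs. aggregation function passed as an argument.
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "x": [1, 2, 3]})
print(df.groupby("g").sum())        # aggregation implied by the method name
print(df.groupby("g").agg("sum"))   # aggregation passed as an argument
```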
44 | def update_scheduler(self, metric):
self.worker_group.apply_all_operators(
lambda op: [sched.step(metric) for sched in op._schedulers]
)
| Calls ``scheduler.step(metric)`` on all registered schedulers.
This is useful for lr_schedulers such as ``ReduceLROnPlateau``.
| 14 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update_scheduler(self, metric):
self.worker_group.apply_all_operators(
lambda op: [sched.step(metric) for sched in op._schedulers]
)
```
###Assistant : Calls ``scheduler.step(metric)`` on all registered schedulers.
This is useful for lr_schedulers such as ``ReduceLROnPlateau``.
|
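What `scheduler.step(metric)` means for a metric-driven scheduler such as ReduceLROnPlateau, shown with plain PyTorch (the model, optimizer and constant metric are placeholders):
```Python
# ReduceLROnPlateau reacts to the passed metric, not to the epoch count.
import torch

model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=2)

for epoch in range(5):
    val_loss = 1.0              # placeholder; normally computed on a validation set
    scheduler.step(val_loss)    # the call being broadcast to every registered scheduler
```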
45 | def paired_cosine_distances(X, Y):
X, Y = check_paired_arrays(X, Y)
return 0.5 * row_norms(normalize(X) - normalize(Y), squared=True)
PAIRED_DISTANCES = {
"cosine": paired_cosine_distances,
"euclidean": paired_euclidean_distances,
"l2": paired_euclidean_distances,
"l1": paired_manhattan_distances,
"manhattan": paired_manhattan_distances,
"cityblock": paired_manhattan_distances,
}
|
Compute the paired cosine distances between X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : array-like of shape (n_samples, n_features)
An array where each row is a sample and each column is a feature.
Y : array-like of shape (n_samples, n_features)
An array where each row is a sample and each column is a feature.
Returns
-------
distances : ndarray of shape (n_samples,)
Returns the distances between the row vectors of `X`
and the row vectors of `Y`, where `distances[i]` is the
distance between `X[i]` and `Y[i]`.
Notes
-----
The cosine distance is equivalent to half the squared
euclidean distance if each sample is normalized to unit norm.
| 114 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def paired_cosine_distances(X, Y):
X, Y = check_paired_arrays(X, Y)
return 0.5 * row_norms(normalize(X) - normalize(Y), squared=True)
PAIRED_DISTANCES = {
"cosine": paired_cosine_distances,
"euclidean": paired_euclidean_distances,
"l2": paired_euclidean_distances,
"l1": paired_manhattan_distances,
"manhattan": paired_manhattan_distances,
"cityblock": paired_manhattan_distances,
}
```
###Assistant :
Compute the paired cosine distances between X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : array-like of shape (n_samples, n_features)
An array where each row is a sample and each column is a feature.
Y : array-like of shape (n_samples, n_features)
An array where each row is a sample and each column is a feature.
Returns
-------
distances : ndarray of shape (n_samples,)
Returns the distances between the row vectors of `X`
and the row vectors of `Y`, where `distances[i]` is the
distance between `X[i]` and `Y[i]`.
Notes
-----
The cosine distance is equivalent to half the squared
euclidean distance if each sample is normalized to unit norm.
|
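The equivalence stated in the notes can be checked numerically:
```Python
# For unit-normalized rows, cosine distance == 0.5 * squared euclidean distance.
import numpy as np
from sklearn.metrics.pairwise import paired_cosine_distances
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Y = rng.normal(size=(5, 3))
d = paired_cosine_distances(X, Y)
ref = 0.5 * np.sum((normalize(X) - normalize(Y)) ** 2, axis=1)
print(np.allclose(d, ref))   # True
```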
46 | def torchdynamo_smart_context_manager(self):
ctx_manager = contextlib.nullcontext()
if is_torchdynamo_available():
import torchdynamo
from torchdynamo.optimizations.training import aot_autograd_speedup_strategy
if self.args.torchdynamo == "eager":
ctx_manager = torchdynamo.optimize("eager")
elif self.args.torchdynamo == "nvfuser":
ctx_manager = torchdynamo.optimize(aot_autograd_speedup_strategy)
return ctx_manager
|
A helper wrapper that creates an appropriate context manager for `torchdynamo`.
| 11 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def torchdynamo_smart_context_manager(self):
ctx_manager = contextlib.nullcontext()
if is_torchdynamo_available():
import torchdynamo
from torchdynamo.optimizations.training import aot_autograd_speedup_strategy
if self.args.torchdynamo == "eager":
ctx_manager = torchdynamo.optimize("eager")
elif self.args.torchdynamo == "nvfuser":
ctx_manager = torchdynamo.optimize(aot_autograd_speedup_strategy)
return ctx_manager
```
###Assistant :
A helper wrapper that creates an appropriate context manager for `torchdynamo`.
|
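The underlying fallback pattern is generic: return the real context manager only when the optional backend can be imported, and a no-op `nullcontext` otherwise. The sketch below mirrors the "eager" branch above and is an illustration, not the trainer's implementation.
```Python
# Fall back to a no-op context manager when the optional backend is missing.
import contextlib
import importlib.util

def smart_context_manager():
    ctx = contextlib.nullcontext()
    if importlib.util.find_spec("torchdynamo") is not None:
        import torchdynamo
        ctx = torchdynamo.optimize("eager")   # same call as the "eager" branch above
    return ctx

with smart_context_manager():
    pass   # the training step would run here, optimized or not
```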
47 | def check_aug_version(self) -> bool:
self.aug.set("/test/path/testing/arg", "aRgUMeNT")
try:
matches = self.aug.match(
"/test//*[self::arg=~regexp('argument', 'i')]")
except RuntimeError:
self.aug.remove("/test/path")
return False
self.aug.remove("/test/path")
return matches
| Checks that we have recent enough version of libaugeas.
If augeas version is recent enough, it will support case insensitive
regexp matching | 22 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_aug_version(self) -> bool:
self.aug.set("/test/path/testing/arg", "aRgUMeNT")
try:
matches = self.aug.match(
"/test//*[self::arg=~regexp('argument', 'i')]")
except RuntimeError:
self.aug.remove("/test/path")
return False
self.aug.remove("/test/path")
return matches
```
###Assistant : Checks that we have recent enough version of libaugeas.
If augeas version is recent enough, it will support case insensitive
regexp matching
|
48 | def resize_depth(depth, width, height):
depth = torch.squeeze(depth[0, :, :, :]).to("cpu")
depth_resized = cv2.resize(
depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC
)
return depth_resized
| Resize depth map and bring to CPU (numpy).
Args:
depth (tensor): depth
width (int): image width
height (int): image height
Returns:
array: processed depth
| 24 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def resize_depth(depth, width, height):
depth = torch.squeeze(depth[0, :, :, :]).to("cpu")
depth_resized = cv2.resize(
depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC
)
return depth_resized
```
###Assistant : Resize depth map and bring to CPU (numpy).
Args:
depth (tensor): depth
width (int): image width
height (int): image height
Returns:
array: processed depth
|
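A runnable version on a dummy tensor; note that `cv2.resize` takes its target size as (width, height) while the returned array is shaped (height, width).
```Python
# Squeeze a (1, 1, H, W) prediction to (H, W), then resize with OpenCV.
import cv2
import torch

depth = torch.rand(1, 1, 192, 256)                          # dummy depth prediction
depth_np = torch.squeeze(depth[0, :, :, :]).cpu().numpy()   # -> (192, 256)
resized = cv2.resize(depth_np, (640, 480), interpolation=cv2.INTER_CUBIC)
print(resized.shape)                                         # (480, 640)
```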
49 | def comp(z1, z2, tol=None):
r
if type(z2) is str:
if not pure_complex(z1, or_real=True):
raise ValueError('when z2 is a str z1 must be a Number')
return str(z1) == z2
if not z1:
z1, z2 = z2, z1
if not z1:
return True
if not tol:
a, b = z1, z2
if tol == '':
return str(a) == str(b)
if tol is None:
a, b = sympify(a), sympify(b)
if not all(i.is_number for i in (a, b)):
raise ValueError('expecting 2 numbers')
fa = a.atoms(Float)
fb = b.atoms(Float)
if not fa and not fb:
# no floats -- compare exactly
return a == b
# get a to be pure_complex
for _ in range(2):
ca = pure_complex(a, or_real=True)
if not ca:
if fa:
a = a.n(prec_to_dps(min([i._prec for i in fa])))
ca = pure_complex(a, or_real=True)
break
else:
fa, fb = fb, fa
a, b = b, a
cb = pure_complex(b)
if not cb and fb:
b = b.n(prec_to_dps(min([i._prec for i in fb])))
cb = pure_complex(b, or_real=True)
if ca and cb and (ca[1] or cb[1]):
return all(comp(i, j) for i, j in zip(ca, cb))
tol = 10**prec_to_dps(min(a._prec, getattr(b, '_prec', a._prec)))
return int(abs(a - b)*tol) <= 5
diff = abs(z1 - z2)
az1 = abs(z1)
if z2 and az1 > 1:
return diff/az1 <= tol
else:
return diff <= tol
| Return a bool indicating whether the error between z1 and z2
is $\le$ ``tol``.
Examples
========
If ``tol`` is ``None`` then ``True`` will be returned if
:math:`|z1 - z2|\times 10^p \le 5` where $p$ is minimum value of the
decimal precision of each value.
>>> from sympy import comp, pi
>>> pi4 = pi.n(4); pi4
3.142
>>> comp(_, 3.142)
True
>>> comp(pi4, 3.141)
False
>>> comp(pi4, 3.143)
False
A comparison of strings will be made
if ``z1`` is a Number and ``z2`` is a string or ``tol`` is ''.
>>> comp(pi4, 3.1415)
True
>>> comp(pi4, 3.1415, '')
False
When ``tol`` is provided and $z2$ is non-zero and
:math:`|z1| > 1` the error is normalized by :math:`|z1|`:
>>> abs(pi4 - 3.14)/pi4
0.000509791731426756
>>> comp(pi4, 3.14, .001) # difference less than 0.1%
True
>>> comp(pi4, 3.14, .0005) # difference less than 0.1%
False
When :math:`|z1| \le 1` the absolute error is used:
>>> 1/pi4
0.3183
>>> abs(1/pi4 - 0.3183)/(1/pi4)
3.07371499106316e-5
>>> abs(1/pi4 - 0.3183)
9.78393554684764e-6
>>> comp(1/pi4, 0.3183, 1e-5)
True
To see if the absolute error between ``z1`` and ``z2`` is less
than or equal to ``tol``, call this as ``comp(z1 - z2, 0, tol)``
or ``comp(z1 - z2, tol=tol)``:
>>> abs(pi4 - 3.14)
0.00160156249999988
>>> comp(pi4 - 3.14, 0, .002)
True
>>> comp(pi4 - 3.14, 0, .001)
False
| 217 | 213 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def comp(z1, z2, tol=None):
r
if type(z2) is str:
if not pure_complex(z1, or_real=True):
raise ValueError('when z2 is a str z1 must be a Number')
return str(z1) == z2
if not z1:
z1, z2 = z2, z1
if not z1:
return True
if not tol:
a, b = z1, z2
if tol == '':
return str(a) == str(b)
if tol is None:
a, b = sympify(a), sympify(b)
if not all(i.is_number for i in (a, b)):
raise ValueError('expecting 2 numbers')
fa = a.atoms(Float)
fb = b.atoms(Float)
if not fa and not fb:
# no floats -- compare exactly
return a == b
# get a to be pure_complex
for _ in range(2):
ca = pure_complex(a, or_real=True)
if not ca:
if fa:
a = a.n(prec_to_dps(min([i._prec for i in fa])))
ca = pure_complex(a, or_real=True)
break
else:
fa, fb = fb, fa
a, b = b, a
cb = pure_complex(b)
if not cb and fb:
b = b.n(prec_to_dps(min([i._prec for i in fb])))
cb = pure_complex(b, or_real=True)
if ca and cb and (ca[1] or cb[1]):
return all(comp(i, j) for i, j in zip(ca, cb))
tol = 10**prec_to_dps(min(a._prec, getattr(b, '_prec', a._prec)))
return int(abs(a - b)*tol) <= 5
diff = abs(z1 - z2)
az1 = abs(z1)
if z2 and az1 > 1:
return diff/az1 <= tol
else:
return diff <= tol
```
###Assistant : Return a bool indicating whether the error between z1 and z2
is $\le$ ``tol``.
Examples
========
If ``tol`` is ``None`` then ``True`` will be returned if
:math:`|z1 - z2|\times 10^p \le 5` where $p$ is minimum value of the
decimal precision of each value.
>>> from sympy import comp, pi
>>> pi4 = pi.n(4); pi4
3.142
>>> comp(_, 3.142)
True
>>> comp(pi4, 3.141)
False
>>> comp(pi4, 3.143)
False
A comparison of strings will be made
if ``z1`` is a Number and ``z2`` is a string or ``tol`` is ''.
>>> comp(pi4, 3.1415)
True
>>> comp(pi4, 3.1415, '')
False
When ``tol`` is provided and $z2$ is non-zero and
:math:`|z1| > 1` the error is normalized by :math:`|z1|`:
>>> abs(pi4 - 3.14)/pi4
0.000509791731426756
>>> comp(pi4, 3.14, .001) # difference less than 0.1%
True
>>> comp(pi4, 3.14, .0005) # difference less than 0.1%
False
When :math:`|z1| \le 1` the absolute error is used:
>>> 1/pi4
0.3183
>>> abs(1/pi4 - 0.3183)/(1/pi4)
3.07371499106316e-5
>>> abs(1/pi4 - 0.3183)
9.78393554684764e-6
>>> comp(1/pi4, 0.3183, 1e-5)
True
To see if the absolute error between ``z1`` and ``z2`` is less
than or equal to ``tol``, call this as ``comp(z1 - z2, 0, tol)``
or ``comp(z1 - z2, tol=tol)``:
>>> abs(pi4 - 3.14)
0.00160156249999988
>>> comp(pi4 - 3.14, 0, .002)
True
>>> comp(pi4 - 3.14, 0, .001)
False
|
50 | def _parse_img_level_ann(self, image_level_ann_file):
item_lists = defaultdict(list)
with self.file_client.get_local_path(
image_level_ann_file) as local_path:
with open(local_path, 'r') as f:
reader = csv.reader(f)
i = -1
for line in reader:
i += 1
if i == 0:
continue
else:
img_id = line[0]
label_id = line[1]
assert label_id in self.label_id_mapping
image_level_label = int(
self.label_id_mapping[label_id])
confidence = float(line[2])
item_lists[img_id].append(
dict(
image_level_label=image_level_label,
confidence=confidence))
return item_lists
| Parse image level annotations from csv style ann_file.
Args:
image_level_ann_file (str): CSV style image level annotation
file path.
Returns:
defaultdict[list[dict]]: Annotations where item of the defaultdict
indicates an image, each of which has (n) dicts.
Keys of dicts are:
- `image_level_label` (int): of shape 1.
- `confidence` (float): of shape 1.
| 51 | 58 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _parse_img_level_ann(self, image_level_ann_file):
item_lists = defaultdict(list)
with self.file_client.get_local_path(
image_level_ann_file) as local_path:
with open(local_path, 'r') as f:
reader = csv.reader(f)
i = -1
for line in reader:
i += 1
if i == 0:
continue
else:
img_id = line[0]
label_id = line[1]
assert label_id in self.label_id_mapping
image_level_label = int(
self.label_id_mapping[label_id])
confidence = float(line[2])
item_lists[img_id].append(
dict(
image_level_label=image_level_label,
confidence=confidence))
return item_lists
```
###Assistant : Parse image level annotations from csv style ann_file.
Args:
image_level_ann_file (str): CSV style image level annotation
file path.
Returns:
defaultdict[list[dict]]: Annotations where item of the defaultdict
indicates an image, each of which has (n) dicts.
Keys of dicts are:
- `image_level_label` (int): of shape 1.
- `confidence` (float): of shape 1.
|
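Without the file-client plumbing, the parsing is a standard "skip the header, group rows by key" loop. A self-contained sketch on inline CSV data (the label ids are made up, and the label-to-int mapping is omitted):
```Python
# Group image-level annotation rows by image id, skipping the CSV header.
import csv
import io
from collections import defaultdict

csv_text = "ImageID,LabelName,Confidence\nimg1,/m/01,1.0\nimg1,/m/02,0.0\nimg2,/m/01,1.0\n"
item_lists = defaultdict(list)
reader = csv.reader(io.StringIO(csv_text))
next(reader)                                  # drop the header row
for img_id, label_id, confidence in reader:
    item_lists[img_id].append(dict(label=label_id, confidence=float(confidence)))
print(dict(item_lists))
```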
51 | def logical_and(self, a, b):
a = _convert_other(a, raiseit=True)
return a.logical_and(b, context=self)
| Applies the logical operation 'and' between each operand's digits.
The operands must be both logical numbers.
>>> ExtendedContext.logical_and(Decimal('0'), Decimal('0'))
Decimal('0')
>>> ExtendedContext.logical_and(Decimal('0'), Decimal('1'))
Decimal('0')
>>> ExtendedContext.logical_and(Decimal('1'), Decimal('0'))
Decimal('0')
>>> ExtendedContext.logical_and(Decimal('1'), Decimal('1'))
Decimal('1')
>>> ExtendedContext.logical_and(Decimal('1100'), Decimal('1010'))
Decimal('1000')
>>> ExtendedContext.logical_and(Decimal('1111'), Decimal('10'))
Decimal('10')
>>> ExtendedContext.logical_and(110, 1101)
Decimal('100')
>>> ExtendedContext.logical_and(Decimal(110), 1101)
Decimal('100')
>>> ExtendedContext.logical_and(110, Decimal(1101))
Decimal('100')
| 52 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def logical_and(self, a, b):
a = _convert_other(a, raiseit=True)
return a.logical_and(b, context=self)
```
###Assistant : Applies the logical operation 'and' between each operand's digits.
The operands must be both logical numbers.
>>> ExtendedContext.logical_and(Decimal('0'), Decimal('0'))
Decimal('0')
>>> ExtendedContext.logical_and(Decimal('0'), Decimal('1'))
Decimal('0')
>>> ExtendedContext.logical_and(Decimal('1'), Decimal('0'))
Decimal('0')
>>> ExtendedContext.logical_and(Decimal('1'), Decimal('1'))
Decimal('1')
>>> ExtendedContext.logical_and(Decimal('1100'), Decimal('1010'))
Decimal('1000')
>>> ExtendedContext.logical_and(Decimal('1111'), Decimal('10'))
Decimal('10')
>>> ExtendedContext.logical_and(110, 1101)
Decimal('100')
>>> ExtendedContext.logical_and(Decimal(110), 1101)
Decimal('100')
>>> ExtendedContext.logical_and(110, Decimal(1101))
Decimal('100')
|
52 | def decrement_part_small(self, part, ub):
if self.lpart >= ub - 1:
self.p1 += 1 # increment to keep track of usefulness of tests
return False
plen = len(part)
for j in range(plen - 1, -1, -1):
# Knuth's mod, (answer to problem 7.2.1.5.69)
if j == 0 and (part[0].v - 1)*(ub - self.lpart) < part[0].u:
self.k1 += 1
return False
if j == 0 and part[j].v > 1 or j > 0 and part[j].v > 0:
# found val to decrement
part[j].v -= 1
# Reset trailing parts back to maximum
for k in range(j + 1, plen):
part[k].v = part[k].u
# Have now decremented part, but are we doomed to
# failure when it is expanded? Check one oddball case
# that turns out to be surprisingly common - exactly
# enough room to expand the leading component, but no
# room for the second component, which has v=0.
if (plen > 1 and part[1].v == 0 and
(part[0].u - part[0].v) ==
((ub - self.lpart - 1) * part[0].v)):
self.k2 += 1
self.db_trace("Decrement fails test 3")
return False
return True
return False
| Decrements part (a subrange of pstack), if possible, returning
True iff the part was successfully decremented.
Parameters
==========
part
part to be decremented (topmost part on the stack)
ub
the maximum number of parts allowed in a partition
returned by the calling traversal.
Notes
=====
The goal of this modification of the ordinary decrement method
is to fail (meaning that the subtree rooted at this part is to
be skipped) when it can be proved that this part can only have
child partitions which are larger than allowed by ``ub``. If a
decision is made to fail, it must be accurate, otherwise the
enumeration will miss some partitions. But, it is OK not to
capture all the possible failures -- if a part is passed that
should not be, the resulting too-large partitions are filtered
by the enumeration one level up. However, as is usual in
constrained enumerations, failing early is advantageous.
The tests used by this method catch the most common cases,
although this implementation is by no means the last word on
this problem. The tests include:
1) ``lpart`` must be less than ``ub`` by at least 2. This is because
once a part has been decremented, the partition
will gain at least one child in the spread step.
2) If the leading component of the part is about to be
decremented, check for how many parts will be added in
order to use up the unallocated multiplicity in that
leading component, and fail if this number is greater than
allowed by ``ub``. (See code for the exact expression.) This
test is given in the answer to Knuth's problem 7.2.1.5.69.
3) If there is *exactly* enough room to expand the leading
component by the above test, check the next component (if
it exists) once decrementing has finished. If this has
``v == 0``, this next component will push the expansion over the
limit by 1, so fail.
| 319 | 182 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def decrement_part_small(self, part, ub):
if self.lpart >= ub - 1:
self.p1 += 1 # increment to keep track of usefulness of tests
return False
plen = len(part)
for j in range(plen - 1, -1, -1):
# Knuth's mod, (answer to problem 7.2.1.5.69)
if j == 0 and (part[0].v - 1)*(ub - self.lpart) < part[0].u:
self.k1 += 1
return False
if j == 0 and part[j].v > 1 or j > 0 and part[j].v > 0:
# found val to decrement
part[j].v -= 1
# Reset trailing parts back to maximum
for k in range(j + 1, plen):
part[k].v = part[k].u
# Have now decremented part, but are we doomed to
# failure when it is expanded? Check one oddball case
# that turns out to be surprisingly common - exactly
# enough room to expand the leading component, but no
# room for the second component, which has v=0.
if (plen > 1 and part[1].v == 0 and
(part[0].u - part[0].v) ==
((ub - self.lpart - 1) * part[0].v)):
self.k2 += 1
self.db_trace("Decrement fails test 3")
return False
return True
return False
```
###Assistant : Decrements part (a subrange of pstack), if possible, returning
True iff the part was successfully decremented.
Parameters
==========
part
part to be decremented (topmost part on the stack)
ub
the maximum number of parts allowed in a partition
returned by the calling traversal.
Notes
=====
The goal of this modification of the ordinary decrement method
is to fail (meaning that the subtree rooted at this part is to
be skipped) when it can be proved that this part can only have
child partitions which are larger than allowed by ``ub``. If a
decision is made to fail, it must be accurate, otherwise the
enumeration will miss some partitions. But, it is OK not to
capture all the possible failures -- if a part is passed that
should not be, the resulting too-large partitions are filtered
by the enumeration one level up. However, as is usual in
constrained enumerations, failing early is advantageous.
The tests used by this method catch the most common cases,
although this implementation is by no means the last word on
this problem. The tests include:
1) ``lpart`` must be less than ``ub`` by at least 2. This is because
once a part has been decremented, the partition
will gain at least one child in the spread step.
2) If the leading component of the part is about to be
decremented, check for how many parts will be added in
order to use up the unallocated multiplicity in that
leading component, and fail if this number is greater than
allowed by ``ub``. (See code for the exact expression.) This
test is given in the answer to Knuth's problem 7.2.1.5.69.
3) If there is *exactly* enough room to expand the leading
component by the above test, check the next component (if
it exists) once decrementing has finished. If this has
``v == 0``, this next component will push the expansion over the
limit by 1, so fail.
|
53 | def get_node_id(self) -> str:
node_id = self.worker.current_node_id
assert not node_id.is_nil()
return node_id.hex()
| Get current node ID for this worker or driver.
Node ID is the id of the node that your driver, task, or actor runs on.
The ID will be in hex format.
Returns:
A node id in hex format for this worker or driver.
| 43 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_node_id(self) -> str:
node_id = self.worker.current_node_id
assert not node_id.is_nil()
return node_id.hex()
```
###Assistant : Get current node ID for this worker or driver.
Node ID is the id of the node that your driver, task, or actor runs on.
The ID will be in hex format.
Returns:
A node id in hex format for this worker or driver.
|
54 | def list_option(*, info):
return _option(
info,
"List options",
lambda opt: (isinstance(info.config.get_obj(opt.name), list) and
not opt.no_autoconfig)
)
| A CompletionModel filled with settings whose values are lists. | 9 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def list_option(*, info):
return _option(
info,
"List options",
lambda opt: (isinstance(info.config.get_obj(opt.name), list) and
not opt.no_autoconfig)
)
```
###Assistant : A CompletionModel filled with settings whose values are lists.
|
55 | def get_instance_from_config_with_end_date(config, query):
start_date = "2021-03-04"
end_date = "2021-04-04"
conversion_window_days = 14
google_api = GoogleAds(credentials=config["credentials"], customer_id=config["customer_id"])
instance = CustomQuery(
api=google_api,
conversion_window_days=conversion_window_days,
start_date=start_date,
end_date=end_date,
time_zone="local",
custom_query_config={"query": query, "table_name": "whatever_table"},
)
return instance
@pytest.mark.parametrize(
"query, fields",
[
(
,
["campaign.id", "campaign.name", "campaign.status", "metrics.impressions"],
),
(
,
["campaign.accessible_bidding_strategy", "segments.ad_destination_type", "campaign.start_date", "campaign.end_date"],
),
(, []),
],
) |
SELecT
campaign.id,
campaign.name,
campaign.status,
metrics.impressions FROM campaign
wheRe campaign.status = 'PAUSED'
AND metrics.impressions > 100
order by campaign.status
SELECT
campaign.accessible_bidding_strategy,
segments.ad_destination_type,
campaign.start_date,
campaign.end_date
FROM campaign
selet aasdasd from aaa | 29 | 53 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_instance_from_config_with_end_date(config, query):
start_date = "2021-03-04"
end_date = "2021-04-04"
conversion_window_days = 14
google_api = GoogleAds(credentials=config["credentials"], customer_id=config["customer_id"])
instance = CustomQuery(
api=google_api,
conversion_window_days=conversion_window_days,
start_date=start_date,
end_date=end_date,
time_zone="local",
custom_query_config={"query": query, "table_name": "whatever_table"},
)
return instance
@pytest.mark.parametrize(
"query, fields",
[
(
,
["campaign.id", "campaign.name", "campaign.status", "metrics.impressions"],
),
(
,
["campaign.accessible_bidding_strategy", "segments.ad_destination_type", "campaign.start_date", "campaign.end_date"],
),
(, []),
],
)
```
###Assistant :
SELecT
campaign.id,
campaign.name,
campaign.status,
metrics.impressions FROM campaign
wheRe campaign.status = 'PAUSED'
AND metrics.impressions > 100
order by campaign.status
SELECT
campaign.accessible_bidding_strategy,
segments.ad_destination_type,
campaign.start_date,
campaign.end_date
FROM campaign
selet aasdasd from aaa
|
56 | def node_degree_xy(G, x="out", y="in", weight=None, nodes=None):
nodes = set(G) if nodes is None else set(nodes)
if G.is_directed():
direction = {"out": G.out_degree, "in": G.in_degree}
xdeg = direction[x]
ydeg = direction[y]
else:
xdeg = ydeg = G.degree
for u, degu in xdeg(nodes, weight=weight):
# use G.edges to treat multigraphs correctly
neighbors = (nbr for _, nbr in G.edges(u) if nbr in nodes)
for _, degv in ydeg(neighbors, weight=weight):
yield degu, degv
| Generate node degree-degree pairs for edges in G.
Parameters
----------
G: NetworkX graph
x: string ('in','out')
The degree type for source node (directed graphs only).
y: string ('in','out')
The degree type for target node (directed graphs only).
weight: string or None, optional (default=None)
The edge attribute that holds the numerical value used
as a weight. If None, then each edge has weight 1.
The degree is the sum of the edge weights adjacent to the node.
nodes: list or iterable (optional)
Use only edges that are adjacent to the specified nodes.
The default is all nodes.
Returns
-------
(x, y): 2-tuple
Generates 2-tuple of (degree, degree) values.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edge(1, 2)
>>> list(nx.node_degree_xy(G, x="out", y="in"))
[(1, 1)]
>>> list(nx.node_degree_xy(G, x="in", y="out"))
[(0, 0)]
Notes
-----
For undirected graphs each edge is produced twice, once for each edge
representation (u, v) and (v, u), with the exception of self-loop edges
which only appear once.
| 157 | 69 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def node_degree_xy(G, x="out", y="in", weight=None, nodes=None):
nodes = set(G) if nodes is None else set(nodes)
if G.is_directed():
direction = {"out": G.out_degree, "in": G.in_degree}
xdeg = direction[x]
ydeg = direction[y]
else:
xdeg = ydeg = G.degree
for u, degu in xdeg(nodes, weight=weight):
# use G.edges to treat multigraphs correctly
neighbors = (nbr for _, nbr in G.edges(u) if nbr in nodes)
for _, degv in ydeg(neighbors, weight=weight):
yield degu, degv
```
###Assistant : Generate node degree-degree pairs for edges in G.
Parameters
----------
G: NetworkX graph
x: string ('in','out')
The degree type for source node (directed graphs only).
y: string ('in','out')
The degree type for target node (directed graphs only).
weight: string or None, optional (default=None)
The edge attribute that holds the numerical value used
as a weight. If None, then each edge has weight 1.
The degree is the sum of the edge weights adjacent to the node.
nodes: list or iterable (optional)
Use only edges that are adjacent to the specified nodes.
The default is all nodes.
Returns
-------
(x, y): 2-tuple
Generates 2-tuple of (degree, degree) values.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edge(1, 2)
>>> list(nx.node_degree_xy(G, x="out", y="in"))
[(1, 1)]
>>> list(nx.node_degree_xy(G, x="in", y="out"))
[(0, 0)]
Notes
-----
For undirected graphs each edge is produced twice, once for each edge
representation (u, v) and (v, u), with the exception of self-loop edges
which only appear once.
|
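A minimal sketch (not part of the row above) of the undirected double-counting described in the Notes section; it assumes networkx is installed and that `node_degree_xy` is exposed at the package level, as in recent releases.
```Python
import networkx as nx

# One undirected edge: both endpoints have degree 1, and the pair is
# generated twice, once per orientation (u, v) and (v, u).
G = nx.Graph()
G.add_edge(1, 2)
print(list(nx.node_degree_xy(G)))  # [(1, 1), (1, 1)]
```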
57 | def validate(self, num_steps=None, profile=False, reduce_results=True, info=None):
worker_stats = self.worker_group.validate(
num_steps=num_steps, profile=profile, info=info
)
if reduce_results:
return self._process_stats(worker_stats)
else:
return worker_stats
| Evaluates the model on the validation data set.
Args:
num_steps (int): Number of batches to compute update steps on
per worker. This corresponds also to the number of times
``TrainingOperator.validate_batch`` is called per worker.
profile (bool): Returns time stats for the evaluation procedure.
reduce_results (bool): Whether to average all metrics across
all workers into one dict. If a metric is a non-numerical
value (or nested dictionaries), one value will be randomly
selected among the workers. If False, returns a list of dicts.
info (dict): Optional dictionary passed to the training
operator for `validate` and `validate_batch`.
Returns:
A dictionary of metrics for validation.
You can provide custom metrics by passing in a custom
``training_operator_cls``.
| 113 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def validate(self, num_steps=None, profile=False, reduce_results=True, info=None):
worker_stats = self.worker_group.validate(
num_steps=num_steps, profile=profile, info=info
)
if reduce_results:
return self._process_stats(worker_stats)
else:
return worker_stats
```
###Assistant : Evaluates the model on the validation data set.
Args:
num_steps (int): Number of batches to compute update steps on
per worker. This corresponds also to the number of times
``TrainingOperator.validate_batch`` is called per worker.
profile (bool): Returns time stats for the evaluation procedure.
reduce_results (bool): Whether to average all metrics across
all workers into one dict. If a metric is a non-numerical
value (or nested dictionaries), one value will be randomly
selected among the workers. If False, returns a list of dicts.
info (dict): Optional dictionary passed to the training
operator for `validate` and `validate_batch`.
Returns:
A dictionary of metrics for validation.
You can provide custom metrics by passing in a custom
``training_operator_cls``.
|
58 | def set_raw_scale(self, in_, scale):
self.__check_input(in_)
self.raw_scale[in_] = scale
|
Set the scale of raw features s.t. the input blob = input * scale.
While Python represents images in [0, 1], certain Caffe models
like CaffeNet and AlexNet represent images in [0, 255] so the raw_scale
of these models must be 255.
Parameters
----------
in_ : which input to assign this scale factor
scale : scale coefficient
| 57 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_raw_scale(self, in_, scale):
self.__check_input(in_)
self.raw_scale[in_] = scale
```
###Assistant :
Set the scale of raw features s.t. the input blob = input * scale.
While Python represents images in [0, 1], certain Caffe models
like CaffeNet and AlexNet represent images in [0, 255] so the raw_scale
of these models must be 255.
Parameters
----------
in_ : which input to assign this scale factor
scale : scale coefficient
|
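A hedged usage sketch of the behaviour documented above, assuming the standard pycaffe `caffe.io.Transformer` API; the blob name `'data'` and the input shape are illustrative only.
```Python
import caffe

# CaffeNet/AlexNet expect pixels in [0, 255], while Python image loaders
# return [0, 1], so the raw scale for the 'data' blob is set to 255.
transformer = caffe.io.Transformer({"data": (1, 3, 227, 227)})
transformer.set_raw_scale("data", 255)  # input blob = input * 255
```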
59 | def test_add_rule_to_best_shard():
# If we start with an empty list, then add to first shard
shards: List[List[bazel_sharding.BazelRule]] = [list() for _ in range(4)]
optimum = 600
rule = bazel_sharding.BazelRule("mock", "medium")
bazel_sharding.add_rule_to_best_shard(rule, shards, optimum)
assert shards[0][0] == rule
assert all(not shard for shard in shards[1:])
# Add to first shard below optimum
old_rule = bazel_sharding.BazelRule("mock", "medium")
shards: List[List[bazel_sharding.BazelRule]] = [[old_rule] for _ in range(4)]
shards[3] = []
optimum = old_rule.actual_timeout_s
rule = bazel_sharding.BazelRule("mock", "small")
bazel_sharding.add_rule_to_best_shard(rule, shards, optimum)
assert shards[3][0] == rule
assert all(shard[-1] == old_rule for shard in shards[0:3])
# If all shards are above or equal optimum, add to the one with the smallest
# difference
old_rule = bazel_sharding.BazelRule("mock", "large")
shards: List[List[bazel_sharding.BazelRule]] = [[old_rule] for _ in range(4)]
optimum = old_rule.actual_timeout_s
old_rule_medium = bazel_sharding.BazelRule("mock", "medium")
shards[3][0] = old_rule_medium
rule = bazel_sharding.BazelRule("mock", "small")
bazel_sharding.add_rule_to_best_shard(rule, shards, optimum)
assert shards[3][0] == old_rule_medium
assert shards[3][-1] == rule
assert all(shard[-1] == old_rule for shard in shards[0:3])
| Test that the best shard in optimal strategy is chosen correctly. | 11 | 151 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_add_rule_to_best_shard():
# If we start with an empty list, then add to first shard
shards: List[List[bazel_sharding.BazelRule]] = [list() for _ in range(4)]
optimum = 600
rule = bazel_sharding.BazelRule("mock", "medium")
bazel_sharding.add_rule_to_best_shard(rule, shards, optimum)
assert shards[0][0] == rule
assert all(not shard for shard in shards[1:])
# Add to first shard below optimum
old_rule = bazel_sharding.BazelRule("mock", "medium")
shards: List[List[bazel_sharding.BazelRule]] = [[old_rule] for _ in range(4)]
shards[3] = []
optimum = old_rule.actual_timeout_s
rule = bazel_sharding.BazelRule("mock", "small")
bazel_sharding.add_rule_to_best_shard(rule, shards, optimum)
assert shards[3][0] == rule
assert all(shard[-1] == old_rule for shard in shards[0:3])
# If all shards are above or equal optimum, add to the one with the smallest
# difference
old_rule = bazel_sharding.BazelRule("mock", "large")
shards: List[List[bazel_sharding.BazelRule]] = [[old_rule] for _ in range(4)]
optimum = old_rule.actual_timeout_s
old_rule_medium = bazel_sharding.BazelRule("mock", "medium")
shards[3][0] = old_rule_medium
rule = bazel_sharding.BazelRule("mock", "small")
bazel_sharding.add_rule_to_best_shard(rule, shards, optimum)
assert shards[3][0] == old_rule_medium
assert shards[3][-1] == rule
assert all(shard[-1] == old_rule for shard in shards[0:3])
```
###Assistant : Test that the best shard in optimal strategy is chosen correctly.
|
60 | def async_heartbeat(self) -> None:
self._computed_state = False
self._restart_timer()
self.async_write_ha_state()
| Mark the device as online, and restart the 25 hour timer.
This gets called when the heartbeat node beats, but also when the
parent sensor sends any events, as we can trust that to mean the device
is online. This mitigates the risk of false positives due to a single
missed heartbeat event.
| 53 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def async_heartbeat(self) -> None:
self._computed_state = False
self._restart_timer()
self.async_write_ha_state()
```
###Assistant : Mark the device as online, and restart the 25 hour timer.
This gets called when the heartbeat node beats, but also when the
parent sensor sends any events, as we can trust that to mean the device
is online. This mitigates the risk of false positives due to a single
missed heartbeat event.
|
61 | def test_assert_series_equal_interval_dtype_mismatch():
# https://github.com/pandas-dev/pandas/issues/32747
left = Series([pd.Interval(0, 1, "right")], dtype="interval")
right = left.astype(object)
msg =
tm.assert_series_equal(left, right, check_dtype=False)
with pytest.raises(AssertionError, match=msg):
tm.assert_series_equal(left, right, check_dtype=True)
| Attributes of Series are different
Attribute "dtype" are different
\\[left\\]: interval\\[int64, right\\]
\\[right\\]: object | 14 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_assert_series_equal_interval_dtype_mismatch():
# https://github.com/pandas-dev/pandas/issues/32747
left = Series([pd.Interval(0, 1, "right")], dtype="interval")
right = left.astype(object)
msg =
tm.assert_series_equal(left, right, check_dtype=False)
with pytest.raises(AssertionError, match=msg):
tm.assert_series_equal(left, right, check_dtype=True)
```
###Assistant : Attributes of Series are different
Attribute "dtype" are different
\\[left\\]: interval\\[int64, right\\]
\\[right\\]: object
|
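A small reproduction of the behaviour the test above covers, assuming pandas is installed; note that `pandas._testing` is a semi-private module, so the import path may change between versions.
```Python
import pandas as pd
import pandas._testing as tm

left = pd.Series([pd.Interval(0, 1, "right")], dtype="interval")
right = left.astype(object)

tm.assert_series_equal(left, right, check_dtype=False)  # dtype mismatch tolerated
try:
    tm.assert_series_equal(left, right, check_dtype=True)
except AssertionError as err:
    print(err)  # reports the interval vs object dtype difference
```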
62 | def rc_file_defaults():
# Deprecation warnings were already handled when creating rcParamsOrig, no
# need to reemit them here.
with _api.suppress_matplotlib_deprecation_warning():
from .style.core import STYLE_BLACKLIST
rcParams.update({k: rcParamsOrig[k] for k in rcParamsOrig
if k not in STYLE_BLACKLIST})
|
Restore the `.rcParams` from the original rc file loaded by Matplotlib.
Style-blacklisted `.rcParams` (defined in
``matplotlib.style.core.STYLE_BLACKLIST``) are not updated.
| 19 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def rc_file_defaults():
# Deprecation warnings were already handled when creating rcParamsOrig, no
# need to reemit them here.
with _api.suppress_matplotlib_deprecation_warning():
from .style.core import STYLE_BLACKLIST
rcParams.update({k: rcParamsOrig[k] for k in rcParamsOrig
if k not in STYLE_BLACKLIST})
```
###Assistant :
Restore the `.rcParams` from the original rc file loaded by Matplotlib.
Style-blacklisted `.rcParams` (defined in
``matplotlib.style.core.STYLE_BLACKLIST``) are not updated.
|
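A minimal usage sketch, assuming Matplotlib is installed; the rcParam tweaked here (`lines.linewidth`) is only an illustration.
```Python
import matplotlib as mpl

mpl.rcParams["lines.linewidth"] = 4.0   # runtime override
mpl.rc_file_defaults()                  # restore values from the loaded rc file
print(mpl.rcParams["lines.linewidth"])  # back to the rc-file value
```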
63 | def lexer(self) -> Optional[Lexer]:
if isinstance(self._lexer, Lexer):
return self._lexer
try:
return get_lexer_by_name(
self._lexer,
stripnl=False,
ensurenl=True,
tabsize=self.tab_size,
)
except ClassNotFound:
return None
| The lexer for this syntax, or None if no lexer was found.
Tries to find the lexer by name if a string was passed to the constructor.
| 27 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def lexer(self) -> Optional[Lexer]:
if isinstance(self._lexer, Lexer):
return self._lexer
try:
return get_lexer_by_name(
self._lexer,
stripnl=False,
ensurenl=True,
tabsize=self.tab_size,
)
except ClassNotFound:
return None
```
###Assistant : The lexer for this syntax, or None if no lexer was found.
Tries to find the lexer by name if a string was passed to the constructor.
|
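A sketch of the fallback behaviour documented above, using Pygments directly (assumed installed): an unknown lexer name raises `ClassNotFound`, which the property turns into `None`.
```Python
from pygments.lexers import get_lexer_by_name
from pygments.util import ClassNotFound

def lookup(name, tab_size=4):
    # Mirrors the property: return a lexer for `name`, or None if unknown.
    try:
        return get_lexer_by_name(name, stripnl=False, ensurenl=True, tabsize=tab_size)
    except ClassNotFound:
        return None

print(lookup("python"))          # a PythonLexer instance
print(lookup("not-a-language"))  # None
```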
64 | def test_numeric_repl(file, multiline_file):
file.replace(multiline_file, r"Etiam", 123)
assert "123" in multiline_file.read_text()
|
This test covers cases where the replacement string is numeric. The CLI
parser yaml-fies it into a numeric type. If not converted back to a string
type in file.replace, a TypeError occurs when the replace is attempted. See
https://github.com/saltstack/salt/issues/9097 for more information.
| 42 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_numeric_repl(file, multiline_file):
file.replace(multiline_file, r"Etiam", 123)
assert "123" in multiline_file.read_text()
```
###Assistant :
This test covers cases where the replacement string is numeric. The CLI
parser yaml-fies it into a numeric type. If not converted back to a string
type in file.replace, a TypeError occurs when the replace is attempted. See
https://github.com/saltstack/salt/issues/9097 for more information.
|
65 | def set_interpret_parameters(self, segments=16):
self.interpretation_segments = segments
return self
|
Calculates interpretation score of image subsections by splitting the image into subsections, then using a "leave one out" method to calculate the score of each subsection by whiting out the subsection and measuring the delta of the output value.
Parameters:
segments (int): Number of interpretation segments to split image into.
| 50 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_interpret_parameters(self, segments=16):
self.interpretation_segments = segments
return self
```
###Assistant :
Calculates interpretation score of image subsections by splitting the image into subsections, then using a "leave one out" method to calculate the score of each subsection by whiting out the subsection and measuring the delta of the output value.
Parameters:
segments (int): Number of interpretation segments to split image into.
|
66 | def can_native_upsert(self) -> bool:
return sqlite3.sqlite_version_info >= (3, 24, 0)
|
Do we support native UPSERTs? This requires SQLite3 3.24+, plus some
more work we haven't done yet to tell what was inserted vs updated.
| 24 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def can_native_upsert(self) -> bool:
return sqlite3.sqlite_version_info >= (3, 24, 0)
```
###Assistant :
Do we support native UPSERTs? This requires SQLite3 3.24+, plus some
more work we haven't done yet to tell what was inserted vs updated.
|
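The same check can be run standalone with only the standard library; a quick sketch:
```Python
import sqlite3

# Native UPSERT (INSERT ... ON CONFLICT DO UPDATE) needs SQLite >= 3.24.
supports_upsert = sqlite3.sqlite_version_info >= (3, 24, 0)
print(sqlite3.sqlite_version, supports_upsert)
```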
67 | async def get_actors(self) -> dict:
reply = await self._client.get_all_actor_info(timeout=DEFAULT_RPC_TIMEOUT)
result = {}
for message in reply.actor_table_data:
data = self._message_to_dict(message=message, fields_to_decode=["actor_id"])
data = filter_fields(data, ActorState)
result[data["actor_id"]] = data
return result
| List all actor information from the cluster.
Returns:
{actor_id -> actor_data_in_dict}
actor_data_in_dict's schema is in ActorState
| 16 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def get_actors(self) -> dict:
reply = await self._client.get_all_actor_info(timeout=DEFAULT_RPC_TIMEOUT)
result = {}
for message in reply.actor_table_data:
data = self._message_to_dict(message=message, fields_to_decode=["actor_id"])
data = filter_fields(data, ActorState)
result[data["actor_id"]] = data
return result
```
###Assistant : List all actor information from the cluster.
Returns:
{actor_id -> actor_data_in_dict}
actor_data_in_dict's schema is in ActorState
|
68 | def insert_predictor_answer(self, insert):
model_interface = self.session.model_interface
data_store = self.session.data_store
select_data_query = insert.get('select_data_query')
if isinstance(select_data_query, str) is False or len(select_data_query) == 0:
self.packet(
ErrPacket,
err_code=ERR.ER_WRONG_ARGUMENTS,
msg="'select_data_query' should not be empty"
).send()
return
models = model_interface.get_models()
if insert['name'] in [x['name'] for x in models]:
self.packet(
ErrPacket,
err_code=ERR.ER_WRONG_ARGUMENTS,
msg=f"predictor with name '{insert['name']}'' already exists"
).send()
return
kwargs = {}
if isinstance(insert.get('training_options'), str) \
and len(insert['training_options']) > 0:
try:
kwargs = json.loads(insert['training_options'])
except Exception:
self.packet(
ErrPacket,
err_code=ERR.ER_WRONG_ARGUMENTS,
msg='training_options should be in valid JSON string'
).send()
return
integration = self.session.integration
if isinstance(integration, str) is False or len(integration) == 0:
self.packet(
ErrPacket,
err_code=ERR.ER_WRONG_ARGUMENTS,
msg='select_data_query can be used only in query from database'
).send()
return
insert['select_data_query'] = insert['select_data_query'].replace(r"\'", "'")
ds_name = data_store.get_vacant_name(insert['name'])
ds = data_store.save_datasource(ds_name, integration, {'query': insert['select_data_query']})
insert['predict'] = [x.strip() for x in insert['predict'].split(',')]
ds_data = data_store.get_datasource(ds_name)
if ds_data is None:
raise Exception(f"DataSource '{ds_name}' does not exists")
ds_columns = [x['name'] for x in ds_data['columns']]
for col in insert['predict']:
if col not in ds_columns:
data_store.delete_datasource(ds_name)
raise Exception(f"Column '{col}' not exists")
try:
insert['predict'] = self._check_predict_columns(insert['predict'], ds_columns)
except Exception:
data_store.delete_datasource(ds_name)
raise
model_interface.learn(
insert['name'], ds, insert['predict'], ds_data['id'], kwargs=kwargs, delete_ds_on_fail=True
)
self.packet(OkPacket).send()
| Start learning a new predictor.
Parameters:
- insert - dict with keys as columns of mindsb.predictors table.
| 16 | 181 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def insert_predictor_answer(self, insert):
model_interface = self.session.model_interface
data_store = self.session.data_store
select_data_query = insert.get('select_data_query')
if isinstance(select_data_query, str) is False or len(select_data_query) == 0:
self.packet(
ErrPacket,
err_code=ERR.ER_WRONG_ARGUMENTS,
msg="'select_data_query' should not be empty"
).send()
return
models = model_interface.get_models()
if insert['name'] in [x['name'] for x in models]:
self.packet(
ErrPacket,
err_code=ERR.ER_WRONG_ARGUMENTS,
msg=f"predictor with name '{insert['name']}'' already exists"
).send()
return
kwargs = {}
if isinstance(insert.get('training_options'), str) \
and len(insert['training_options']) > 0:
try:
kwargs = json.loads(insert['training_options'])
except Exception:
self.packet(
ErrPacket,
err_code=ERR.ER_WRONG_ARGUMENTS,
msg='training_options should be in valid JSON string'
).send()
return
integration = self.session.integration
if isinstance(integration, str) is False or len(integration) == 0:
self.packet(
ErrPacket,
err_code=ERR.ER_WRONG_ARGUMENTS,
msg='select_data_query can be used only in query from database'
).send()
return
insert['select_data_query'] = insert['select_data_query'].replace(r"\'", "'")
ds_name = data_store.get_vacant_name(insert['name'])
ds = data_store.save_datasource(ds_name, integration, {'query': insert['select_data_query']})
insert['predict'] = [x.strip() for x in insert['predict'].split(',')]
ds_data = data_store.get_datasource(ds_name)
if ds_data is None:
raise Exception(f"DataSource '{ds_name}' does not exists")
ds_columns = [x['name'] for x in ds_data['columns']]
for col in insert['predict']:
if col not in ds_columns:
data_store.delete_datasource(ds_name)
raise Exception(f"Column '{col}' not exists")
try:
insert['predict'] = self._check_predict_columns(insert['predict'], ds_columns)
except Exception:
data_store.delete_datasource(ds_name)
raise
model_interface.learn(
insert['name'], ds, insert['predict'], ds_data['id'], kwargs=kwargs, delete_ds_on_fail=True
)
self.packet(OkPacket).send()
```
###Assistant : Start learning a new predictor.
Parameters:
- insert - dict with keys as columns of mindsb.predictors table.
|
69 | def populate_historical_trade_data(self):
trade_data = self.__orderbook.pivot(
index="Date",
columns="Ticker",
values=[
"Type",
"Sector",
"Industry",
"Country",
"Price",
"Quantity",
"Fees",
"Premium",
"Investment",
"Side",
"Currency",
],
)
# Make historical prices columns a multi-index. This helps the merging.
self.portfolio_historical_prices.columns = pd.MultiIndex.from_product(
[["Close"], self.portfolio_historical_prices.columns]
)
# Merge with historical close prices (and fillna)
trade_data = pd.merge(
trade_data,
self.portfolio_historical_prices,
how="right",
left_index=True,
right_index=True,
).fillna(0)
# Accumulate quantity held by trade date
trade_data["Quantity"] = trade_data["Quantity"].cumsum()
trade_data["Investment"] = trade_data["Investment"].cumsum()
trade_data.loc[:, ("Investment", "Total")] = trade_data["Investment"][
self.tickers_list
].sum(axis=1)
self.historical_trade_data = trade_data
| Create a new dataframe to store historical prices by ticker | 10 | 78 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def populate_historical_trade_data(self):
trade_data = self.__orderbook.pivot(
index="Date",
columns="Ticker",
values=[
"Type",
"Sector",
"Industry",
"Country",
"Price",
"Quantity",
"Fees",
"Premium",
"Investment",
"Side",
"Currency",
],
)
# Make historical prices columns a multi-index. This helps the merging.
self.portfolio_historical_prices.columns = pd.MultiIndex.from_product(
[["Close"], self.portfolio_historical_prices.columns]
)
# Merge with historical close prices (and fillna)
trade_data = pd.merge(
trade_data,
self.portfolio_historical_prices,
how="right",
left_index=True,
right_index=True,
).fillna(0)
# Accumulate quantity held by trade date
trade_data["Quantity"] = trade_data["Quantity"].cumsum()
trade_data["Investment"] = trade_data["Investment"].cumsum()
trade_data.loc[:, ("Investment", "Total")] = trade_data["Investment"][
self.tickers_list
].sum(axis=1)
self.historical_trade_data = trade_data
```
###Assistant : Create a new dataframe to store historical prices by ticker
|
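A small pandas-only illustration of the MultiIndex trick commented in the code above (the ticker names are made up): wrapping the close prices under a top-level "Close" label lets them align with the pivoted (field, ticker) trade columns during the merge.
```Python
import pandas as pd

prices = pd.DataFrame({"AAPL": [150.0, 151.0], "TSLA": [700.0, 705.0]})
prices.columns = pd.MultiIndex.from_product([["Close"], prices.columns])
print(prices.columns.tolist())  # [('Close', 'AAPL'), ('Close', 'TSLA')]
```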
70 | def test_fetch_openml_equivalence_array_dataframe(monkeypatch, parser):
pytest.importorskip("pandas")
data_id = 61
_monkey_patch_webbased_functions(monkeypatch, data_id, gzip_response=True)
bunch_as_frame_true = fetch_openml(
data_id=data_id,
as_frame=True,
cache=False,
parser=parser,
)
bunch_as_frame_false = fetch_openml(
data_id=data_id,
as_frame=False,
cache=False,
parser=parser,
)
assert_allclose(bunch_as_frame_false.data, bunch_as_frame_true.data)
assert_array_equal(bunch_as_frame_false.target, bunch_as_frame_true.target)
# Known failure of PyPy for OpenML. See the following issue:
# https://github.com/scikit-learn/scikit-learn/issues/18906
@fails_if_pypy
@pytest.mark.parametrize("parser", ["liac-arff", "pandas"]) | Check the equivalence of the dataset when using `as_frame=False` and
`as_frame=True`.
| 11 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_fetch_openml_equivalence_array_dataframe(monkeypatch, parser):
pytest.importorskip("pandas")
data_id = 61
_monkey_patch_webbased_functions(monkeypatch, data_id, gzip_response=True)
bunch_as_frame_true = fetch_openml(
data_id=data_id,
as_frame=True,
cache=False,
parser=parser,
)
bunch_as_frame_false = fetch_openml(
data_id=data_id,
as_frame=False,
cache=False,
parser=parser,
)
assert_allclose(bunch_as_frame_false.data, bunch_as_frame_true.data)
assert_array_equal(bunch_as_frame_false.target, bunch_as_frame_true.target)
# Known failure of PyPy for OpenML. See the following issue:
# https://github.com/scikit-learn/scikit-learn/issues/18906
@fails_if_pypy
@pytest.mark.parametrize("parser", ["liac-arff", "pandas"])
```
###Assistant : Check the equivalence of the dataset when using `as_frame=False` and
`as_frame=True`.
|
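A hedged sketch of the equivalence being tested, run against the real OpenML service rather than the monkeypatched responses; it assumes scikit-learn >= 1.2 (for the `parser` argument) and network access.
```Python
from sklearn.datasets import fetch_openml

as_frame = fetch_openml(data_id=61, as_frame=True, parser="pandas")
as_array = fetch_openml(data_id=61, as_frame=False, parser="pandas")
print(as_frame.data.shape, as_array.data.shape)  # same shape either way
```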
71 | def wire_type(self):
if hasattr(self, '_m_wire_type'):
return self._m_wire_type
self._m_wire_type = KaitaiStream.resolve_enum(GoogleProtobuf.Pair.WireTypes, (self.key.value & 7))
return getattr(self, '_m_wire_type', None)
| "Wire type" is a part of the "key" that carries enough
information to parse value from the wire, i.e. read correct
amount of bytes, but there's not enough information to
interpret it unambiguously. For example, one can't clearly
distinguish 64-bit fixed-sized integers from 64-bit floats,
signed zigzag-encoded varints from regular unsigned varints,
arbitrary bytes from UTF-8 encoded strings, etc.
| 59 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def wire_type(self):
if hasattr(self, '_m_wire_type'):
return self._m_wire_type
self._m_wire_type = KaitaiStream.resolve_enum(GoogleProtobuf.Pair.WireTypes, (self.key.value & 7))
return getattr(self, '_m_wire_type', None)
```
###Assistant : "Wire type" is a part of the "key" that carries enough
information to parse value from the wire, i.e. read correct
amount of bytes, but there's not enough information to
interpret it unambiguously. For example, one can't clearly
distinguish 64-bit fixed-sized integers from 64-bit floats,
signed zigzag-encoded varints from regular unsigned varints,
arbitrary bytes from UTF-8 encoded strings, etc.
|
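A plain-Python illustration of the key layout the docstring refers to, independent of Kaitai Struct: the low three bits of a protobuf field key carry the wire type and the remaining bits the field number.
```Python
key = 0x12               # as seen on the wire: field number 2, wire type 2
wire_type = key & 7      # -> 2 (length-delimited: bytes, strings, sub-messages)
field_number = key >> 3  # -> 2
print(wire_type, field_number)
```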
72 | def address(self): # pragma: no cover
warnings.warn(
"Client.address is deprecated, use Client.peername instead.",
DeprecationWarning,
stacklevel=2,
)
return self.peername
| *Deprecated:* An outdated alias for Client.peername. | 6 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def address(self): # pragma: no cover
warnings.warn(
"Client.address is deprecated, use Client.peername instead.",
DeprecationWarning,
stacklevel=2,
)
return self.peername
```
###Assistant : *Deprecated:* An outdated alias for Client.peername.
|
73 | def jordan_block(kls, size=None, eigenvalue=None, *, band='upper', **kwargs):
if 'rows' in kwargs or 'cols' in kwargs:
msg =
if 'rows' in kwargs and 'cols' in kwargs:
msg += f | Returns a Jordan block
Parameters
==========
size : Integer, optional
Specifies the shape of the Jordan block matrix.
eigenvalue : Number or Symbol
Specifies the value for the main diagonal of the matrix.
.. note::
The keyword ``eigenval`` is also specified as an alias
of this keyword, but it is not recommended to use.
We may deprecate the alias in later release.
band : 'upper' or 'lower', optional
Specifies the position of the off-diagonal to put `1` s on.
cls : Matrix, optional
Specifies the matrix class of the output form.
If it is not specified, the class type where the method is
being executed on will be returned.
rows, cols : Integer, optional
Specifies the shape of the Jordan block matrix. See Notes
section for the details of how these keys work.
.. deprecated:: 1.4
The rows and cols parameters are deprecated and will be
removed in a future version.
Returns
=======
Matrix
A Jordan block matrix.
Raises
======
ValueError
If insufficient arguments are given for matrix size
specification, or no eigenvalue is given.
Examples
========
Creating a default Jordan block:
>>> from sympy import Matrix
>>> from sympy.abc import x
>>> Matrix.jordan_block(4, x)
Matrix([
[x, 1, 0, 0],
[0, x, 1, 0],
[0, 0, x, 1],
[0, 0, 0, x]])
Creating an alternative Jordan block matrix where `1` is on
lower off-diagonal:
>>> Matrix.jordan_block(4, x, band='lower')
Matrix([
[x, 0, 0, 0],
[1, x, 0, 0],
[0, 1, x, 0],
[0, 0, 1, x]])
Creating a Jordan block with keyword arguments
>>> Matrix.jordan_block(size=4, eigenvalue=x)
Matrix([
[x, 1, 0, 0],
[0, x, 1, 0],
[0, 0, x, 1],
[0, 0, 0, x]])
Notes
=====
.. deprecated:: 1.4
This feature is deprecated and will be removed in a future
version.
The keyword arguments ``size``, ``rows``, ``cols`` relates to
the Jordan block size specifications.
If you want to create a square Jordan block, specify either
one of the three arguments.
If you want to create a rectangular Jordan block, specify
``rows`` and ``cols`` individually.
+--------------------------------+---------------------+
| Arguments Given | Matrix Shape |
+----------+----------+----------+----------+----------+
| size | rows | cols | rows | cols |
+==========+==========+==========+==========+==========+
| size | Any | size | size |
+----------+----------+----------+----------+----------+
| | None | ValueError |
| +----------+----------+----------+----------+
| None | rows | None | rows | rows |
| +----------+----------+----------+----------+
| | None | cols | cols | cols |
+ +----------+----------+----------+----------+
| | rows | cols | rows | cols |
+----------+----------+----------+----------+----------+
References
==========
.. [1] https://en.wikipedia.org/wiki/Jordan_matrix
The 'rows' and 'cols' keywords to Matrix.jordan_block() are
deprecated. Use the 'size' parameter instead.
\
To get a non-square Jordan block matrix use a more generic
banded matrix constructor, like
| 442 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def jordan_block(kls, size=None, eigenvalue=None, *, band='upper', **kwargs):
if 'rows' in kwargs or 'cols' in kwargs:
msg =
if 'rows' in kwargs and 'cols' in kwargs:
msg += f
```
###Assistant : Returns a Jordan block
Parameters
==========
size : Integer, optional
Specifies the shape of the Jordan block matrix.
eigenvalue : Number or Symbol
Specifies the value for the main diagonal of the matrix.
.. note::
The keyword ``eigenval`` is also specified as an alias
of this keyword, but it is not recommended to use.
We may deprecate the alias in later release.
band : 'upper' or 'lower', optional
Specifies the position of the off-diagonal to put `1` s on.
cls : Matrix, optional
Specifies the matrix class of the output form.
If it is not specified, the class type where the method is
being executed on will be returned.
rows, cols : Integer, optional
Specifies the shape of the Jordan block matrix. See Notes
section for the details of how these keys work.
.. deprecated:: 1.4
The rows and cols parameters are deprecated and will be
removed in a future version.
Returns
=======
Matrix
A Jordan block matrix.
Raises
======
ValueError
If insufficient arguments are given for matrix size
specification, or no eigenvalue is given.
Examples
========
Creating a default Jordan block:
>>> from sympy import Matrix
>>> from sympy.abc import x
>>> Matrix.jordan_block(4, x)
Matrix([
[x, 1, 0, 0],
[0, x, 1, 0],
[0, 0, x, 1],
[0, 0, 0, x]])
Creating an alternative Jordan block matrix where `1` is on
lower off-diagonal:
>>> Matrix.jordan_block(4, x, band='lower')
Matrix([
[x, 0, 0, 0],
[1, x, 0, 0],
[0, 1, x, 0],
[0, 0, 1, x]])
Creating a Jordan block with keyword arguments
>>> Matrix.jordan_block(size=4, eigenvalue=x)
Matrix([
[x, 1, 0, 0],
[0, x, 1, 0],
[0, 0, x, 1],
[0, 0, 0, x]])
Notes
=====
.. deprecated:: 1.4
This feature is deprecated and will be removed in a future
version.
The keyword arguments ``size``, ``rows``, ``cols`` relates to
the Jordan block size specifications.
If you want to create a square Jordan block, specify either
one of the three arguments.
If you want to create a rectangular Jordan block, specify
``rows`` and ``cols`` individually.
+--------------------------------+---------------------+
| Arguments Given | Matrix Shape |
+----------+----------+----------+----------+----------+
| size | rows | cols | rows | cols |
+==========+==========+==========+==========+==========+
| size | Any | size | size |
+----------+----------+----------+----------+----------+
| | None | ValueError |
| +----------+----------+----------+----------+
| None | rows | None | rows | rows |
| +----------+----------+----------+----------+
| | None | cols | cols | cols |
+ +----------+----------+----------+----------+
| | rows | cols | rows | cols |
+----------+----------+----------+----------+----------+
References
==========
.. [1] https://en.wikipedia.org/wiki/Jordan_matrix
The 'rows' and 'cols' keywords to Matrix.jordan_block() are
deprecated. Use the 'size' parameter instead.
\
To get a non-square Jordan block matrix use a more generic
banded matrix constructor, like
|
74 | def get_openapi_specs() -> dict:
app = get_app()
return get_openapi(
title=app.title,
version=app.version,
openapi_version=app.openapi_version,
description=app.description,
routes=app.routes,
servers=[{"url": "http://localhost:8000"}],
)
|
Used to autogenerate OpenAPI specs file to use in the documentation.
Returns `servers` to specify base URL for OpenAPI Playground (see https://swagger.io/docs/specification/api-host-and-base-path/)
See `.github/utils/generate_openapi_specs.py`
| 24 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_openapi_specs() -> dict:
app = get_app()
return get_openapi(
title=app.title,
version=app.version,
openapi_version=app.openapi_version,
description=app.description,
routes=app.routes,
servers=[{"url": "http://localhost:8000"}],
)
```
###Assistant :
Used to autogenerate OpenAPI specs file to use in the documentation.
Returns `servers` to specify base URL for OpenAPI Playground (see https://swagger.io/docs/specification/api-host-and-base-path/)
See `.github/utils/generate_openapi_specs.py`
|
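A hedged sketch of the underlying FastAPI helper the function wraps, using a throwaway `FastAPI` app rather than Haystack's; the argument names follow `fastapi.openapi.utils.get_openapi`.
```Python
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi

app = FastAPI(title="demo-api", version="0.1.0")
specs = get_openapi(
    title=app.title,
    version=app.version,
    routes=app.routes,
    servers=[{"url": "http://localhost:8000"}],
)
print(specs["info"], specs["servers"])
```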
75 | def get_all_employee_emails(company):
employee_list = frappe.get_all(
"Employee", fields=["name", "employee_name"], filters={"status": "Active", "company": company}
)
employee_emails = []
for employee in employee_list:
if not employee:
continue
user, company_email, personal_email = frappe.db.get_value(
"Employee", employee, ["user_id", "company_email", "personal_email"]
)
email = user or company_email or personal_email
if email:
employee_emails.append(email)
return employee_emails
| Returns list of employee emails either based on user_id or company_email | 11 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_all_employee_emails(company):
employee_list = frappe.get_all(
"Employee", fields=["name", "employee_name"], filters={"status": "Active", "company": company}
)
employee_emails = []
for employee in employee_list:
if not employee:
continue
user, company_email, personal_email = frappe.db.get_value(
"Employee", employee, ["user_id", "company_email", "personal_email"]
)
email = user or company_email or personal_email
if email:
employee_emails.append(email)
return employee_emails
```
###Assistant : Returns list of employee emails either based on user_id or company_email
|
76 | def test_in_non_semver_projects_resolved_in_next_release_is_equated_to_in_release(self):
release_1 = self.create_release(
date_added=timezone.now() - timedelta(minutes=45), version="foobar 1"
)
release_2 = self.create_release(version="foobar 2")
self.create_release(version="foobar 3")
group = self.store_event(
data={
"timestamp": iso_format(before_now(seconds=12)),
"fingerprint": ["group-1"],
"release": release_1.version,
},
project_id=self.project.id,
).group
self.login_as(user=self.user)
response = self.get_success_response(
qs_params={"id": group.id}, status="resolvedInNextRelease"
)
assert response.data["status"] == "resolved"
assert response.data["statusDetails"]["inNextRelease"]
grp_resolution = GroupResolution.objects.filter(group=group)
assert len(grp_resolution) == 1
grp_resolution = grp_resolution[0]
assert grp_resolution.current_release_version == release_1.version
assert grp_resolution.release.id == release_2.id
assert grp_resolution.type == GroupResolution.Type.in_release
assert grp_resolution.status == GroupResolution.Status.resolved
activity = Activity.objects.filter(
group=grp_resolution.group,
type=Activity.SET_RESOLVED_IN_RELEASE,
ident=grp_resolution.id,
).first()
assert activity.data["version"] == release_2.version
|
Test that ensures that if we basically know the next release when clicking on Resolved
In Next Release because that release exists, then we can short circuit setting
GroupResolution to type "inNextRelease", and then having `clear_expired_resolutions` run
once a new release is created to convert GroupResolution to in_release and set Activity.
Basically we treat "ResolvedInNextRelease" as "ResolvedInRelease" when there is a release
that was created after the last release associated with the group being resolved
| 75 | 81 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_in_non_semver_projects_resolved_in_next_release_is_equated_to_in_release(self):
release_1 = self.create_release(
date_added=timezone.now() - timedelta(minutes=45), version="foobar 1"
)
release_2 = self.create_release(version="foobar 2")
self.create_release(version="foobar 3")
group = self.store_event(
data={
"timestamp": iso_format(before_now(seconds=12)),
"fingerprint": ["group-1"],
"release": release_1.version,
},
project_id=self.project.id,
).group
self.login_as(user=self.user)
response = self.get_success_response(
qs_params={"id": group.id}, status="resolvedInNextRelease"
)
assert response.data["status"] == "resolved"
assert response.data["statusDetails"]["inNextRelease"]
grp_resolution = GroupResolution.objects.filter(group=group)
assert len(grp_resolution) == 1
grp_resolution = grp_resolution[0]
assert grp_resolution.current_release_version == release_1.version
assert grp_resolution.release.id == release_2.id
assert grp_resolution.type == GroupResolution.Type.in_release
assert grp_resolution.status == GroupResolution.Status.resolved
activity = Activity.objects.filter(
group=grp_resolution.group,
type=Activity.SET_RESOLVED_IN_RELEASE,
ident=grp_resolution.id,
).first()
assert activity.data["version"] == release_2.version
```
###Assistant :
Test that ensures that if we basically know the next release when clicking on Resolved
In Next Release because that release exists, then we can short circuit setting
GroupResolution to type "inNextRelease", and then having `clear_expired_resolutions` run
once a new release is created to convert GroupResolution to in_release and set Activity.
Basically we treat "ResolvedInNextRelease" as "ResolvedInRelease" when there is a release
that was created after the last release associated with the group being resolved
|
77 | def subscription_order_updated_webhook(subscription_webhook):
return subscription_webhook(
ORDER_UPDATED_SUBSCRIPTION_QUERY, WebhookEventAsyncType.ORDER_UPDATED
)
ORDER_CONFIRMED_SUBSCRIPTION_QUERY =
@pytest.fixture |
subscription{
event{
...on OrderConfirmed{
order{
id
}
}
}
}
| 10 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def subscription_order_updated_webhook(subscription_webhook):
return subscription_webhook(
ORDER_UPDATED_SUBSCRIPTION_QUERY, WebhookEventAsyncType.ORDER_UPDATED
)
ORDER_CONFIRMED_SUBSCRIPTION_QUERY =
@pytest.fixture
```
###Assistant :
subscription{
event{
...on OrderConfirmed{
order{
id
}
}
}
}
|
78 | def cast_to_compatible_types(table):
schema = table.schema
new_schema = schema
need_cast = False
uint_to_int_cast = False
new_cols = {}
uint_to_int_map = {
pa.uint8(): pa.int16(),
pa.uint16(): pa.int32(),
pa.uint32(): pa.int64(),
pa.uint64(): pa.int64(), # May cause overflow
}
for i, field in enumerate(schema):
# Currently OmniSci doesn't support Arrow table import with
# dictionary columns. Here we cast dictionaries until support
# is in place.
# https://github.com/modin-project/modin/issues/1738
if pa.types.is_dictionary(field.type):
# Conversion for dictionary of null type to string is not supported
# in Arrow. Build new column for this case for now.
if pa.types.is_null(field.type.value_type):
mask = np.full(table.num_rows, True, dtype=bool)
new_col_data = np.empty(table.num_rows, dtype=str)
new_col = pa.array(new_col_data, pa.string(), mask)
new_cols[i] = new_col
else:
need_cast = True
new_field = pa.field(
field.name, pa.string(), field.nullable, field.metadata
)
new_schema = new_schema.set(i, new_field)
# OmniSci doesn't support importing Arrow's date type:
# https://github.com/omnisci/omniscidb/issues/678
elif pa.types.is_date(field.type):
# Arrow's date is the number of days since the UNIX-epoch, so we can convert it
# to a timestamp[s] (number of seconds since the UNIX-epoch) without losing precision
new_field = pa.field(
field.name, pa.timestamp("s"), field.nullable, field.metadata
)
new_schema = new_schema.set(i, new_field)
need_cast = True
# OmniSci doesn't support unsigned types
elif pa.types.is_unsigned_integer(field.type):
new_field = pa.field(
field.name,
uint_to_int_map[field.type],
field.nullable,
field.metadata,
)
new_schema = new_schema.set(i, new_field)
need_cast = True
uint_to_int_cast = True
# Such cast may affect the data, so we have to raise a warning about it
if uint_to_int_cast:
ErrorMessage.single_warning(
"OmniSci does not support unsigned integer types, such types will be rounded up to the signed equivalent."
)
for i, col in new_cols.items():
table = table.set_column(i, new_schema[i], col)
if need_cast:
try:
table = table.cast(new_schema)
except pa.lib.ArrowInvalid as e:
raise (OverflowError if uint_to_int_cast else RuntimeError)(
"An error occurred when trying to convert unsupported by OmniSci 'dtypes' "
+ f"to the supported ones, the schema to cast was: \n{new_schema}."
) from e
return table
|
Cast PyArrow table to be fully compatible with OmniSci.
Parameters
----------
table : pyarrow.Table
Source table.
Returns
-------
pyarrow.Table
Table with fully compatible types with OmniSci.
| 26 | 295 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cast_to_compatible_types(table):
schema = table.schema
new_schema = schema
need_cast = False
uint_to_int_cast = False
new_cols = {}
uint_to_int_map = {
pa.uint8(): pa.int16(),
pa.uint16(): pa.int32(),
pa.uint32(): pa.int64(),
pa.uint64(): pa.int64(), # May cause overflow
}
for i, field in enumerate(schema):
# Currently OmniSci doesn't support Arrow table import with
# dictionary columns. Here we cast dictionaries until support
# is in place.
# https://github.com/modin-project/modin/issues/1738
if pa.types.is_dictionary(field.type):
# Conversion for dictionary of null type to string is not supported
# in Arrow. Build new column for this case for now.
if pa.types.is_null(field.type.value_type):
mask = np.full(table.num_rows, True, dtype=bool)
new_col_data = np.empty(table.num_rows, dtype=str)
new_col = pa.array(new_col_data, pa.string(), mask)
new_cols[i] = new_col
else:
need_cast = True
new_field = pa.field(
field.name, pa.string(), field.nullable, field.metadata
)
new_schema = new_schema.set(i, new_field)
# OmniSci doesn't support importing Arrow's date type:
# https://github.com/omnisci/omniscidb/issues/678
elif pa.types.is_date(field.type):
# Arrow's date is the number of days since the UNIX-epoch, so we can convert it
# to a timestamp[s] (number of seconds since the UNIX-epoch) without losing precision
new_field = pa.field(
field.name, pa.timestamp("s"), field.nullable, field.metadata
)
new_schema = new_schema.set(i, new_field)
need_cast = True
# OmniSci doesn't support unsigned types
elif pa.types.is_unsigned_integer(field.type):
new_field = pa.field(
field.name,
uint_to_int_map[field.type],
field.nullable,
field.metadata,
)
new_schema = new_schema.set(i, new_field)
need_cast = True
uint_to_int_cast = True
# Such cast may affect the data, so we have to raise a warning about it
if uint_to_int_cast:
ErrorMessage.single_warning(
"OmniSci does not support unsigned integer types, such types will be rounded up to the signed equivalent."
)
for i, col in new_cols.items():
table = table.set_column(i, new_schema[i], col)
if need_cast:
try:
table = table.cast(new_schema)
except pa.lib.ArrowInvalid as e:
raise (OverflowError if uint_to_int_cast else RuntimeError)(
"An error occurred when trying to convert unsupported by OmniSci 'dtypes' "
+ f"to the supported ones, the schema to cast was: \n{new_schema}."
) from e
return table
```
###Assistant :
Cast PyArrow table to be fully compatible with OmniSci.
Parameters
----------
table : pyarrow.Table
Source table.
Returns
-------
pyarrow.Table
Table with fully compatible types with OmniSci.
|
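A tiny pyarrow-only sketch of the unsigned-to-signed widening described above (assumes pyarrow is installed); widening `uint32` to `int64` cannot overflow, which is why only the `uint64` mapping carries the overflow caveat.
```Python
import pyarrow as pa

table = pa.table({"a": pa.array([1, 2, 3], type=pa.uint32())})
widened = table.cast(pa.schema([("a", pa.int64())]))
print(widened.schema)  # a: int64
```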
79 | def data(self) -> 'DataRequest._DataContent':
return DataRequest._DataContent(self.proto.data)
| Get the data contained in this data request
:return: the data content as an instance of _DataContent wrapping docs and groundtruths
| 21 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def data(self) -> 'DataRequest._DataContent':
return DataRequest._DataContent(self.proto.data)
```
###Assistant : Get the data contained in this data request
:return: the data content as an instance of _DataContent wrapping docs and groundtruths
|
80 | def model_from_json(json_string, custom_objects=None):
from keras.layers import (
deserialize_from_json,
) # pylint: disable=g-import-not-at-top
return deserialize_from_json(json_string, custom_objects=custom_objects)
| Parses a JSON model configuration string and returns a model instance.
Usage:
>>> model = tf.keras.Sequential([
... tf.keras.layers.Dense(5, input_shape=(3,)),
... tf.keras.layers.Softmax()])
>>> config = model.to_json()
>>> loaded_model = tf.keras.models.model_from_json(config)
Args:
json_string: JSON string encoding a model configuration.
custom_objects: Optional dictionary mapping names
(strings) to custom classes or functions to be
considered during deserialization.
Returns:
A Keras model instance (uncompiled).
| 59 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def model_from_json(json_string, custom_objects=None):
from keras.layers import (
deserialize_from_json,
) # pylint: disable=g-import-not-at-top
return deserialize_from_json(json_string, custom_objects=custom_objects)
```
###Assistant : Parses a JSON model configuration string and returns a model instance.
Usage:
>>> model = tf.keras.Sequential([
... tf.keras.layers.Dense(5, input_shape=(3,)),
... tf.keras.layers.Softmax()])
>>> config = model.to_json()
>>> loaded_model = tf.keras.models.model_from_json(config)
Args:
json_string: JSON string encoding a model configuration.
custom_objects: Optional dictionary mapping names
(strings) to custom classes or functions to be
considered during deserialization.
Returns:
A Keras model instance (uncompiled).
|
81 | def test_do_nothing_if_converted(self, run_convert_mock):
stdout, _ = self.call_command()
run_convert_mock.assert_not_called()
self.assertIn("Converting all PNG thumbnails to WebP", stdout)
|
GIVEN:
- Document exists with default WebP thumbnail path
WHEN:
- Thumbnail conversion is attempted
THEN:
- Nothing is converted
| 20 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_do_nothing_if_converted(self, run_convert_mock):
stdout, _ = self.call_command()
run_convert_mock.assert_not_called()
self.assertIn("Converting all PNG thumbnails to WebP", stdout)
```
###Assistant :
GIVEN:
- Document exists with default WebP thumbnail path
WHEN:
- Thumbnail conversion is attempted
THEN:
- Nothing is converted
|
82 | def __ror__(self, other):
if isinstance(other, str_type):
other = self._literalStringClass(other)
if not isinstance(other, ParserElement):
raise TypeError(
"Cannot combine element of type {} with ParserElement".format(
type(other).__name__
)
)
return other | self
|
Implementation of ``|`` operator when left operand is not a :class:`ParserElement`
| 11 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __ror__(self, other):
if isinstance(other, str_type):
other = self._literalStringClass(other)
if not isinstance(other, ParserElement):
raise TypeError(
"Cannot combine element of type {} with ParserElement".format(
type(other).__name__
)
)
return other | self
```
###Assistant :
Implementation of ``|`` operator when left operand is not a :class:`ParserElement`
|
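A brief sketch of what the reflected operator enables, assuming pyparsing 3.x: with a plain string on the left of `|`, Python falls back to `ParserElement.__ror__`, which promotes the string to a literal before building the alternation.
```Python
import pyparsing as pp

# str has no __or__ that accepts a ParserElement, so __ror__ runs here.
expr = "hello" | pp.Word(pp.nums)
print(expr.parse_string("hello"))  # ['hello']
print(expr.parse_string("42"))     # ['42']
```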
83 | def shash(value):
length = len(value)
if length == 0:
return 0
x = Hash.ordinal(value[0]) << 7
for c in value:
x = (1000003 * x) ^ Hash.ordinal(c)
x ^= length
x &= 0xFFFFFFFFFFFFFFFF
if x == -1:
x = -2
# Convert to C long type
return ctypes.c_long(x).value
|
Returns a Python 2.7 hash for a string.
Logic ported from the 2.7 Python branch: cpython/Objects/stringobject.c
Method: static long string_hash(PyStringObject *a)
Args:
value: input string
Returns:
Python 2.7 hash
| 29 | 48 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def shash(value):
length = len(value)
if length == 0:
return 0
x = Hash.ordinal(value[0]) << 7
for c in value:
x = (1000003 * x) ^ Hash.ordinal(c)
x ^= length
x &= 0xFFFFFFFFFFFFFFFF
if x == -1:
x = -2
# Convert to C long type
return ctypes.c_long(x).value
```
###Assistant :
Returns a Python 2.7 hash for a string.
Logic ported from the 2.7 Python branch: cpython/Objects/stringobject.c
Method: static long string_hash(PyStringObject *a)
Args:
value: input string
Returns:
Python 2.7 hash
|
84 | def data_files_with_one_split_and_metadata(tmp_path, auto_text_file):
data_dir = tmp_path / "autofolder_data_dir_with_metadata_one_split"
data_dir.mkdir(parents=True, exist_ok=True)
subdir = data_dir / "subdir"
subdir.mkdir(parents=True, exist_ok=True)
filename = data_dir / "file.txt"
shutil.copyfile(auto_text_file, filename)
filename2 = data_dir / "file2.txt"
shutil.copyfile(auto_text_file, filename2)
filename3 = subdir / "file3.txt" # in subdir
shutil.copyfile(auto_text_file, filename3)
metadata_filename = data_dir / "metadata.jsonl"
metadata = textwrap.dedent(
)
with open(metadata_filename, "w", encoding="utf-8") as f:
f.write(metadata)
data_files_with_one_split_and_metadata = DataFilesDict.from_local_or_remote(
get_data_patterns_locally(data_dir), data_dir
)
assert len(data_files_with_one_split_and_metadata) == 1
assert len(data_files_with_one_split_and_metadata["train"]) == 4
return data_files_with_one_split_and_metadata
@pytest.fixture | \
{"file_name": "file.txt", "additional_feature": "Dummy file"}
{"file_name": "file2.txt", "additional_feature": "Second dummy file"}
{"file_name": "subdir/file3.txt", "additional_feature": "Third dummy file"}
| 18 | 74 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def data_files_with_one_split_and_metadata(tmp_path, auto_text_file):
data_dir = tmp_path / "autofolder_data_dir_with_metadata_one_split"
data_dir.mkdir(parents=True, exist_ok=True)
subdir = data_dir / "subdir"
subdir.mkdir(parents=True, exist_ok=True)
filename = data_dir / "file.txt"
shutil.copyfile(auto_text_file, filename)
filename2 = data_dir / "file2.txt"
shutil.copyfile(auto_text_file, filename2)
filename3 = subdir / "file3.txt" # in subdir
shutil.copyfile(auto_text_file, filename3)
metadata_filename = data_dir / "metadata.jsonl"
metadata = textwrap.dedent(
)
with open(metadata_filename, "w", encoding="utf-8") as f:
f.write(metadata)
data_files_with_one_split_and_metadata = DataFilesDict.from_local_or_remote(
get_data_patterns_locally(data_dir), data_dir
)
assert len(data_files_with_one_split_and_metadata) == 1
assert len(data_files_with_one_split_and_metadata["train"]) == 4
return data_files_with_one_split_and_metadata
@pytest.fixture
```
###Assistant : \
{"file_name": "file.txt", "additional_feature": "Dummy file"}
{"file_name": "file2.txt", "additional_feature": "Second dummy file"}
{"file_name": "subdir/file3.txt", "additional_feature": "Third dummy file"}
|
85 | def test_upload_room_keys_wrong_version(self) -> None:
version = self.get_success(
self.handler.create_version(
self.local_user,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
},
)
)
self.assertEqual(version, "1")
version = self.get_success(
self.handler.create_version(
self.local_user,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "second_version_auth_data",
},
)
)
self.assertEqual(version, "2")
e = self.get_failure(
self.handler.upload_room_keys(self.local_user, "1", room_keys), SynapseError
)
res = e.value.code
self.assertEqual(res, 403)
| Check that we get a 403 on uploading keys for an old version | 13 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_upload_room_keys_wrong_version(self) -> None:
version = self.get_success(
self.handler.create_version(
self.local_user,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
},
)
)
self.assertEqual(version, "1")
version = self.get_success(
self.handler.create_version(
self.local_user,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "second_version_auth_data",
},
)
)
self.assertEqual(version, "2")
e = self.get_failure(
self.handler.upload_room_keys(self.local_user, "1", room_keys), SynapseError
)
res = e.value.code
self.assertEqual(res, 403)
```
###Assistant : Check that we get a 403 on uploading keys for an old version
|
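The `room_keys` value used by the test is defined elsewhere in the test module; a hypothetical stand-in in the Matrix end-to-end key backup shape (illustrative IDs and values only) could look like this:
```Python
# Hypothetical stand-in for the room_keys fixture referenced above;
# the nesting follows the Matrix e2e key backup request format.
room_keys = {
    "rooms": {
        "!abc123:example.org": {
            "sessions": {
                "session_id_1": {
                    "first_message_index": 1,
                    "forwarded_count": 1,
                    "is_verified": False,
                    "session_data": "opaque encrypted session blob",
                }
            }
        }
    }
}
```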
86 | def previewtrain(self) -> Dict[str, List[Union[Image.Image, ImageTk.PhotoImage, None, float]]]:
return self._previewtrain
| dict or ``None``: The training preview images. Dictionary key is the image name
(`str`). Dictionary values are a `list` of the training image (:class:`PIL.Image`), the
image formatted for tkinter display (:class:`PIL.ImageTk.PhotoImage`), and the last

modification time of the image (`float`).
The value of this property is ``None`` if training is not running or there are no preview
images available.
| 58 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def previewtrain(self) -> Dict[str, List[Union[Image.Image, ImageTk.PhotoImage, None, float]]]:
return self._previewtrain
```
###Assistant : dict or ``None``: The training preview images. Dictionary key is the image name
(`str`). Dictionary values are a `list` of the training image (:class:`PIL.Image`), the
image formatted for tkinter display (:class:`PIL.ImageTk.PhotoImage`), and the last
modification time of the image (`float`).
The value of this property is ``None`` if training is not running or there are no preview
images available.
|
87 | def test_padding(self):
n = coord_net_spec()
ax, a, b = coord_map_from_to(n.deconv, n.data)
pad = random.randint(0, 10)
# conv padding
n = coord_net_spec(pad=pad)
_, a_pad, b_pad = coord_map_from_to(n.deconv, n.data)
self.assertEquals(a, a_pad)
self.assertEquals(b - pad, b_pad)
# deconv padding
n = coord_net_spec(dpad=pad)
_, a_pad, b_pad = coord_map_from_to(n.deconv, n.data)
self.assertEquals(a, a_pad)
self.assertEquals(b + pad, b_pad)
# pad both to cancel out
n = coord_net_spec(pad=pad, dpad=pad)
_, a_pad, b_pad = coord_map_from_to(n.deconv, n.data)
self.assertEquals(a, a_pad)
self.assertEquals(b, b_pad)
|
Padding conv adds offset while padding deconv subtracts offset.
| 9 | 71 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_padding(self):
n = coord_net_spec()
ax, a, b = coord_map_from_to(n.deconv, n.data)
pad = random.randint(0, 10)
# conv padding
n = coord_net_spec(pad=pad)
_, a_pad, b_pad = coord_map_from_to(n.deconv, n.data)
self.assertEquals(a, a_pad)
self.assertEquals(b - pad, b_pad)
# deconv padding
n = coord_net_spec(dpad=pad)
_, a_pad, b_pad = coord_map_from_to(n.deconv, n.data)
self.assertEquals(a, a_pad)
self.assertEquals(b + pad, b_pad)
# pad both to cancel out
n = coord_net_spec(pad=pad, dpad=pad)
_, a_pad, b_pad = coord_map_from_to(n.deconv, n.data)
self.assertEquals(a, a_pad)
self.assertEquals(b, b_pad)
```
###Assistant :
Padding conv adds offset while padding deconv subtracts offset.
|
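For a single layer, the map that `coord_map_from_to` composes is affine (input coordinate = a·output coordinate + b). A minimal sketch under the usual convention x_in = s·x_out + (k − 1)/2 − p (an illustration, not Caffe's actual `coord_map` code) shows why extra padding shifts only the offset term:
```Python
def conv_coord_map(stride, kernel, pad):
    # Affine output->input coordinate map of a convolution:
    # x_in = stride * x_out + (kernel - 1) / 2 - pad.
    a = stride
    b = (kernel - 1) / 2.0 - pad
    return a, b


a, b = conv_coord_map(stride=2, kernel=3, pad=0)
a_pad, b_pad = conv_coord_map(stride=2, kernel=3, pad=1)
assert a_pad == a      # the scale is untouched by padding
assert b_pad == b - 1  # padding moves only the offset
```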
88 | def require_cuda(test_case):
return unittest.skipUnless(torch.cuda.is_available(), "test requires a GPU")(test_case)
|
Decorator marking a test that requires CUDA. These tests are skipped when no GPU is available.
| 17 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def require_cuda(test_case):
return unittest.skipUnless(torch.cuda.is_available(), "test requires a GPU")(test_case)
```
###Assistant :
Decorator marking a test that requires CUDA. These tests are skipped when no GPU is available.
|
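A usage sketch with a hypothetical test class (the decorator is reproduced so the snippet is self-contained):
```Python
import unittest

import torch


def require_cuda(test_case):
    # Skip the wrapped test when no CUDA device is visible.
    return unittest.skipUnless(torch.cuda.is_available(), "test requires a GPU")(test_case)


class ExampleTest(unittest.TestCase):
    @require_cuda
    def test_moves_tensor_to_gpu(self):
        # Only runs on machines with at least one CUDA device.
        self.assertTrue(torch.ones(2).cuda().is_cuda)
```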
89 | def _flush_periodically(self) -> None:
while self._active:
# flush is thread-safe; it acquires and releases the lock internally
self.flush()
time.sleep(self._flush_period)
|
Whilst this handler is active, flush the handler periodically.
| 9 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _flush_periodically(self) -> None:
while self._active:
# flush is thread-safe; it acquires and releases the lock internally
self.flush()
time.sleep(self._flush_period)
```
###Assistant :
Whilst this handler is active, flush the handler periodically.
|
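The class that owns `_active` and `_flush_period` is not shown; a hypothetical, self-contained sketch of the surrounding pattern (a buffering handler plus a daemon thread that flushes it on a timer) might look like this:
```Python
import logging
import logging.handlers
import threading
import time


class PeriodicFlushHandler(logging.handlers.MemoryHandler):
    """Hypothetical buffering handler that also flushes on a fixed period."""

    def __init__(self, capacity, target, flush_period=5.0):
        super().__init__(capacity, target=target)
        self._flush_period = flush_period
        self._active = True
        # Daemon thread so the interpreter can exit without joining it.
        threading.Thread(target=self._flush_periodically, daemon=True).start()

    def _flush_periodically(self):
        while self._active:
            # flush() is thread-safe; it acquires and releases the lock internally.
            self.flush()
            time.sleep(self._flush_period)

    def close(self):
        self._active = False
        super().close()
```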
90 | def save_flagged(self, dir, label, data, encryption_key) -> str | Dict:
if "confidences" in data:
return json.dumps(
{
example["label"]: example["confidence"]
for example in data["confidences"]
}
)
else:
return data["label"]
|
Returns:
Either a string representing the main category label, or a dictionary with category keys mapping to confidence levels.
| 19 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def save_flagged(self, dir, label, data, encryption_key) -> str | Dict:
if "confidences" in data:
return json.dumps(
{
example["label"]: example["confidence"]
for example in data["confidences"]
}
)
else:
return data["label"]
```
###Assistant :
Returns:
Either a string representing the main category label, or a dictionary with category keys mapping to confidence levels.
|
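As a standalone illustration of the transformation (hypothetical component output, not the full flagging pipeline):
```Python
import json

# Hypothetical label-component output with per-class confidences.
data = {
    "label": "cat",
    "confidences": [
        {"label": "cat", "confidence": 0.92},
        {"label": "dog", "confidence": 0.08},
    ],
}

# With confidences present, the flagged value is a JSON object keyed by label.
flagged = json.dumps({ex["label"]: ex["confidence"] for ex in data["confidences"]})
print(flagged)  # {"cat": 0.92, "dog": 0.08}

# Without confidences, only the main label string would be stored.
print({"label": "cat"}["label"])  # cat
```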
91 | def test_higher_rank_inputs_for_importance_weights(self):
for fw in framework_iterator(frameworks=("torch", "tf"), session=True):
vtrace = vtrace_tf if fw != "torch" else vtrace_torch
if fw == "tf":
inputs_ = {
"log_rhos": tf1.placeholder(
dtype=tf.float32, shape=[None, None, 1]
),
"discounts": tf1.placeholder(
dtype=tf.float32, shape=[None, None, 1]
),
"rewards": tf1.placeholder(
dtype=tf.float32, shape=[None, None, 42]
),
"values": tf1.placeholder(dtype=tf.float32, shape=[None, None, 42]),
"bootstrap_value": tf1.placeholder(
dtype=tf.float32, shape=[None, 42]
),
}
else:
inputs_ = {
"log_rhos": Box(-1.0, 1.0, (8, 10, 1)).sample(),
"discounts": Box(-1.0, 1.0, (8, 10, 1)).sample(),
"rewards": Box(-1.0, 1.0, (8, 10, 42)).sample(),
"values": Box(-1.0, 1.0, (8, 10, 42)).sample(),
"bootstrap_value": Box(-1.0, 1.0, (10, 42)).sample(),
}
output = vtrace.from_importance_weights(**inputs_)
check(int(output.vs.shape[-1]), 42)
| Checks support for additional dimensions in inputs. | 7 | 96 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_higher_rank_inputs_for_importance_weights(self):
for fw in framework_iterator(frameworks=("torch", "tf"), session=True):
vtrace = vtrace_tf if fw != "torch" else vtrace_torch
if fw == "tf":
inputs_ = {
"log_rhos": tf1.placeholder(
dtype=tf.float32, shape=[None, None, 1]
),
"discounts": tf1.placeholder(
dtype=tf.float32, shape=[None, None, 1]
),
"rewards": tf1.placeholder(
dtype=tf.float32, shape=[None, None, 42]
),
"values": tf1.placeholder(dtype=tf.float32, shape=[None, None, 42]),
"bootstrap_value": tf1.placeholder(
dtype=tf.float32, shape=[None, 42]
),
}
else:
inputs_ = {
"log_rhos": Box(-1.0, 1.0, (8, 10, 1)).sample(),
"discounts": Box(-1.0, 1.0, (8, 10, 1)).sample(),
"rewards": Box(-1.0, 1.0, (8, 10, 42)).sample(),
"values": Box(-1.0, 1.0, (8, 10, 42)).sample(),
"bootstrap_value": Box(-1.0, 1.0, (10, 42)).sample(),
}
output = vtrace.from_importance_weights(**inputs_)
check(int(output.vs.shape[-1]), 42)
```
###Assistant : Checks support for additional dimensions in inputs.
|
92 | def test_task_fail_duration(app, admin_client, dag_maker, session):
with dag_maker() as dag:
op1 = BashOperator(task_id='fail', bash_command='exit 1')
op2 = BashOperator(task_id='success', bash_command='exit 0')
with pytest.raises(AirflowException):
op1.run()
op2.run()
op1_fails = (
session.query(TaskFail)
.filter(
TaskFail.task_id == 'fail',
TaskFail.dag_id == dag.dag_id,
)
.all()
)
op2_fails = (
session.query(TaskFail)
.filter(
TaskFail.task_id == 'success',
TaskFail.dag_id == dag.dag_id,
)
.all()
)
assert len(op1_fails) == 1
assert len(op2_fails) == 0
with unittest.mock.patch.object(app, 'dag_bag') as mocked_dag_bag:
mocked_dag_bag.get_dag.return_value = dag
resp = admin_client.get(f"dags/{dag.dag_id}/duration", follow_redirects=True)
html = resp.get_data().decode()
cumulative_chart = json.loads(re.search("data_cumlinechart=(.*);", html).group(1))
line_chart = json.loads(re.search("data_linechart=(.*);", html).group(1))
assert resp.status_code == 200
assert sorted(item["key"] for item in cumulative_chart) == ["fail", "success"]
assert sorted(item["key"] for item in line_chart) == ["fail", "success"]
| Task duration page with a TaskFail entry should render without error. | 11 | 104 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_task_fail_duration(app, admin_client, dag_maker, session):
with dag_maker() as dag:
op1 = BashOperator(task_id='fail', bash_command='exit 1')
op2 = BashOperator(task_id='success', bash_command='exit 0')
with pytest.raises(AirflowException):
op1.run()
op2.run()
op1_fails = (
session.query(TaskFail)
.filter(
TaskFail.task_id == 'fail',
TaskFail.dag_id == dag.dag_id,
)
.all()
)
op2_fails = (
session.query(TaskFail)
.filter(
TaskFail.task_id == 'success',
TaskFail.dag_id == dag.dag_id,
)
.all()
)
assert len(op1_fails) == 1
assert len(op2_fails) == 0
with unittest.mock.patch.object(app, 'dag_bag') as mocked_dag_bag:
mocked_dag_bag.get_dag.return_value = dag
resp = admin_client.get(f"dags/{dag.dag_id}/duration", follow_redirects=True)
html = resp.get_data().decode()
cumulative_chart = json.loads(re.search("data_cumlinechart=(.*);", html).group(1))
line_chart = json.loads(re.search("data_linechart=(.*);", html).group(1))
assert resp.status_code == 200
assert sorted(item["key"] for item in cumulative_chart) == ["fail", "success"]
assert sorted(item["key"] for item in line_chart) == ["fail", "success"]
```
###Assistant : Task duration page with a TaskFail entry should render without error.
|
93 | def test_all_users(self) -> None:
self._create_users(2)
channel = self.make_request(
"GET",
self.url + "?deactivated=true",
{},
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
self.assertEqual(3, len(channel.json_body["users"]))
self.assertEqual(3, channel.json_body["total"])
# Check that all fields are available
self._check_fields(channel.json_body["users"])
|
List all users, including deactivated users.
| 6 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_all_users(self) -> None:
self._create_users(2)
channel = self.make_request(
"GET",
self.url + "?deactivated=true",
{},
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
self.assertEqual(3, len(channel.json_body["users"]))
self.assertEqual(3, channel.json_body["total"])
# Check that all fields are available
self._check_fields(channel.json_body["users"])
```
###Assistant :
List all users, including deactivated users.
|
94 | def real_quick_ratio(self):
la, lb = len(self.a), len(self.b)
# can't have more matches than the number of elements in the
# shorter sequence
return _calculate_ratio(min(la, lb), la + lb)
__class_getitem__ = classmethod(GenericAlias)
| Return an upper bound on ratio() very quickly.
This isn't defined beyond that it is an upper bound on .ratio(), and
is faster to compute than either .ratio() or .quick_ratio().
| 30 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def real_quick_ratio(self):
la, lb = len(self.a), len(self.b)
# can't have more matches than the number of elements in the
# shorter sequence
return _calculate_ratio(min(la, lb), la + lb)
__class_getitem__ = classmethod(GenericAlias)
```
###Assistant : Return an upper bound on ratio() very quickly.
This isn't defined beyond that it is an upper bound on .ratio(), and
is faster to compute than either .ratio() or .quick_ratio().
|
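A quick usage sketch with the standard-library `difflib.SequenceMatcher`, which exposes this method, shows the documented ordering of the three ratios:
```Python
import difflib

sm = difflib.SequenceMatcher(None, "abcd", "bcde")
# real_quick_ratio() >= quick_ratio() >= ratio() always holds.
assert sm.real_quick_ratio() >= sm.quick_ratio() >= sm.ratio()
print(sm.real_quick_ratio(), sm.quick_ratio(), sm.ratio())  # 1.0 0.75 0.75
```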
95 | def test_unpublish_view_invalid_page_id(self):
# Request confirm unpublish page but with illegal page id
response = self.client.get(
reverse(
"wagtail_bulk_action",
args=(
"wagtailcore",
"page",
"unpublish",
),
)
)
# Check that the user received a 404 response
self.assertEqual(response.status_code, 404)
|
This tests that the unpublish view returns an error if the page id is invalid
| 15 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_unpublish_view_invalid_page_id(self):
# Request confirm unpublish page but with illegal page id
response = self.client.get(
reverse(
"wagtail_bulk_action",
args=(
"wagtailcore",
"page",
"unpublish",
),
)
)
# Check that the user received a 404 response
self.assertEqual(response.status_code, 404)
```
###Assistant :
This tests that the unpublish view returns an error if the page id is invalid
|
96 | def test_warn_report():
fn = report
with warnings.catch_warnings(record=True) as record:
# Ignore Deprecation warnings.
warnings.filterwarnings("ignore", category=DeprecationWarning)
assert not fn(dict())
assert fn.__name__ in record[0].message.args[0]
reset_log_once_with_str(fn.__name__)
| Checks that calling the session.report function outside of a session raises a warning. | 10 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_warn_report():
fn = report
with warnings.catch_warnings(record=True) as record:
# Ignore Deprecation warnings.
warnings.filterwarnings("ignore", category=DeprecationWarning)
assert not fn(dict())
assert fn.__name__ in record[0].message.args[0]
reset_log_once_with_str(fn.__name__)
```
###Assistant : Checks that calling the session.report function outside of a session raises a warning.
|
97 | def forward(self, feats, img_metas):
batch_size = len(img_metas)
mask_features, multi_scale_memorys = self.pixel_decoder(feats)
# multi_scale_memorys (from low resolution to high resolution)
decoder_inputs = []
decoder_positional_encodings = []
for i in range(self.num_transformer_feat_level):
decoder_input = self.decoder_input_projs[i](multi_scale_memorys[i])
# shape (batch_size, c, h, w) -> (h*w, batch_size, c)
decoder_input = decoder_input.flatten(2).permute(2, 0, 1)
level_embed = self.level_embed.weight[i].view(1, 1, -1)
decoder_input = decoder_input + level_embed
# shape (batch_size, c, h, w) -> (h*w, batch_size, c)
mask = decoder_input.new_zeros(
(batch_size, ) + multi_scale_memorys[i].shape[-2:],
dtype=torch.bool)
decoder_positional_encoding = self.decoder_positional_encoding(
mask)
decoder_positional_encoding = decoder_positional_encoding.flatten(
2).permute(2, 0, 1)
decoder_inputs.append(decoder_input)
decoder_positional_encodings.append(decoder_positional_encoding)
# shape (num_queries, c) -> (num_queries, batch_size, c)
query_feat = self.query_feat.weight.unsqueeze(1).repeat(
(1, batch_size, 1))
query_embed = self.query_embed.weight.unsqueeze(1).repeat(
(1, batch_size, 1))
cls_pred_list = []
mask_pred_list = []
cls_pred, mask_pred, attn_mask = self.forward_head(
query_feat, mask_features, multi_scale_memorys[0].shape[-2:])
cls_pred_list.append(cls_pred)
mask_pred_list.append(mask_pred)
for i in range(self.num_transformer_decoder_layers):
level_idx = i % self.num_transformer_feat_level
# if a mask is all True(all background), then set it all False.
attn_mask[torch.where(
attn_mask.sum(-1) == attn_mask.shape[-1])] = False
# cross_attn + self_attn
layer = self.transformer_decoder.layers[i]
attn_masks = [attn_mask, None]
query_feat = layer(
query=query_feat,
key=decoder_inputs[level_idx],
value=decoder_inputs[level_idx],
query_pos=query_embed,
key_pos=decoder_positional_encodings[level_idx],
attn_masks=attn_masks,
query_key_padding_mask=None,
# here we do not apply masking on padded region
key_padding_mask=None)
cls_pred, mask_pred, attn_mask = self.forward_head(
query_feat, mask_features, multi_scale_memorys[
(i + 1) % self.num_transformer_feat_level].shape[-2:])
cls_pred_list.append(cls_pred)
mask_pred_list.append(mask_pred)
return cls_pred_list, mask_pred_list
| Forward function.
Args:
feats (list[Tensor]): Multi-scale features from the
upstream network, each is a 4D-tensor.
img_metas (list[dict]): List of image information.
Returns:
tuple: A tuple containing two elements.
- cls_pred_list (list[Tensor]): Classification logits \
for each decoder layer. Each is a 3D-tensor with shape \
(batch_size, num_queries, cls_out_channels). \
Note `cls_out_channels` should include the background class.
- mask_pred_list (list[Tensor]): Mask logits for each \
decoder layer. Each with shape (batch_size, num_queries, \
h, w).
| 73 | 201 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def forward(self, feats, img_metas):
batch_size = len(img_metas)
mask_features, multi_scale_memorys = self.pixel_decoder(feats)
# multi_scale_memorys (from low resolution to high resolution)
decoder_inputs = []
decoder_positional_encodings = []
for i in range(self.num_transformer_feat_level):
decoder_input = self.decoder_input_projs[i](multi_scale_memorys[i])
# shape (batch_size, c, h, w) -> (h*w, batch_size, c)
decoder_input = decoder_input.flatten(2).permute(2, 0, 1)
level_embed = self.level_embed.weight[i].view(1, 1, -1)
decoder_input = decoder_input + level_embed
# shape (batch_size, c, h, w) -> (h*w, batch_size, c)
mask = decoder_input.new_zeros(
(batch_size, ) + multi_scale_memorys[i].shape[-2:],
dtype=torch.bool)
decoder_positional_encoding = self.decoder_positional_encoding(
mask)
decoder_positional_encoding = decoder_positional_encoding.flatten(
2).permute(2, 0, 1)
decoder_inputs.append(decoder_input)
decoder_positional_encodings.append(decoder_positional_encoding)
# shape (num_queries, c) -> (num_queries, batch_size, c)
query_feat = self.query_feat.weight.unsqueeze(1).repeat(
(1, batch_size, 1))
query_embed = self.query_embed.weight.unsqueeze(1).repeat(
(1, batch_size, 1))
cls_pred_list = []
mask_pred_list = []
cls_pred, mask_pred, attn_mask = self.forward_head(
query_feat, mask_features, multi_scale_memorys[0].shape[-2:])
cls_pred_list.append(cls_pred)
mask_pred_list.append(mask_pred)
for i in range(self.num_transformer_decoder_layers):
level_idx = i % self.num_transformer_feat_level
# if a mask is all True(all background), then set it all False.
attn_mask[torch.where(
attn_mask.sum(-1) == attn_mask.shape[-1])] = False
# cross_attn + self_attn
layer = self.transformer_decoder.layers[i]
attn_masks = [attn_mask, None]
query_feat = layer(
query=query_feat,
key=decoder_inputs[level_idx],
value=decoder_inputs[level_idx],
query_pos=query_embed,
key_pos=decoder_positional_encodings[level_idx],
attn_masks=attn_masks,
query_key_padding_mask=None,
# here we do not apply masking on padded region
key_padding_mask=None)
cls_pred, mask_pred, attn_mask = self.forward_head(
query_feat, mask_features, multi_scale_memorys[
(i + 1) % self.num_transformer_feat_level].shape[-2:])
cls_pred_list.append(cls_pred)
mask_pred_list.append(mask_pred)
return cls_pred_list, mask_pred_list
```
###Assistant : Forward function.
Args:
feats (list[Tensor]): Multi-scale features from the
upstream network, each is a 4D-tensor.
img_metas (list[dict]): List of image information.
Returns:
tuple: A tuple containing two elements.
- cls_pred_list (list[Tensor]): Classification logits \
for each decoder layer. Each is a 3D-tensor with shape \
(batch_size, num_queries, cls_out_channels). \
Note `cls_out_channels` should include the background class.
- mask_pred_list (list[Tensor]): Mask logits for each \
decoder layer. Each with shape (batch_size, num_queries, \
h, w).
|
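The shape comments above carry most of the bookkeeping; a tiny hedged sketch of just that flatten/permute step (standalone tensors, not the real decoder inputs) is:
```Python
import torch

batch_size, channels, height, width = 2, 256, 16, 16
feature_map = torch.randn(batch_size, channels, height, width)

# (batch_size, c, h, w) -> (h*w, batch_size, c), the layout the
# transformer decoder layers expect for keys/values.
tokens = feature_map.flatten(2).permute(2, 0, 1)
assert tokens.shape == (height * width, batch_size, channels)

# The matching key-padding mask starts all-False (nothing padded).
mask = feature_map.new_zeros((batch_size, height, width), dtype=torch.bool)
assert mask.shape == (batch_size, height, width)
```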
98 | def formfield_for_manytomany(self, db_field, request, **kwargs):
# If it uses an intermediary model that isn't auto created, don't show
# a field in admin.
if not db_field.remote_field.through._meta.auto_created:
return None
db = kwargs.get("using")
if "widget" not in kwargs:
autocomplete_fields = self.get_autocomplete_fields(request)
if db_field.name in autocomplete_fields:
kwargs["widget"] = AutocompleteSelectMultiple(
db_field,
self.admin_site,
using=db,
)
elif db_field.name in self.raw_id_fields:
kwargs["widget"] = widgets.ManyToManyRawIdWidget(
db_field.remote_field,
self.admin_site,
using=db,
)
elif db_field.name in [*self.filter_vertical, *self.filter_horizontal]:
kwargs["widget"] = widgets.FilteredSelectMultiple(
db_field.verbose_name, db_field.name in self.filter_vertical
)
if "queryset" not in kwargs:
queryset = self.get_field_queryset(db, db_field, request)
if queryset is not None:
kwargs["queryset"] = queryset
form_field = db_field.formfield(**kwargs)
if isinstance(form_field.widget, SelectMultiple) and not isinstance(
form_field.widget, (CheckboxSelectMultiple, AutocompleteSelectMultiple)
):
msg = _(
"Hold down “Control”, or “Command” on a Mac, to select more than one."
)
help_text = form_field.help_text
form_field.help_text = (
format_lazy("{} {}", help_text, msg) if help_text else msg
)
return form_field
|
Get a form Field for a ManyToManyField.
| 7 | 139 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def formfield_for_manytomany(self, db_field, request, **kwargs):
# If it uses an intermediary model that isn't auto created, don't show
# a field in admin.
if not db_field.remote_field.through._meta.auto_created:
return None
db = kwargs.get("using")
if "widget" not in kwargs:
autocomplete_fields = self.get_autocomplete_fields(request)
if db_field.name in autocomplete_fields:
kwargs["widget"] = AutocompleteSelectMultiple(
db_field,
self.admin_site,
using=db,
)
elif db_field.name in self.raw_id_fields:
kwargs["widget"] = widgets.ManyToManyRawIdWidget(
db_field.remote_field,
self.admin_site,
using=db,
)
elif db_field.name in [*self.filter_vertical, *self.filter_horizontal]:
kwargs["widget"] = widgets.FilteredSelectMultiple(
db_field.verbose_name, db_field.name in self.filter_vertical
)
if "queryset" not in kwargs:
queryset = self.get_field_queryset(db, db_field, request)
if queryset is not None:
kwargs["queryset"] = queryset
form_field = db_field.formfield(**kwargs)
if isinstance(form_field.widget, SelectMultiple) and not isinstance(
form_field.widget, (CheckboxSelectMultiple, AutocompleteSelectMultiple)
):
msg = _(
"Hold down “Control”, or “Command” on a Mac, to select more than one."
)
help_text = form_field.help_text
form_field.help_text = (
format_lazy("{} {}", help_text, msg) if help_text else msg
)
return form_field
```
###Assistant :
Get a form Field for a ManyToManyField.
|
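In practice this hook is usually met from the other side, by overriding it in a `ModelAdmin`; a hedged sketch with hypothetical `Book`/`Author` models and an assumed `is_active` field:
```Python
from django.contrib import admin

from myapp.models import Author, Book  # hypothetical app and models


@admin.register(Book)
class BookAdmin(admin.ModelAdmin):
    def formfield_for_manytomany(self, db_field, request, **kwargs):
        # Restrict the selectable authors for the hypothetical "authors" m2m field.
        if db_field.name == "authors":
            kwargs["queryset"] = Author.objects.filter(is_active=True)
        return super().formfield_for_manytomany(db_field, request, **kwargs)
```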
99 | def test_expiry_logic(self) -> None:
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion[
"1"
] = 100000
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion[
"2"
] = 200000
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion[
"3"
] = 300000
self.event_creator_handler._expire_rooms_to_exclude_from_dummy_event_insertion()
# All entries within time frame
self.assertEqual(
len(
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion
),
3,
)
# Oldest room to expire
self.pump(1.01)
self.event_creator_handler._expire_rooms_to_exclude_from_dummy_event_insertion()
self.assertEqual(
len(
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion
),
2,
)
# All rooms to expire
self.pump(2)
self.assertEqual(
len(
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion
),
0,
)
| Simple test to ensure that _expire_rooms_to_exclude_from_dummy_event_insertion()
expires old entries correctly.
| 10 | 57 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_expiry_logic(self) -> None:
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion[
"1"
] = 100000
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion[
"2"
] = 200000
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion[
"3"
] = 300000
self.event_creator_handler._expire_rooms_to_exclude_from_dummy_event_insertion()
# All entries within time frame
self.assertEqual(
len(
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion
),
3,
)
# Oldest room to expire
self.pump(1.01)
self.event_creator_handler._expire_rooms_to_exclude_from_dummy_event_insertion()
self.assertEqual(
len(
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion
),
2,
)
# All rooms to expire
self.pump(2)
self.assertEqual(
len(
self.event_creator_handler._rooms_to_exclude_from_dummy_event_insertion
),
0,
)
```
###Assistant : Simple test to ensure that _expire_rooms_to_exclude_from_dummy_event_insertion()
expires old entries correctly.
|