Dataset schema (22 columns):

column           dtype    values / string lengths
n_words          int64    3 to 1.95k
n_ast_errors     int64    0 to 2
complexity       int64    1 to 151
nloc             int64    2 to 546
path             string   lengths 8 to 125
id               int64    280 to 339k
commit_message   string   lengths 3 to 18.1k
repo             string   lengths 3 to 28
ast_levels       int64    4 to 28
language         string   1 distinct value (Python)
vocab_size       int64    3 to 677
file_name        string   lengths 5 to 67
code             string   lengths 101 to 24k
commit_id        string   length 40 (fixed)
ast_errors       string   lengths 0 to 2.76k
token_counts     int64    7 to 3.77k
url              string   lengths 31 to 61
n_whitespaces    int64    4 to 13.9k
random_cut       string   lengths 21 to 13.9k
n_identifiers    int64    1 to 157
n_ast_nodes      int64    10 to 3.6k
fun_name         string   lengths 3 to 72
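The column summary above is enough to filter the sample rows programmatically. A minimal sketch, assuming the records are exported as newline-delimited JSON with exactly these field names; the file name rows.jsonl is a hypothetical placeholder, not something named on this page:

import json

# Load one record per line; each dict uses the column names listed above.
with open("rows.jsonl", encoding="utf-8") as fh:
    rows = [json.loads(line) for line in fh]

# Example: keep only parse-clean, low-complexity Python functions.
selected = [
    r for r in rows
    if r["language"] == "Python" and r["n_ast_errors"] == 0 and r["complexity"] <= 5
]

for r in selected[:3]:
    print(r["repo"], r["path"], r["fun_name"], r["nloc"], r["token_counts"])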
Sample rows
----
path: rllib/__init__.py | id: 137,459 | n_words: 37 | n_ast_errors: 0 | complexity: 3 | nloc: 6
commit_message:
[RLlib] Deprecate (delete) `contrib` folder. (#30992)
repo: ray | language: Python | ast_levels: 12 | vocab_size: 34 | file_name: __init__.py
code:
def _register_all(): from ray.rllib.algorithms.registry import ALGORITHMS, _get_algorithm_class for key, get_trainable_class_and_config in ALGORITHMS.items(): register_trainable(key, get_trainable_class_and_config()[0]) for key in ["__fake", "__sigmoid_fake_data", "__parameter_tuning"]: register_trainable(key, _get_algorithm_class(key)) _setup_logger() usage_lib.record_library_usage("rllib") __all__ = [ "Policy", "TFPolicy", "TorchPolicy", "RolloutWorker", "SampleBatch", "BaseEnv", "MultiAgentEnv", "VectorEnv", "ExternalEnv", ]
commit_id: 64d744b4750b749cede563b04c5d32396470a236 | url: https://github.com/ray-project/ray.git | token_counts: 58 | n_whitespaces: 82
random_cut:
def _register_all(): from ray.rllib.algorithms.registry import ALGORITHMS, _get_algorithm_class for key, get_trainable_class_and_config in ALGORITHMS.items(): register_trainable(key, get_trainable_class_and_config()[0]) for key in ["__fake", "__sigmoid_fake_data", "__parameter_tuning"]: register_trainable(key, _get_algorithm_class(key)) _setup_logger() usage_lib.record_library_usage("rllib") __all__ = [ "Policy", "TFPolic
n_identifiers: 15 | n_ast_nodes: 153 | fun_name: _register_all
----
path: jax/_src/numpy/ufuncs.py | id: 119,763 | n_words: 46 | n_ast_errors: 1 | complexity: 2 | nloc: 5
commit_message:
lax_numpy.py: factor ufuncs into their own private submodule Re-lands part of #9724 PiperOrigin-RevId: 434629548
repo: jax | language: Python | ast_levels: 14 | vocab_size: 39 | file_name: ufuncs.py
code:
def _sinc_maclaurin(k, x): # compute the kth derivative of x -> sin(x)/x evaluated at zero (since we # compute the monomial term in the jvp rule) if k % 2: return lax.full_like(x, 0) else: return lax.full_like(x, (-1) ** (k // 2) / (k + 1)) @_sinc_maclaurin.defjvp
commit_id: 6355fac8822bced4bfa657187a7284477f373c52 | url: https://github.com/google/jax.git | token_counts: 38 | n_whitespaces: 54
ast_errors: @_sinc_maclaurin.defjvp
random_cut:
def _sinc_maclaurin(k, x): # compute the kth derivative of x -> sin(x)/x evaluated at zero (since we # compute the monomial term in the jvp rule) if k % 2: return lax.full_like(x, 0) else: return lax.full_like(
n_identifiers: 6 | n_ast_nodes: 81 | fun_name: _sinc_maclaurin
----
path: d2l/torch.py | id: 158,403 | n_words: 42 | n_ast_errors: 0 | complexity: 2 | nloc: 8
commit_message:
[PaddlePaddle] Merge master into Paddle branch (#1186) * change 15.2 title in chinese version (#1109) change title ’15.2. 情感分析:使用递归神经网络‘ to ’15.2. 情感分析:使用循环神经网络‘ * 修改部分语义表述 (#1105) * Update r0.17.5 (#1120) * Bump versions in installation * 94行typo: (“bert.mall”)->(“bert.small”) (#1129) * line 313: "bert.mall" -> "bert.small" (#1130) * fix: update language as native reader (#1114) * Fix the translation of "stride" (#1115) * Update index.md (#1118) 修改部分语义表述 * Update self-attention-and-positional-encoding.md (#1133) 依照本书的翻译习惯,将pooling翻译成汇聚 * maybe a comment false (#1149) * maybe a little false * maybe a little false * A minor bug in the rcnn section (Chinese edition) (#1148) * Update bert.md (#1137) 一个笔误 # 假设batch_size=2,num_pred_positions=3 # 那么batch_idx应该是np.repeat( [0,1], 3 ) = [0,0,0,1,1,1] * Update calculus.md (#1135) * fix typo in git documentation (#1106) * fix: Update the Chinese translation in lr-scheduler.md (#1136) * Update lr-scheduler.md * Update chapter_optimization/lr-scheduler.md Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * fix translation for kaggle-house-price.md (#1107) * fix translation for kaggle-house-price.md * fix translation for kaggle-house-price.md Signed-off-by: sunhaizhou <haizhou.sun@smartmore.com> * Update weight-decay.md (#1150) * Update weight-decay.md 关于“k多选d”这一部分,中文读者使用排列组合的方式可能更容易理解 关于“给定k个变量,阶数的个数为...”这句话是有歧义的,不是很像中国话,应该是说“阶数为d的项的个数为...”。 并增加了一句对“因此即使是阶数上的微小变化,比如从$2$到$3$,也会显著增加我们模型的复杂性。”的解释 解释为何会增加复杂性以及为何需要细粒度工具。 * Update chapter_multilayer-perceptrons/weight-decay.md yep Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * Update chapter_multilayer-perceptrons/weight-decay.md yep Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * Fix a spelling error (#1161) * Update gru.md (#1152) The key distinction between vanilla RNNs and GRUs is that the latter support gating of the hidden state. 翻译错误 * Unify the function naming (#1113) Unify naming of the function 'init_xavier()'. * Update mlp-concise.md (#1166) * Update mlp-concise.md 语句不通顺 * Update environment.md 语序异常 * Update config.ini * fix the imprecise description (#1168) Co-authored-by: yuande <yuande> * fix typo in chapter_natural-language-processing-pretraining/glove.md (#1175) * Fix some typos. (#1163) * Update batch-norm.md (#1170) fixing typos u->x in article * Update linear-regression.md (#1090) We invoke Stuart Russell and Peter Norvig who, in their classic AI text book Artificial Intelligence: A Modern Approach :cite:Russell.Norvig.2016, pointed out that 原译文把who也直接翻译出来了。 * Update mlp.md (#1117) * Update mlp.md 修改部分语义表述 * Update chapter_multilayer-perceptrons/mlp.md Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * Update chapter_multilayer-perceptrons/mlp.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * Correct a translation error. (#1091) * Correct a translation error. 
* Update chapter_computer-vision/image-augmentation.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Update aws.md (#1121) * Update aws.md * Update chapter_appendix-tools-for-deep-learning/aws.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Update image-augmentation.md (#1093) * Update anchor.md (#1088) fix a minor issue in code * Update anchor.md * Update image-augmentation.md * fix typo and improve translation in chapter_linear-networks\softmax-regression.md (#1087) * Avoid `torch.meshgrid` user warning (#1174) Avoids the following user warning: ```python ~/anaconda3/envs/torch/lib/python3.10/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2228.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] ``` * bump to 2.0.0-beta1 * Update sequence.md * bump beta1 on readme * Add latex code block background to config * BLD: Bump python support version 3.9 (#1183) * BLD: Bump python support version 3.9 * Remove clear and manually downgrade protobuf 4.21.4 to 3.19.4 * BLD: Bump torch and tensorflow * Update Jenkinsfile * Update chapter_installation/index.md * Update chapter_installation/index.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Update config.ini * Update INFO.md * Update INFO.md * Drop mint to show code in pdf, use Inconsolata font, apply code cell color (#1187) * resolve the conflicts * revise from publisher (#1089) * revise from publisher * d2l api * post_latex * revise from publisher * revise ch11 * Delete d2l-Copy1.bib * clear cache * rm d2lbook clear * debug anchor * keep original d2l doc Co-authored-by: Ubuntu <ubuntu@ip-172-31-12-66.us-west-2.compute.internal> Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> Co-authored-by: Aston Zhang <asv325@gmail.com> * 重复语句 (#1188) Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Improve expression for chapter_preliminaries/pandas.md (#1184) * Update pandas.md * Improve expression * Improve expression * Update chapter_preliminaries/pandas.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Improce expression for chapter_preliminaries/linear-algebra.md (#1185) * Improce expression * Improve code comments * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Fix multibox_detection bugs * Update d2l to 0.17.5 version * restore older version * Upgrade pandas * change to python3.8 * Test warning log * relocate warning log * test logs filtering * Update gru.md * Add DeprecationWarning filter * Test warning log * Update attention mechanisms & computational performance * Update multilayer perceptron& linear & convolution networks & computer vision * Update recurrent&optimition&nlp pretraining & nlp applications * ignore warnings * Update index.md * Update linear networks * Update multilayer perceptrons&deep learning computation * Update preliminaries * Check and Add warning filter * Update kaggle-cifar10.md * Update object-detection-dataset.md * Update ssd.md fcn.md * Update hybridize.md * Update hybridize.md Signed-off-by: sunhaizhou <haizhou.sun@smartmore.com> Co-authored-by: 
zhou201505013 <39976863+zhou201505013@users.noreply.github.com> Co-authored-by: Xinwei Liu <xinzone@outlook.com> Co-authored-by: Anirudh Dagar <anirudhdagar6@gmail.com> Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> Co-authored-by: hugo_han <57249629+HugoHann@users.noreply.github.com> Co-authored-by: gyro永不抽风 <1247006353@qq.com> Co-authored-by: CanChengZheng <zcc550169544@163.com> Co-authored-by: linlin <jajupmochi@gmail.com> Co-authored-by: iuk <liukun0104@gmail.com> Co-authored-by: yoos <49556860+liyunlongaaa@users.noreply.github.com> Co-authored-by: Mr. Justice Lawrence John Wargrave <65226618+RUCWargrave@users.noreply.github.com> Co-authored-by: Chiyuan Fu <fuchiyuan2019@outlook.com> Co-authored-by: Sunhuashan <48636870+Sunhuashan@users.noreply.github.com> Co-authored-by: Haiker Sun <haizhou.uestc2011@gmail.com> Co-authored-by: Ming Liu <akira.liu@njnu.edu.cn> Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> Co-authored-by: silenceZheng66 <13754430639@163.com> Co-authored-by: Wenchao Yan <56541797+YWonchall@users.noreply.github.com> Co-authored-by: Kiki2049 <55939997+Kiki2049@users.noreply.github.com> Co-authored-by: Krahets <krahets@163.com> Co-authored-by: friedmainfunction <73703265+friedmainfunction@users.noreply.github.com> Co-authored-by: Jameson <miraclecome@gmail.com> Co-authored-by: P. Yao <12227516+YaoPengCN@users.noreply.github.com> Co-authored-by: Yulv-git <34329208+Yulv-git@users.noreply.github.com> Co-authored-by: Liu,Xiao <45966993+liuxiao916@users.noreply.github.com> Co-authored-by: YIN, Gang <1246410+yingang@users.noreply.github.com> Co-authored-by: Joe-HZ <58297431+Joe-HZ@users.noreply.github.com> Co-authored-by: lybloveyou <102609904+lybloveyou@users.noreply.github.com> Co-authored-by: VigourJiang <jiangfuqiang154@163.com> Co-authored-by: zxhd863943427 <74853597+zxhd863943427@users.noreply.github.com> Co-authored-by: LYF <27893441+liyufan@users.noreply.github.com> Co-authored-by: Aston Zhang <asv325@gmail.com> Co-authored-by: xiaotinghe <xiaotih@amazon.com> Co-authored-by: Ubuntu <ubuntu@ip-172-31-12-66.us-west-2.compute.internal> Co-authored-by: Holly-Max <60691735+Holly-Max@users.noreply.github.com> Co-authored-by: HinGwenWoong <peterhuang0323@qq.com> Co-authored-by: Shuai Zhang <cheungdaven@gmail.com>
repo: d2l-zh | language: Python | ast_levels: 11 | vocab_size: 35 | file_name: torch.py
code:
def evaluate_loss(net, data_iter, loss): metric = d2l.Accumulator(2) # Sum of losses, no. of examples for X, y in data_iter: out = net(X) y = d2l.reshape(y, out.shape) l = loss(out, y) metric.add(d2l.reduce_sum(l), d2l.size(l)) return metric[0] / metric[1] DATA_HUB = dict() DATA_URL = 'http://d2l-data.s3-accelerate.amazonaws.com/'
commit_id: b64b41d8c1ac23c43f7a4e3f9f6339d6f0012ab2 | url: https://github.com/d2l-ai/d2l-zh.git | token_counts: 79 | n_whitespaces: 81
random_cut:
def evaluate_loss(net, data_iter, loss): metric = d2l.Accumulator(2) # Sum of losses, no. of examples for X, y in data_iter: out = net(X) y = d2l.reshape(y, out.shape) l = loss(out, y) metric.add(d2l.reduce_sum(l), d2l.size(l)) return metric[0] / metric[1] DATA_HUB = dict() DATA_URL = 'http://d2l-data.s3-accelerate.amazonaws.com/'
n_identifiers: 19 | n_ast_nodes: 139 | fun_name: evaluate_loss
----
path: sympy/tensor/tensor.py | id: 197,117 | n_words: 7 | n_ast_errors: 0 | complexity: 1 | nloc: 4
commit_message:
Update the various tensor deprecations
repo: sympy | language: Python | ast_levels: 10 | vocab_size: 7 | file_name: tensor.py
code:
def __iter__(self): deprecate_data() with ignore_warnings(SymPyDeprecationWarning): return self.data.__iter__()
commit_id: cba899d4137b0b65f6850120ee42cd4fcd4f9dbf | url: https://github.com/sympy/sympy.git | token_counts: 22 | n_whitespaces: 31
random_cut:
def __iter__(self): deprecate_data() with ignore_warnings(SymPyDeprecationWarning
n_identifiers: 6 | n_ast_nodes: 40 | fun_name: __iter__
----
path: modules/image/Image_editing/super_resolution/swinir_l_real_sr_x4/test.py | id: 51,961 | n_words: 5 | n_ast_errors: 0 | complexity: 1 | nloc: 2
commit_message:
add swinir_l_real_sr_x4 (#2076) * git add swinir_l_real_sr_x4 * fix typo * fix typo Co-authored-by: chenjian <chenjian26@baidu.com>
repo: PaddleHub | language: Python | ast_levels: 10 | vocab_size: 5 | file_name: test.py
code:
def test_real_sr4(self): self.assertRaises(Exception, self.module.real_sr, image=['tests/test.jpg'])
commit_id: 2e373966a7fd3119c205350fb14d0b7bfe74185d | url: https://github.com/PaddlePaddle/PaddleHub.git | token_counts: 23 | n_whitespaces: 11
random_cut:
def test_real_sr4(self):
n_identifiers: 7 | n_ast_nodes: 37 | fun_name: test_real_sr4
----
path: pandas/core/groupby/groupby.py | id: 167,948 | n_words: 44 | n_ast_errors: 0 | complexity: 3 | nloc: 19
commit_message:
BUG: numeric_only with axis=1 in DataFrame.corrwith and DataFrameGroupBy.cummin/max (#47724) * BUG: DataFrame.corrwith and DataFrameGroupBy.cummin/cummax with numeric_only=True * test improvements
repo: pandas | language: Python | ast_levels: 12 | vocab_size: 36 | file_name: groupby.py
code:
def cummax(self, axis=0, numeric_only=False, **kwargs) -> NDFrameT: skipna = kwargs.get("skipna", True) if axis != 0: f = lambda x: np.maximum.accumulate(x, axis) numeric_only_bool = self._resolve_numeric_only("cummax", numeric_only, axis) obj = self._selected_obj if numeric_only_bool: obj = obj._get_numeric_data() return self._python_apply_general(f, obj, is_transform=True) return self._cython_transform( "cummax", numeric_only=numeric_only, skipna=skipna )
commit_id: ad7dcef6f0dbdbb14240dd13db51f4d8892ad808 | url: https://github.com/pandas-dev/pandas.git | token_counts: 104 | n_whitespaces: 160
random_cut:
def cummax(self, axis=0, numeric_only=False, **kwargs) -> NDFrameT: skipna = kwargs.get("skipna", True) if axis != 0: f = lambda x: np.maximum.accumulate(x, axis) numeric_only_bool = self.
n_identifiers: 21 | n_ast_nodes: 163 | fun_name: cummax
----
path: jax/tools/colab_tpu.py | id: 122,702 | n_words: 50 | n_ast_errors: 0 | complexity: 2 | nloc: 10
commit_message:
Update values for release 0.4.1 PiperOrigin-RevId: 494889744
repo: jax | language: Python | ast_levels: 13 | vocab_size: 43 | file_name: colab_tpu.py
code:
def setup_tpu(tpu_driver_version='tpu_driver_20221212'): global TPU_DRIVER_MODE if not TPU_DRIVER_MODE: colab_tpu_addr = os.environ['COLAB_TPU_ADDR'].split(':')[0] url = f'http://{colab_tpu_addr}:8475/requestversion/{tpu_driver_version}' requests.post(url) TPU_DRIVER_MODE = 1 # The following is required to use TPU Driver as JAX's backend. config.FLAGS.jax_xla_backend = "tpu_driver" config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR'] # TODO(skyewm): Remove this after SPMD is supported for colab tpu. config.update('jax_array', False)
commit_id: c4d590b1b640cc9fcfdbe91bf3fe34c47bcde917 | url: https://github.com/google/jax.git | token_counts: 72 | n_whitespaces: 70
random_cut:
def setup_tpu(tpu_driver_version='tpu_driver_20221212'): global TPU_DRIVER_MODE if not TPU_DRIVER_MODE: colab_tpu_addr = os.environ['COLAB_TPU_ADDR'].split(':')[0] url = f'http://{colab_tpu_addr}:8475/requestversion/{tpu_driver_version}' requests.post(url) TPU_DRIVER_MODE = 1 # The following is required to use TPU Driver as JAX's backend. config.FLAGS.jax_xla_backend = "tpu_driver" config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR'] # TODO(skyewm): Remove this after SPMD is supported for colab tpu. config.
n_identifiers: 15 | n_ast_nodes: 140 | fun_name: setup_tpu
----
path: dashboard/modules/job/tests/test_cli_integration.py | id: 145,653 | n_words: 48 | n_ast_errors: 0 | complexity: 1 | nloc: 12
commit_message:
[Job submission] Add `list_jobs` API (#22679) Adds an API to the REST server, the SDK, and the CLI for listing all jobs that have been submitted, along with their information. Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
repo: ray | language: Python | ast_levels: 12 | vocab_size: 32 | file_name: test_cli_integration.py
code:
def test_list(self, ray_start_stop): _run_cmd("ray job submit --job-id='hello_id' -- echo hello") runtime_env = {"env_vars": {"TEST": "123"}} _run_cmd( "ray job submit --job-id='hi_id' " f"--runtime-env-json='{json.dumps(runtime_env)}' -- echo hi" ) stdout, _ = _run_cmd("ray job list") assert "JobInfo" in stdout assert "123" in stdout assert "hello_id" in stdout assert "hi_id" in stdout
commit_id: 1752f17c6d6fceac3d7902d3220a756b8424b7da | url: https://github.com/ray-project/ray.git | token_counts: 52 | n_whitespaces: 132
random_cut:
def test_list(self, ray_start_stop): _run_cmd("ray job submit --job-id='hello_id' -- echo hello") runtime_env = {"env_vars": {"TEST": "123"}} _run_cmd( "ray job submit --job-id='hi_id' " f"--runtime-env-json='{json.dumps(runtime_env)}' -- echo hi" ) stdout, _ = _run_
n_identifiers: 9 | n_ast_nodes: 115 | fun_name: test_list
----
path: python3.10.4/Lib/datetime.py | id: 222,394 | n_words: 23 | n_ast_errors: 0 | complexity: 2 | nloc: 7
commit_message:
add python 3.10.4 for windows
repo: XX-Net | language: Python | ast_levels: 9 | vocab_size: 18 | file_name: datetime.py
code:
def isoformat(self, timespec='auto'): s = _format_time(self._hour, self._minute, self._second, self._microsecond, timespec) tz = self._tzstr() if tz: s += tz return s __str__ = isoformat
commit_id: 8198943edd73a363c266633e1aa5b2a9e9c9f526 | url: https://github.com/XX-net/XX-Net.git | token_counts: 47 | n_whitespaces: 97
random_cut:
def isoformat(self, timespec='auto'): s = _format_time(self._hour, self._minute, self._second,
n_identifiers: 12 | n_ast_nodes: 80 | fun_name: isoformat
----
path: pandas/tests/reshape/concat/test_index.py | id: 165,932 | n_words: 37 | n_ast_errors: 0 | complexity: 1 | nloc: 6
commit_message:
TST: add validation checks on levels keyword from pd.concat (#46654)
repo: pandas | language: Python | ast_levels: 13 | vocab_size: 33 | file_name: test_index.py
code:
def test_concat_with_duplicated_levels(self): # keyword levels should be unique df1 = DataFrame({"A": [1]}, index=["x"]) df2 = DataFrame({"A": [1]}, index=["y"]) msg = r"Level values not unique: \['x', 'y', 'y'\]" with pytest.raises(ValueError, match=msg): concat([df1, df2], keys=["x", "y"], levels=[["x", "y", "y"]])
commit_id: 361021b56f3159afb71d690fac3a1f3b381b0da6 | url: https://github.com/pandas-dev/pandas.git | token_counts: 85 | n_whitespaces: 82
random_cut:
def test_concat_with_duplicated_levels(self): # keyword levels should be unique df1 = DataFrame({"A": [1]}, index=["x"]) df2 = DataFrame({"A": [1]}, index=["y"]) msg = r"Level values not unique: \['x', 'y', 'y'\]" with pytest.raises(ValueError, match=msg): concat([df1, df2], keys=["x", "y"], levels=[["x", "y", "y"
n_identifiers: 14 | n_ast_nodes: 146 | fun_name: test_concat_with_duplicated_levels
----
path: wagtail/search/index.py | id: 75,598 | n_words: 13 | n_ast_errors: 0 | complexity: 3 | nloc: 6
commit_message:
Reformat with black
repo: wagtail | language: Python | ast_levels: 11 | vocab_size: 11 | file_name: index.py
code:
def class_is_indexed(cls): return ( issubclass(cls, Indexed) and issubclass(cls, models.Model) and not cls._meta.abstract )
commit_id: d10f15e55806c6944827d801cd9c2d53f5da4186 | url: https://github.com/wagtail/wagtail.git | token_counts: 30 | n_whitespaces: 39
random_cut:
def class_is_indexed(cls): return ( issubclass(cls,
n_identifiers: 8 | n_ast_nodes: 46 | fun_name: class_is_indexed
----
path: python3.10.4/Lib/encodings/bz2_codec.py | id: 223,903 | n_words: 10 | n_ast_errors: 0 | complexity: 1 | nloc: 3
commit_message:
add python 3.10.4 for windows
repo: XX-Net | language: Python | ast_levels: 8 | vocab_size: 10 | file_name: bz2_codec.py
code:
def bz2_encode(input, errors='strict'): assert errors == 'strict' return (bz2.compress(input), len(input))
commit_id: 8198943edd73a363c266633e1aa5b2a9e9c9f526 | url: https://github.com/XX-net/XX-Net.git | token_counts: 27 | n_whitespaces: 15
random_cut:
def bz2_encode(input, errors='strict'): assert errors == 's
n_identifiers: 6 | n_ast_nodes: 45 | fun_name: bz2_encode
----
path: tests/snuba/api/endpoints/test_organization_events_v2.py | id: 90,353 | n_words: 58 | n_ast_errors: 0 | complexity: 1 | nloc: 22
commit_message:
fix(discover): Equation change and meta conflict tests (#34889) - This fixes this test which broke cause the meta changed in one PR, and the equation format in another
repo: sentry | language: Python | ast_levels: 13 | vocab_size: 42 | file_name: test_organization_events_v2.py
code:
def test_equation_simple(self): event_data = load_data("transaction", timestamp=before_now(minutes=1)) event_data["breakdowns"]["span_ops"]["ops.http"]["value"] = 1500 self.store_event(data=event_data, project_id=self.project.id) query = { "field": ["spans.http", "equation|spans.http / 3"], "project": [self.project.id], "query": "event.type:transaction", } response = self.do_request( query, { "organizations:discover-basic": True, }, ) assert response.status_code == 200, response.content assert len(response.data["data"]) == 1 assert ( response.data["data"][0]["equation|spans.http / 3"] == event_data["breakdowns"]["span_ops"]["ops.http"]["value"] / 3 ) assert response.data["meta"]["fields"]["equation|spans.http / 3"] == "number"
commit_id: 3a1d4f5105f9b01e70efa92af651107399e76f99 | url: https://github.com/getsentry/sentry.git | token_counts: 161 | n_whitespaces: 244
random_cut:
def test_equation_simple(self): event_data =
n_identifiers: 18 | n_ast_nodes: 278 | fun_name: test_equation_simple
----
path: sklearn/linear_model/tests/test_sgd.py | id: 259,568 | n_words: 153 | n_ast_errors: 0 | complexity: 2 | nloc: 19
commit_message:
MNT ensure creation of dataset is deterministic in SGD (#19716) Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com> Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org> Co-authored-by: Jérémie du Boisberranger <34657725+jeremiedbb@users.noreply.github.com>
repo: scikit-learn | language: Python | ast_levels: 12 | vocab_size: 75 | file_name: test_sgd.py
code:
def test_sgd_random_state(Estimator, global_random_seed): # Train the same model on the same data without converging and check that we # get reproducible results by fixing the random seed. if Estimator == linear_model.SGDRegressor: X, y = datasets.make_regression(random_state=global_random_seed) else: X, y = datasets.make_classification(random_state=global_random_seed) # Fitting twice a model with the same hyper-parameters on the same training # set with the same seed leads to the same results deterministically. est = Estimator(random_state=global_random_seed, max_iter=1) with pytest.warns(ConvergenceWarning): coef_same_seed_a = est.fit(X, y).coef_ assert est.n_iter_ == 1 est = Estimator(random_state=global_random_seed, max_iter=1) with pytest.warns(ConvergenceWarning): coef_same_seed_b = est.fit(X, y).coef_ assert est.n_iter_ == 1 assert_allclose(coef_same_seed_a, coef_same_seed_b) # Fitting twice a model with the same hyper-parameters on the same training # set but with different random seed leads to different results after one # epoch because of the random shuffling of the dataset. est = Estimator(random_state=global_random_seed + 1, max_iter=1) with pytest.warns(ConvergenceWarning): coef_other_seed = est.fit(X, y).coef_ assert est.n_iter_ == 1 assert np.abs(coef_same_seed_a - coef_other_seed).max() > 1.0
commit_id: b4da3b406379b241bf5e81d0f60bbcddd424625b | url: https://github.com/scikit-learn/scikit-learn.git | token_counts: 179 | n_whitespaces: 259
random_cut:
def test_sgd_random_state(Estimator, global_random_seed): # Train the same model on the same data without converging and check that we # get reproducible results by fixing the random seed. if Estimator == linear_model.SGDR
n_identifiers: 26 | n_ast_nodes: 287 | fun_name: test_sgd_random_state
----
path: src/prefect/logging/loggers.py | id: 53,055 | n_words: 14 | n_ast_errors: 1 | complexity: 2 | nloc: 3
commit_message:
Move logging into separate modules at 'prefect.logging'
repo: prefect | language: Python | ast_levels: 13 | vocab_size: 14 | file_name: loggers.py
code:
def process(self, msg, kwargs): kwargs["extra"] = {**self.extra, **(kwargs.get("extra") or {})} return (msg, kwargs) @lru_cache()
commit_id: 08e580acf95963a2579971eb0ff4514233b5e7ea | url: https://github.com/PrefectHQ/prefect.git | token_counts: 39 | n_whitespaces: 26
ast_errors: @lru_cache()
random_cut:
def process(self, msg, kwargs): kwargs["extra"] = {**self.extra, **(kwargs.get("extra") or {})} return (msg, kwargs) @lru_cache()
n_identifiers: 7 | n_ast_nodes: 70 | fun_name: process
----
path: tests/unit/test_yamlparser.py | id: 13,199 | n_words: 30 | n_ast_errors: 0 | complexity: 1 | nloc: 9
commit_message:
feat: allow passing custom gateway in Flow (#5189)
repo: jina | language: Python | ast_levels: 14 | vocab_size: 24 | file_name: test_yamlparser.py
code:
def test_load_gateway_override_with(): with Gateway.load_config( 'yaml/test-custom-gateway.yml', uses_with={'arg1': 'arg1', 'arg2': 'arg2', 'arg3': 'arg3'}, ) as gateway: assert gateway.__class__.__name__ == 'DummyGateway' assert gateway.arg1 == 'arg1' assert gateway.arg2 == 'arg2' assert gateway.arg3 == 'arg3'
commit_id: cdaf7f87ececf9e13b517379ca183b17f0d7b007 | url: https://github.com/jina-ai/jina.git | token_counts: 57 | n_whitespaces: 77
random_cut:
def test_load_gateway_override_with(): with Gateway.load_config( 'yaml/test-custom-gateway.yml',
n_identifiers: 10 | n_ast_nodes: 110 | fun_name: test_load_gateway_override_with
----
path: pandas/core/base.py | id: 169,003 | n_words: 30 | n_ast_errors: 0 | complexity: 2 | nloc: 16
commit_message:
TYP: Autotyping (#48191) * annotate-magics * annotate-imprecise-magics * none-return * scalar-return * pyi files * ignore vendored file * manual changes * ignore pyright in pickle_compat (these errors would be legit if the current __new__ methods were called but I think these pickle tests call older __new__ methods which allowed providing multiple positional arguments) * run autotyping in pre-commit * remove final and expand safe (and add annotate-imprecise-magics)
repo: pandas | language: Python | ast_levels: 14 | vocab_size: 28 | file_name: base.py
code:
def __iter__(self) -> Iterator: # We are explicitly making element iterators. if not isinstance(self._values, np.ndarray): # Check type instead of dtype to catch DTA/TDA return iter(self._values) else: return map(self._values.item, range(self._values.size))
commit_id: 54347fe684e0f7844bf407b1fb958a5269646825 | url: https://github.com/pandas-dev/pandas.git | token_counts: 48 | n_whitespaces: 91
random_cut:
def __iter__(self) -> Iterator: # We are explicitly making element iterators. if not isinstance(self._values, np.ndarray): # Check type instead of dtype to catch DTA/TDA return
n_identifiers: 12 | n_ast_nodes: 80 | fun_name: __iter__
----
path: pandas/tests/window/test_base_indexer.py | id: 165,307 | n_words: 166 | n_ast_errors: 1 | complexity: 2 | nloc: 23
commit_message:
ENH: Rolling window with step size (GH-15354) (#45765)
repo: pandas | language: Python | ast_levels: 14 | vocab_size: 97 | file_name: test_base_indexer.py
code:
def test_rolling_forward_window(constructor, func, np_func, expected, np_kwargs, step): # GH 32865 values = np.arange(10.0) values[5] = 100.0 indexer = FixedForwardWindowIndexer(window_size=3) match = "Forward-looking windows can't have center=True" with pytest.raises(ValueError, match=match): rolling = constructor(values).rolling(window=indexer, center=True) getattr(rolling, func)() match = "Forward-looking windows don't support setting the closed argument" with pytest.raises(ValueError, match=match): rolling = constructor(values).rolling(window=indexer, closed="right") getattr(rolling, func)() rolling = constructor(values).rolling(window=indexer, min_periods=2, step=step) result = getattr(rolling, func)() # Check that the function output matches the explicitly provided array expected = constructor(expected)[::step] tm.assert_equal(result, expected) # Check that the rolling function output matches applying an alternative # function to the rolling window object expected2 = constructor(rolling.apply(lambda x: np_func(x, **np_kwargs))) tm.assert_equal(result, expected2) # Check that the function output matches applying an alternative function # if min_periods isn't specified # GH 39604: After count-min_periods deprecation, apply(lambda x: len(x)) # is equivalent to count after setting min_periods=0 min_periods = 0 if func == "count" else None rolling3 = constructor(values).rolling(window=indexer, min_periods=min_periods) result3 = getattr(rolling3, func)() expected3 = constructor(rolling3.apply(lambda x: np_func(x, **np_kwargs))) tm.assert_equal(result3, expected3) @pytest.mark.parametrize("constructor", [Series, DataFrame])
commit_id: 6caefb19f4d7c05451fafca182c6eb39fe9901ed | url: https://github.com/pandas-dev/pandas.git | token_counts: 262 | n_whitespaces: 270
ast_errors: @pytest.mark.parametrize("constructor", [Series, DataFrame])
random_cut:
def test_rolling_forward_window(constructor, func, np_func, expected, np_kwargs, step): # GH 32865 values = np.arange(10.0) values[5] = 100.0 indexer = FixedForwardWindowIndexer(window_size=3) match = "Forward-looking windows can't have center=True" with pytest.raises(ValueError, match=match): rolling = constructor(values).rolling(window=indexer, center=True) getattr(rolling, func)() match = "Forward-looking windows don't support setting the closed argument" with pytest.raises(ValueError, match=match): rolling = constructor(values).rolling(window=indexer, closed="right") getattr(rolling, func)() rolling = constructor(values).rolling(window=indexer, min_periods=2, step=step) result = getattr(rolling, func)() # Check that the function output matches the explicitly provided array expected = constructor(expected)[::step] tm.assert_equal(result, expected) # Check that the rolling function output matches applying an alternative # function to the rolling window object expected2 = constructor(rolling.apply(lambda x: np_func(x, **np_kwargs))) tm.assert_equal(result, expected2) # Check that the function output matches applying an alternative function # if min_periods isn't specified # GH 39604: After count-min_periods deprecation, apply(lambda x: len(x)) # is
n_identifiers: 36 | n_ast_nodes: 441 | fun_name: test_rolling_forward_window
----
path: python/ccxt/async_support/okx.py | id: 17,706 | n_words: 18 | n_ast_errors: 0 | complexity: 3 | nloc: 6
commit_message:
1.72.35 [ci skip]
repo: ccxt | language: Python | ast_levels: 12 | vocab_size: 17 | file_name: okx.py
code:
def set_sandbox_mode(self, enable): super(okx, self).set_sandbox_mode(enable) if enable: self.headers['x-simulated-trading'] = '1' elif 'x-simulated-trading' in self.headers: self.headers = self.omit(self.headers, 'x-simulated-trading')
commit_id: 50ff6d21431b2f87bc0d7a7c671c34b52d01ef99 | url: https://github.com/ccxt/ccxt.git | token_counts: 50 | n_whitespaces: 60
random_cut:
def set_sandbox_mode(self, enable): super(okx, self).set_sandbox_mode(enable) if enable: self.headers['x-simulated-trading'] = '1' elif 'x-simulated-trading' in self.headers: self.heade
n_identifiers: 7 | n_ast_nodes: 85 | fun_name: set_sandbox_mode
----
path: netbox/dcim/models/cables.py | id: 266,214 | n_words: 496 | n_ast_errors: 0 | complexity: 40 | nloc: 104
commit_message:
Fixes #10579: Mark cable traces terminating to a provider network as complete
repo: netbox | language: Python | ast_levels: 20 | vocab_size: 226 | file_name: cables.py
code:
def from_origin(cls, terminations): from circuits.models import CircuitTermination if not terminations: return None # Ensure all originating terminations are attached to the same link if len(terminations) > 1: assert all(t.link == terminations[0].link for t in terminations[1:]) path = [] position_stack = [] is_complete = False is_active = True is_split = False while terminations: # Terminations must all be of the same type assert all(isinstance(t, type(terminations[0])) for t in terminations[1:]) # Check for a split path (e.g. rear port fanning out to multiple front ports with # different cables attached) if len(set(t.link for t in terminations)) > 1: is_split = True break # Step 1: Record the near-end termination object(s) path.append([ object_to_path_node(t) for t in terminations ]) # Step 2: Determine the attached link (Cable or WirelessLink), if any link = terminations[0].link if link is None and len(path) == 1: # If this is the start of the path and no link exists, return None return None elif link is None: # Otherwise, halt the trace if no link exists break assert type(link) in (Cable, WirelessLink) # Step 3: Record the link and update path status if not "connected" path.append([object_to_path_node(link)]) if hasattr(link, 'status') and link.status != LinkStatusChoices.STATUS_CONNECTED: is_active = False # Step 4: Determine the far-end terminations if isinstance(link, Cable): termination_type = ContentType.objects.get_for_model(terminations[0]) local_cable_terminations = CableTermination.objects.filter( termination_type=termination_type, termination_id__in=[t.pk for t in terminations] ) # Terminations must all belong to same end of Cable local_cable_end = local_cable_terminations[0].cable_end assert all(ct.cable_end == local_cable_end for ct in local_cable_terminations[1:]) remote_cable_terminations = CableTermination.objects.filter( cable=link, cable_end='A' if local_cable_end == 'B' else 'B' ) remote_terminations = [ct.termination for ct in remote_cable_terminations] else: # WirelessLink remote_terminations = [link.interface_b] if link.interface_a is terminations[0] else [link.interface_a] # Step 5: Record the far-end termination object(s) path.append([ object_to_path_node(t) for t in remote_terminations ]) # Step 6: Determine the "next hop" terminations, if applicable if not remote_terminations: break if isinstance(remote_terminations[0], FrontPort): # Follow FrontPorts to their corresponding RearPorts rear_ports = RearPort.objects.filter( pk__in=[t.rear_port_id for t in remote_terminations] ) if len(rear_ports) > 1: assert all(rp.positions == 1 for rp in rear_ports) elif rear_ports[0].positions > 1: position_stack.append([fp.rear_port_position for fp in remote_terminations]) terminations = rear_ports elif isinstance(remote_terminations[0], RearPort): if len(remote_terminations) > 1 or remote_terminations[0].positions == 1: front_ports = FrontPort.objects.filter( rear_port_id__in=[rp.pk for rp in remote_terminations], rear_port_position=1 ) elif position_stack: front_ports = FrontPort.objects.filter( rear_port_id=remote_terminations[0].pk, rear_port_position__in=position_stack.pop() ) else: # No position indicated: path has split, so we stop at the RearPorts is_split = True break terminations = front_ports elif isinstance(remote_terminations[0], CircuitTermination): # Follow a CircuitTermination to its corresponding CircuitTermination (A to Z or vice versa) term_side = remote_terminations[0].term_side assert all(ct.term_side == term_side for ct in remote_terminations[1:]) 
circuit_termination = CircuitTermination.objects.filter( circuit=remote_terminations[0].circuit, term_side='Z' if term_side == 'A' else 'A' ).first() if circuit_termination is None: break elif circuit_termination.provider_network: # Circuit terminates to a ProviderNetwork path.extend([ [object_to_path_node(circuit_termination)], [object_to_path_node(circuit_termination.provider_network)], ]) is_complete = True break elif circuit_termination.site and not circuit_termination.cable: # Circuit terminates to a Site path.extend([ [object_to_path_node(circuit_termination)], [object_to_path_node(circuit_termination.site)], ]) break terminations = [circuit_termination] # Anything else marks the end of the path else: is_complete = True break return cls( path=path, is_complete=is_complete, is_active=is_active, is_split=is_split )
commit_id: bd29d1581461f1b97cf0bcdaa10752d89e3ac0ae | url: https://github.com/netbox-community/netbox.git | token_counts: 683 | n_whitespaces: 2,280
random_cut:
def from_origin(cls, terminations): from circuits.models import CircuitTermination if not terminations: return None # Ensure all originating terminations are attached to the same link if len(terminations) > 1: assert all(t.link == terminations[0].link for t in terminations[1:]) path = [] position_stack = [] is_complete = False is_active = True is_split = False while terminations: # Terminations must all be of the same type assert all(isinstance(t, type(terminations[0])) for t in terminations[1:]) # Check for a split path (e.g. rear port fanning out to multiple front ports with # different cables attached) if len(set(t.link for t in terminations)) > 1: is_split = True break # Step 1: Record the near-end termination object(s) path.append([ object_to_path_node(t) for t in terminations ]) # Step 2: Determine the attached link (Cable or WirelessLink), if any link = terminations[0].link if link is None and len(path) == 1: # If this is the start of the path and no link exists, return None return None elif link is None: # Otherwise, halt the trace if no link exists break assert type(link) in (Cable, WirelessLink) # Step 3: Record the link and update path status if not "connected" path.append([object_to_path_node(link)]) if hasattr(link, 'status') and link.status != LinkStatusChoices.STATUS_CONNECTED: is_active = False # Step 4: Determine the far-end terminations if isinstance(link, Cable): termination_type = ContentType.objects.get_for_model(terminations[0]) local_cable_terminations = CableTermination.objects.filter( termination_type=termination_type, termination_id__in=[t.pk for t in terminations] ) # Terminations must all belong to same end of Cable local_cable_end = local_cable_terminations[0].cable_end assert all(ct.cable_end == local_cable_end for ct in local_cable_terminations[1:]) remote_cable_terminations = CableTermination.objects.filter( cable=link, cable_end='A' if local_cable_end == 'B' else 'B' ) remote_terminations = [ct.termination for ct in remote_cable_terminations] else: # WirelessLink remote_terminations = [link.interface_b] if link.interface_a is terminations[0] else [link.interface_a] # Step 5: Record the far-end termination object(s) path.append([ object_to_path_node(t) for t in remote_terminations ]) # Step 6: Determine the "next hop" terminations, if applicable if not remote_terminations: break if isinstance(remote_terminations[0], FrontPort): # Follow FrontPorts to their corresponding RearPorts rear_ports = RearPort.objects.filter( pk__in=[t.rear_port_id for t in remote_terminations] ) if len(rear_ports) > 1: assert all(rp.positions == 1 for rp in rear_ports) elif rear_ports[0].positions > 1: position_stack.append([fp.rear_port_position for fp in remote_terminations]) terminations = rear_ports elif isinstance(remote_terminations[0], RearPort): if len(remote_terminations) > 1 or remote_terminations[0].positions == 1: front_ports = FrontPort.objects.filter( rear_port_id__in=[rp.pk for rp in remote_terminations], rear_port_position=1 ) elif position_stack: front_ports = FrontPort.objects.filter( rear_port_id=remote_terminations[0].pk, rear_port_position__in=position_stack.pop() ) else: # No position indicated: path has split, so we stop at the RearPor
n_identifiers: 64 | n_ast_nodes: 1,077 | fun_name: from_origin
----
path: erpnext/setup/setup_wizard/operations/install_fixtures.py | id: 67,535 | n_words: 97 | n_ast_errors: 0 | complexity: 6 | nloc: 35
commit_message:
style: format code with black
repo: erpnext | language: Python | ast_levels: 21 | vocab_size: 69 | file_name: install_fixtures.py
code:
def add_uom_data(): # add UOMs uoms = json.loads( open(frappe.get_app_path("erpnext", "setup", "setup_wizard", "data", "uom_data.json")).read() ) for d in uoms: if not frappe.db.exists("UOM", _(d.get("uom_name"))): uom_doc = frappe.get_doc( { "doctype": "UOM", "uom_name": _(d.get("uom_name")), "name": _(d.get("uom_name")), "must_be_whole_number": d.get("must_be_whole_number"), "enabled": 1, } ).db_insert() # bootstrap uom conversion factors uom_conversions = json.loads( open( frappe.get_app_path("erpnext", "setup", "setup_wizard", "data", "uom_conversion_data.json") ).read() ) for d in uom_conversions: if not frappe.db.exists("UOM Category", _(d.get("category"))): frappe.get_doc({"doctype": "UOM Category", "category_name": _(d.get("category"))}).db_insert() if not frappe.db.exists( "UOM Conversion Factor", {"from_uom": _(d.get("from_uom")), "to_uom": _(d.get("to_uom"))} ): uom_conversion = frappe.get_doc( { "doctype": "UOM Conversion Factor", "category": _(d.get("category")), "from_uom": _(d.get("from_uom")), "to_uom": _(d.get("to_uom")), "value": d.get("value"), } ).insert(ignore_permissions=True)
commit_id: 494bd9ef78313436f0424b918f200dab8fc7c20b | url: https://github.com/frappe/erpnext.git | token_counts: 294 | n_whitespaces: 60
random_cut:
def add_uom_data(): # add UOMs uoms = json.loads( open(frappe.get_app_path("erpnext", "setup", "setup_wizard", "data", "uom_data.json")).read() ) for d in uoms: if not frappe.db.exists("UOM", _(d.get("uom_name"))): uom_doc = frappe.get_doc( { "doctype": "UOM", "uom_name": _(d.get("uom_name")), "name": _(d.get("uom_name")), "must_be_whole_number": d.get("must_be_whole_number"), "enabled": 1, } ).db_insert() # bootstrap uom conversion factors uom_conversions = json.loads( open( frappe.get_app_path("erpnext", "setup", "setup_wizard", "data", "uom_conversion_data.json") ).read() ) for d in uom_conversions: if not frappe.db.exists("UOM Category", _(d.get("category"))): frappe.get_doc({"doctype": "UOM Category", "category_name": _(d.get("category"))}).db_insert() if not f
n_identifiers: 20 | n_ast_nodes: 533 | fun_name: add_uom_data
----
path: tests/cache/tests.py | id: 202,016 | n_words: 16 | n_ast_errors: 0 | complexity: 1 | nloc: 4
commit_message:
Refs #33476 -- Reformatted code with Black.
repo: django | language: Python | ast_levels: 12 | vocab_size: 14 | file_name: tests.py
code:
def test_set_many_invalid_key(self): msg = KEY_ERRORS_WITH_MEMCACHED_MSG % ":1:key with spaces" with self.assertWarnsMessage(CacheKeyWarning, msg): cache.set_many({"key with spaces": "foo"})
commit_id: 9c19aff7c7561e3a82978a272ecdaad40dda5c00 | url: https://github.com/django/django.git | token_counts: 30 | n_whitespaces: 40
random_cut:
def test_set_many_invalid_key(self): msg = KEY_ERRORS_WITH_MEMCACHED_MSG % ":1:key with spaces" with self.assertWarnsMessage(CacheKeyWarning, msg): cache.set_many({"key with spaces":
n_identifiers: 8 | n_ast_nodes: 56 | fun_name: test_set_many_invalid_key
----
path: utils/check_repo.py | id: 336,334 | n_words: 123 | n_ast_errors: 0 | complexity: 18 | nloc: 27
commit_message:
Add `is_torch_available`, `is_flax_available` (#204) * Add is_<framework>_available, refactor import utils * deps * quality
repo: diffusers | language: Python | ast_levels: 14 | vocab_size: 61 | file_name: check_repo.py
code:
def ignore_undocumented(name): # NOT DOCUMENTED ON PURPOSE. # Constants uppercase are not documented. if name.isupper(): return True # ModelMixins / Encoders / Decoders / Layers / Embeddings / Attention are not documented. if ( name.endswith("ModelMixin") or name.endswith("Decoder") or name.endswith("Encoder") or name.endswith("Layer") or name.endswith("Embeddings") or name.endswith("Attention") ): return True # Submodules are not documented. if os.path.isdir(os.path.join(PATH_TO_DIFFUSERS, name)) or os.path.isfile( os.path.join(PATH_TO_DIFFUSERS, f"{name}.py") ): return True # All load functions are not documented. if name.startswith("load_tf") or name.startswith("load_pytorch"): return True # is_xxx_available functions are not documented. if name.startswith("is_") and name.endswith("_available"): return True # Deprecated objects are not documented. if name in DEPRECATED_OBJECTS or name in UNDOCUMENTED_OBJECTS: return True # MMBT model does not really work. if name.startswith("MMBT"): return True if name in SHOULD_HAVE_THEIR_OWN_PAGE: return True return False
commit_id: df90f0ce989dcccd7ef2fe9ff085da3197b2f2ad | url: https://github.com/huggingface/diffusers.git | token_counts: 166 | n_whitespaces: 288
random_cut:
def ignore_undocumented(name): # NOT DOCUMENTED ON PURPOSE. # Constants uppercase are not documented. if name.isupper(): return True # ModelMixins / Encoders / Decoders /
n_identifiers: 14 | n_ast_nodes: 298 | fun_name: ignore_undocumented
----
path: homeassistant/components/volumio/media_player.py | id: 306,636 | n_words: 14 | n_ast_errors: 0 | complexity: 2 | nloc: 6
commit_message:
Improve entity type hints [v] (#77885)
repo: core | language: Python | ast_levels: 12 | vocab_size: 13 | file_name: media_player.py
code:
async def async_media_pause(self) -> None: if self._state.get("trackType") == "webradio": await self._volumio.stop() else: await self._volumio.pause()
commit_id: 050cb275ffd51891fa58121643086dad304776a3 | url: https://github.com/home-assistant/core.git | token_counts: 38 | n_whitespaces: 57
random_cut:
async def async_media_pause(self) -> None: if self._state.get("trackType") == "webradio": await self._volumio
n_identifiers: 7 | n_ast_nodes: 72 | fun_name: async_media_pause
----
path: tests/providers/microsoft/azure/hooks/test_azure_cosmos.py | id: 45,157 | n_words: 14 | n_ast_errors: 0 | complexity: 1 | nloc: 6
commit_message:
(AzureCosmosDBHook) Update to latest Cosmos API (#21514) * Bumping the ms azure cosmos providers to work with the 4.x azure python sdk api Co-authored-by: gatewoodb <ben@everythingisbroken.net>
repo: airflow | language: Python | ast_levels: 11 | vocab_size: 13 | file_name: test_azure_cosmos.py
code:
def test_delete_database(self, mock_cosmos): hook = AzureCosmosDBHook(azure_cosmos_conn_id='azure_cosmos_test_key_id') hook.delete_database(self.test_database_name) expected_calls = [mock.call().delete_database('test_database_name')] mock_cosmos.assert_any_call(self.test_end_point, {'masterKey': self.test_master_key}) mock_cosmos.assert_has_calls(expected_calls)
commit_id: 3c4524b4ec2b42a8af0a8c7b9d8f1d065b2bfc83 | url: https://github.com/apache/airflow.git | token_counts: 59 | n_whitespaces: 48
random_cut:
def test_delete_database(self, mock_cosmos): hook = AzureCosmosDBHook(azure_cosmos_conn_id='azure_cosmos_test_key_id') hook.delete_database(self.test_database_name) expected_calls = [mock.call().delete_database('test_database_name')] mock_cosmos.assert_any_call(self.test_end_point, {'masterKey': self.test_master_key}) mock_cosmos.assert_has_calls(expected_calls)
n_identifiers: 15 | n_ast_nodes: 100 | fun_name: test_delete_database
----
path: pandas/tests/arrays/sparse/test_arithmetics.py | id: 163,950 | n_words: 83 | n_ast_errors: 0 | complexity: 1 | nloc: 11
commit_message:
TST/CLN: organize SparseArray tests (#45693)
repo: pandas | language: Python | ast_levels: 10 | vocab_size: 39 | file_name: test_arithmetics.py
code:
def test_float_same_index_comparison(self, kind): # when sp_index are the same values = np.array([np.nan, 1, 2, 0, np.nan, 0, 1, 2, 1, np.nan]) rvalues = np.array([np.nan, 2, 3, 4, np.nan, 0, 1, 3, 2, np.nan]) a = SparseArray(values, kind=kind) b = SparseArray(rvalues, kind=kind) self._check_comparison_ops(a, b, values, rvalues) values = np.array([0.0, 1.0, 2.0, 6.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]) rvalues = np.array([0.0, 2.0, 3.0, 4.0, 0.0, 0.0, 1.0, 3.0, 2.0, 0.0]) a = SparseArray(values, kind=kind, fill_value=0) b = SparseArray(rvalues, kind=kind, fill_value=0) self._check_comparison_ops(a, b, values, rvalues)
commit_id: 5e40ff55ae2a4e2a1eaab0c924e5c369c591523d | url: https://github.com/pandas-dev/pandas.git | token_counts: 243 | n_whitespaces: 159
random_cut:
def test_float_same_index_comparison(self, kind): # when sp_index are the same values = np.array([np.nan, 1, 2, 0, np.nan, 0, 1, 2, 1, np.nan]) rvalues = np.array([np.nan, 2, 3, 4, np.nan, 0, 1, 3, 2, np.nan]) a = SparseArray(values, kind=kind) b = SparseArray(rvalues, kind=kind) self._check_comparison_ops(a, b, values, rvalues) values = np.array([0.0, 1.0, 2.0, 6.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]) rvalues = np.array([0.0, 2.0, 3.0, 4.0, 0.0, 0.0, 1.0, 3.0, 2.0, 0.0]) a = SparseArray(values, kind=kind, fill_value=0) b = SparseArray(rvalues, kind=kind, fill_value=0) self._check_comparison_ops(a, b, values, rvalues)
n_identifiers: 13 | n_ast_nodes: 268 | fun_name: test_float_same_index_comparison
----
path: mindsdb/api/mongo/responders/coll_stats.py | id: 115,913 | n_words: 122 | n_ast_errors: 0 | complexity: 7 | nloc: 39
commit_message:
del model interface
repo: mindsdb | language: Python | ast_levels: 16 | vocab_size: 79 | file_name: coll_stats.py
code:
def result(self, query, request_env, mindsdb_env, session): db = query['$db'] collection = query['collStats'] scale = query.get('scale') if db != 'mindsdb' or collection == 'predictors' or scale is None: # old behavior # NOTE real answer is huge, i removed most data from it. res = { 'ns': "db.collection", 'size': 1, 'count': 0, 'avgObjSize': 1, 'storageSize': 16384, 'capped': False, 'wiredTiger': { }, 'nindexes': 1, 'indexDetails': { }, 'totalIndexSize': 16384, 'indexSizes': { '_id_': 16384 }, 'ok': 1 } res['ns'] = f"{db}.{collection}" if db == 'mindsdb' and collection == 'predictors': res['count'] = len(mindsdb_env['model_controller'].get_models()) else: ident_parts = [collection] if scale is not None: ident_parts.append(scale) ast_query = Describe(Identifier( parts=ident_parts )) data = run_sql_command(mindsdb_env, ast_query) res = { 'data': data } res['ns'] = f"{db}.{collection}" return res responder = Responce()
commit_id: 6eb408a9973fbc24c973d6524dc34cb9b1e0ee05 | url: https://github.com/mindsdb/mindsdb.git | token_counts: 189 | n_whitespaces: 616
random_cut:
def result(self, query, request_env, mindsdb_env, session): db = query['$db'] collection = query['collStats'] scale = query.get('scale') if db != 'mindsdb' or collection == 'predictors' or scale is None: # old behavior # NOTE real answer is huge, i removed most da
n_identifiers: 23 | n_ast_nodes: 359 | fun_name: result
----
path: modules/image/Image_editing/colorization/user_guided_colorization/test.py | id: 51,078 | n_words: 29 | n_ast_errors: 0 | complexity: 2 | nloc: 9
commit_message:
update user_guided_colorization (#1994) * update user_guided_colorization * add clean func
repo: PaddleHub | language: Python | ast_levels: 11 | vocab_size: 27 | file_name: test.py
code:
def setUpClass(cls) -> None: img_url = 'https://unsplash.com/photos/1sLIu1XKQrY/download?ixid=MnwxMjA3fDB8MXxhbGx8MTJ8fHx8fHwyfHwxNjYyMzQxNDUx&force=true&w=640' if not os.path.exists('tests'): os.makedirs('tests') response = requests.get(img_url) assert response.status_code == 200, 'Network Error.' with open('tests/test.jpg', 'wb') as f: f.write(response.content) cls.module = hub.Module(name="user_guided_colorization")
commit_id: 0ea0f8e8757c3844a98d74013ae3708836bd6355 | url: https://github.com/PaddlePaddle/PaddleHub.git | token_counts: 73 | n_whitespaces: 92
random_cut:
def setUpClass(cls) -> None: img_url = 'https://unsplash.com/photos/1sLIu1XKQrY/download?ixid=MnwxMjA3fDB8MXxhbGx8MTJ8fHx8fHwyfHwxNjYyMzQxNDUx&force=true&w=640' if not os.path.exists('tests'):
n_identifiers: 19 | n_ast_nodes: 133 | fun_name: setUpClass
----
path: numpy/core/setup_common.py | id: 160,402 | n_words: 121 | n_ast_errors: 0 | complexity: 2 | nloc: 11
commit_message:
make MismatchCAPIWarnining into MismatchCAPIError
repo: numpy | language: Python | ast_levels: 12 | vocab_size: 82 | file_name: setup_common.py
code:
def check_api_version(apiversion, codegen_dir): curapi_hash, api_hash = get_api_versions(apiversion, codegen_dir) # If different hash, it means that the api .txt files in # codegen_dir have been updated without the API version being # updated. Any modification in those .txt files should be reflected # in the api and eventually abi versions. # To compute the checksum of the current API, use numpy/core/cversions.py if not curapi_hash == api_hash: msg = ("API mismatch detected, the C API version " "numbers have to be updated. Current C api version is " f"{apiversion}, with checksum {curapi_hash}, but recorded " f"checksum in core/codegen_dir/cversions.txt is {api_hash}. If " "functions were added in the C API, you have to update " f"C_API_VERSION in {__file__}." ) raise MismatchCAPIError(msg) FUNC_CALL_ARGS = {}
commit_id: 54a7b0b9843e2e89b217eaa38550752bb4754119 | url: https://github.com/numpy/numpy.git | token_counts: 42 | n_whitespaces: 242
random_cut:
def check_api_version(apiversion, codegen_dir): curapi_hash, api_hash = get_api_versions(apiversion, codegen_dir) # If different hash, it means that the api .txt files in # codegen_dir have been updated without the API version being # updated. Any modification in those .txt files should be reflected # in the api and eventually abi versions. # To compute the checksum of the current API, use numpy/core/cversions.py if not curapi_hash == api_hash: msg = ("API mismatc
n_identifiers: 10 | n_ast_nodes: 102 | fun_name: check_api_version
----
path: netbox/dcim/tests/test_cablepaths.py | id: 265,038 | n_words: 53 | n_ast_errors: 0 | complexity: 1 | nloc: 19
commit_message:
Add cable topology tests
repo: netbox | language: Python | ast_levels: 11 | vocab_size: 43 | file_name: test_cablepaths.py
code:
def test_214_interface_to_providernetwork_via_circuit(self): interface1 = Interface.objects.create(device=self.device, name='Interface 1') providernetwork = ProviderNetwork.objects.create(name='Provider Network 1', provider=self.circuit.provider) circuittermination1 = CircuitTermination.objects.create(circuit=self.circuit, site=self.site, term_side='A') circuittermination2 = CircuitTermination.objects.create(circuit=self.circuit, provider_network=providernetwork, term_side='Z') # Create cable 1 cable1 = Cable( a_terminations=[interface1], b_terminations=[circuittermination1] ) cable1.save() self.assertPathExists( (interface1, cable1, circuittermination1, circuittermination2, providernetwork), is_active=True ) self.assertEqual(CablePath.objects.count(), 1) # Delete cable 1 cable1.delete() self.assertEqual(CablePath.objects.count(), 0) interface1.refresh_from_db() self.assertPathIsNotSet(interface1)
commit_id: 537383e0713645564ba2949e37dc2cbf41eb3317 | url: https://github.com/netbox-community/netbox.git | token_counts: 175 | n_whitespaces: 216
random_cut:
def test_214_interface_to_providernetwork_via_circuit(self): interface1 = Interface.objects.create(device=self.device, name='Interface 1') providernetwork = ProviderNetwork.objects.create(name='Provider Network 1', provider=self.circuit.provider) circuittermination1 = CircuitTermination.objects.create(circuit=self.circuit, site=self.site, term_side='A') circuittermination2 = CircuitTerminati
n_identifiers: 31 | n_ast_nodes: 278 | fun_name: test_214_interface_to_providernetwork_via_circuit
----
path: tests/handlers/test_receipts.py | id: 248,152 | n_words: 59 | n_ast_errors: 0 | complexity: 1 | nloc: 40
commit_message:
Implement changes to MSC2285 (hidden read receipts) (#12168) * Changes hidden read receipts to be a separate receipt type (instead of a field on `m.read`). * Updates the `/receipts` endpoint to accept `m.fully_read`.
repo: synapse | language: Python | ast_levels: 19 | vocab_size: 22 | file_name: test_receipts.py
code:
def test_filters_out_event_with_only_hidden_receipts_and_ignores_the_rest(self): self._test_filters_hidden( [ { "content": { "$14356419edgd14394fHBLK:matrix.org": { ReceiptTypes.READ_PRIVATE: { "@rikj:jki.re": { "ts": 1436451550453, }, } }, "$1435641916114394fHBLK:matrix.org": { ReceiptTypes.READ: { "@user:jki.re": { "ts": 1436451550453, } } }, }, "room_id": "!jEsUZKDJdhlrceRyVU:example.org", "type": "m.receipt", } ], [ { "content": { "$1435641916114394fHBLK:matrix.org": { ReceiptTypes.READ: { "@user:jki.re": { "ts": 1436451550453, } } } }, "room_id": "!jEsUZKDJdhlrceRyVU:example.org", "type": "m.receipt", } ], )
commit_id: 116a4c8340b729ffde43be33df24d417384cb28b | url: https://github.com/matrix-org/synapse.git | token_counts: 103 | n_whitespaces: 919
random_cut:
def test_filters_out_event_with_only_hidden_receipts_and_ignores_the_rest(self): self._test_filters_hidden( [ { "content": { "$14356419edgd14394fHBLK:matrix.org": { ReceiptTypes.READ_PRIVATE: { "@rikj:jki.re": { "ts": 1436451550453, }, } }, "$1435641916114394fHBLK:matrix.org": { ReceiptTypes.READ: { "@user:jki.re": { "ts": 1436451550453, } } }, }, "room_id": "!jEsUZKDJdhlrceRyVU:example.org",
n_identifiers: 6 | n_ast_nodes: 185 | fun_name: test_filters_out_event_with_only_hidden_receipts_and_ignores_the_rest
----
path: tests/api_connexion/endpoints/test_dag_endpoint.py | id: 46,907 | n_words: 85 | n_ast_errors: 0 | complexity: 1 | nloc: 2
commit_message:
Add more fields to REST API dags/dag_id/details endpoint (#22756) Added more fields to the DAG details endpoint, which is the endpoint for getting DAG `object` details
repo: airflow | language: Python | ast_levels: 12 | vocab_size: 67 | file_name: test_dag_endpoint.py
code:
def test_should_response_200_for_null_start_date(self): response = self.client.get( f"/api/v1/dags/{self.dag3_id}/details", environ_overrides={'REMOTE_USER': "test"} ) assert response.status_code == 200 last_parsed = response.json["last_parsed"] expected = { "catchup": True, "concurrency": 16, "max_active_tasks": 16, "dag_id": "test_dag3", "dag_run_timeout": None, "default_view": "grid", "description": None, "doc_md": None, "fileloc": __file__, "file_token": FILE_TOKEN, "is_paused": None, "is_active": None, "is_subdag": False, "orientation": "LR", "owners": ['airflow'], "params": {}, "schedule_interval": { "__type": "TimeDelta", "days": 1, "microseconds": 0, "seconds": 0, }, "start_date": None, "tags": [], "timezone": "Timezone('UTC')", "max_active_runs": 16, "pickle_id": None, "end_date": None, 'is_paused_upon_creation': None, 'last_parsed': last_parsed, 'render_template_as_native_obj': False, } assert response.json == expected
commit_id: 34d2dd8853849d00de2e856b1f79cffe4da6d990 | url: https://github.com/apache/airflow.git | token_counts: 173 | n_whitespaces: 501
random_cut:
def test_should_response_200_for_null_start_date(self): response = self.client.get( f"/api/v1/dags/{self.dag3_id}/details", environ_overrides={'REMOTE_USER': "test"} ) assert response.status_code == 200 last_parsed = response.json["last_parsed"] expected = { "catchup": True, "concurrency": 16, "max_active_tasks": 16, "dag_id": "test_dag3", "dag_run_timeout": None, "default_view": "grid", "description": None, "doc_md": None, "fileloc": __file__, "file_token": FILE_TOKEN, "is_paused": None, "is_active": None, "is_subdag": False, "orientation": "LR", "owners": ['airflow'], "params": {}, "schedule_interval": { "__type": "TimeDelta", "days": 1, "microseconds": 0, "seconds": 0, }, "start_date": None, "tags": [], "timez
n_identifiers: 13 | n_ast_nodes: 319 | fun_name: test_should_response_200_for_null_start_date
----
path: keras/applications/resnet_rs.py | id: 268,893 | n_words: 47 | n_ast_errors: 0 | complexity: 1 | nloc: 2
commit_message:
KERAS application addition of Resnet-RS model
repo: keras | language: Python | ast_levels: 8 | vocab_size: 30 | file_name: resnet_rs.py
code:
def decode_predictions(preds, top=5): return imagenet_utils.decode_predictions(preds, top=top) preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format( mode='', ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF, error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC) decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__ DOC = setattr(ResNetRS50, '__doc__', ResNetRS50.__doc__ + DOC) setattr(ResNetRS152, '__doc__', ResNetRS152.__doc__ + DOC) setattr(ResNetRS200, '__doc__', ResNetRS200.__doc__ + DOC) setattr(ResNetRS270, '__doc__', ResNetRS270.__doc__ + DOC) setattr(ResNetRS350, '__doc__', ResNetRS350.__doc__ + DOC) setattr(ResNetRS420, '__doc__', ResNetRS420.__doc__ + DOC)
commit_id: c223693db91473c9a71c330d4e38a751d149f93c | url: https://github.com/keras-team/keras.git | token_counts: 20 | n_whitespaces: 48
random_cut:
def decode_predictions(preds, top=5): return imagenet_utils.decode_predictions(preds, top=top) preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format( mode='', ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF, error=imagenet_uti
n_identifiers: 21 | n_ast_nodes: 205 | fun_name: decode_predictions
----
path: pandas/tests/generic/test_frame.py | id: 169,461 | n_words: 67 | n_ast_errors: 0 | complexity: 1 | nloc: 17
commit_message:
fix pylint bad-super-call (#48896) * fix pylint bad-super-call * fix black pre commit * Update pyproject.toml Co-authored-by: Marco Edward Gorelli <33491632+MarcoGorelli@users.noreply.github.com> * change super() to df.copy() Co-authored-by: Marco Edward Gorelli <33491632+MarcoGorelli@users.noreply.github.com>
repo: pandas | language: Python | ast_levels: 13 | vocab_size: 38 | file_name: test_frame.py
code:
def test_validate_bool_args(self, value): df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}) msg = 'For argument "inplace" expected type bool, received type' with pytest.raises(ValueError, match=msg): df.copy().rename_axis(mapper={"a": "x", "b": "y"}, axis=1, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().drop("a", axis=1, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().fillna(value=0, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().replace(to_replace=1, value=7, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().interpolate(inplace=value) with pytest.raises(ValueError, match=msg): df.copy()._where(cond=df.a > 2, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().mask(cond=df.a > 2, inplace=value)
commit_id: 159a91754159545df743ff89fc51e83d5421993b | url: https://github.com/pandas-dev/pandas.git | token_counts: 254 | n_whitespaces: 206
random_cut:
def test_validate_bool_args(self, value): df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}) msg = 'For argument "inplace" expected type bool, received type' with pytest.raises(ValueError, match=msg): df.copy().rename_axis(mapper={"a": "x", "b": "y"}, axis=1, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().drop("a", axis=1, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().fillna(value=0, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().replace(to_replace=1, value=7, inplace=value) with pytest.raises(ValueError, match=msg): df.copy().interpolate(inplace=value)
n_identifiers: 24 | n_ast_nodes: 412 | fun_name: test_validate_bool_args
----
path: pandas/tests/arrays/datetimes/test_constructors.py | id: 164,333 | n_words: 20 | n_ast_errors: 0 | complexity: 1 | nloc: 5
commit_message:
⬆️ UPGRADE: Autoupdate pre-commit config (#45752) Co-authored-by: MarcoGorelli <MarcoGorelli@users.noreply.github.com>
repo: pandas | language: Python | ast_levels: 14 | vocab_size: 17 | file_name: test_constructors.py
code:
def test_from_pandas_array(self): arr = pd.array(np.arange(5, dtype=np.int64)) * 3600 * 10**9 result = DatetimeArray._from_sequence(arr)._with_freq("infer") expected = pd.date_range("1970-01-01", periods=5, freq="H")._data tm.assert_datetime_array_equal(result, expected)
commit_id: 419331c598a097896edae40bc0687e4127f97b6b | url: https://github.com/pandas-dev/pandas.git | token_counts: 69 | n_whitespaces: 47
random_cut:
def test_from_pandas_array(self): arr = pd.array(np.arange(5, dtype=np.int64)) * 3600 * 10**9 result = Da
n_identifiers: 20 | n_ast_nodes: 112 | fun_name: test_from_pandas_array
----
path: pandas/tests/frame/indexing/test_setitem.py | id: 167,058 | n_words: 101 | n_ast_errors: 0 | complexity: 4 | nloc: 19
commit_message:
REF: Add Manager.column_setitem to set values into a single column (without intermediate series) (#47074)
repo: pandas | language: Python | ast_levels: 15 | vocab_size: 86 | file_name: test_setitem.py
code:
def test_setitem_partial_column_inplace(self, consolidate, using_array_manager): # This setting should be in-place, regardless of whether frame is # single-block or multi-block # GH#304 this used to be incorrectly not-inplace, in which case # we needed to ensure _item_cache was cleared. df = DataFrame( {"x": [1.1, 2.1, 3.1, 4.1], "y": [5.1, 6.1, 7.1, 8.1]}, index=[0, 1, 2, 3] ) df.insert(2, "z", np.nan) if not using_array_manager: if consolidate: df._consolidate_inplace() assert len(df._mgr.blocks) == 1 else: assert len(df._mgr.blocks) == 2 zvals = df["z"]._values df.loc[2:, "z"] = 42 expected = Series([np.nan, np.nan, 42, 42], index=df.index, name="z") tm.assert_series_equal(df["z"], expected) # check setting occurred in-place tm.assert_numpy_array_equal(zvals, expected.values) assert np.shares_memory(zvals, df["z"]._values)
b99ec4a9c92e288ace6b63072ffc4c296f8e5dc9
210
https://github.com/pandas-dev/pandas.git
285
def test_setitem_partial_column_inplace(self, consolidate, using_array_manager): # This setting should be in-place, regardless of whether frame is # single-block or multi-block # GH#304 this used to be i
25
280
test_setitem_partial_column_inplace
67
0
2
40
tests/rest/client/test_rooms.py
248,902
Remove unnecessary `json.dumps` from tests (#13303)
synapse
13
Python
42
test_rooms.py
def test_search_filter_not_labels(self) -> None: request_data = { "search_categories": { "room_events": { "search_term": "label", "filter": self.FILTER_NOT_LABELS, } } } self._send_labelled_messages_in_room() channel = self.make_request( "POST", "/search?access_token=%s" % self.tok, request_data ) results = channel.json_body["search_categories"]["room_events"]["results"] self.assertEqual( len(results), 4, [result["result"]["content"] for result in results], ) self.assertEqual( results[0]["result"]["content"]["body"], "without label", results[0]["result"]["content"]["body"], ) self.assertEqual( results[1]["result"]["content"]["body"], "without label", results[1]["result"]["content"]["body"], ) self.assertEqual( results[2]["result"]["content"]["body"], "with wrong label", results[2]["result"]["content"]["body"], ) self.assertEqual( results[3]["result"]["content"]["body"], "with two wrong labels", results[3]["result"]["content"]["body"], )
efee345b454ac5e6aeb4b4128793be1fbc308b91
231
https://github.com/matrix-org/synapse.git
452
def test_search_filter_not_labels(self) -> None: request_data = { "search_categories": { "room_events": { "search_term": "label", "filter": self.FILTER_NOT_LABELS, } } } self._send_labelled_messages_in_room() channel = self.make_request( "POST", "/search?access_token=%s" % self.tok, request_data ) results = channel.json_body["search_categories"]["room_events"]["results"] self.assertEqual( len(results), 4, [result["result"]["content"] for result in results], ) self.assertEqual( results[0]["result"]["content"]["body"], "without label", results[0]["result"]["content"]["body"], ) self.assertEqual( results[1]["result"]["content"]["body"], "without label", results[1]["result"]["content"]["body"], ) self.assertEqual( results[2]["result"]["content"]["body"], "with wrong label", results[2]["result"]["content"]["body"], ) self.assertEqual( results[3
13
404
test_search_filter_not_labels
117
0
5
24
ldm/modules/image_degradation/utils_image.py
157,549
release more models
stablediffusion
17
Python
77
utils_image.py
def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] n_dim = tensor.dim() if n_dim == 4: n_img = len(tensor) img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR elif n_dim == 3: img_np = tensor.numpy() img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR elif n_dim == 2: img_np = tensor.numpy() else: raise TypeError( 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) if out_type == np.uint8: img_np = (img_np * 255.0).round() # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. return img_np.astype(out_type)
ca86da3a30c4e080d4db8c25fca73de843663cb4
228
https://github.com/Stability-AI/stablediffusion.git
225
def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] n_dim = tensor.dim() if n_dim == 4: n_img = len(tensor) img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR elif n_dim == 3: img_np = tensor.numpy() img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR elif n_dim == 2: img_np = tensor.numpy() else: raise TypeError( 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) if out_type == np.uint8
27
358
tensor2img
11
0
1
4
openbb_terminal/featflags_controller.py
285,650
New path for .env (#2508) * add log path * add test to check if log file is in correct dir * env path * black * mypy fix * linting * add make_paths and change references * terminal change * change constants to paths * change names * black * mypy * mypy * pylint else * add make paths * remove custom user dir name Co-authored-by: Chavithra <chavithra@gmail.com>
OpenBBTerminal
10
Python
10
featflags_controller.py
def call_color(self, _): obbff.USE_COLOR = not obbff.USE_COLOR set_key(obbff.USER_ENV_FILE, "OPENBB_USE_COLOR", str(obbff.USE_COLOR)) console.print("")
3d0190e35bae4092f52025377d8604b3a6a17bfa
37
https://github.com/OpenBB-finance/OpenBBTerminal.git
39
def call_color(self, _): obbff.USE_COLOR = not obbff.USE_COLOR set_key(obbff.USER_ENV_
10
64
call_color
105
0
3
44
tests/generation/test_generation_beam_search.py
33,345
Generate: get the correct beam index on eos token (#18851)
transformers
11
Python
59
test_generation_beam_search.py
def check_beam_scorer_update(self, input_ids, next_tokens, next_indices, next_scores): # check too many eos tokens beam_scorer = self.prepare_beam_scorer() tokens = next_tokens.clone() tokens[0, :] = self.eos_token_id with self.parent.assertRaises(ValueError): beam_scorer.process(input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id) # check all batches are done beam_scorer = self.prepare_beam_scorer() tokens = next_tokens.clone() tokens[:, : self.num_beams] = self.eos_token_id beam_indices = torch.zeros_like(input_ids) + torch.arange(input_ids.shape[-1], device=input_ids.device) beam_indices = tuple(tuple(b) for b in beam_indices) beam_scorer.process( input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id, beam_indices=beam_indices ) # beam scorer should be done self.parent.assertTrue(beam_scorer.is_done) # check beam_scorer = self.prepare_beam_scorer() tokens = next_tokens.clone() tokens[:, 1] = self.eos_token_id beam_outputs = beam_scorer.process( input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id, beam_indices=beam_indices ) output_scores = beam_outputs["next_beam_scores"] output_tokens = beam_outputs["next_beam_tokens"] output_indices = beam_outputs["next_beam_indices"]
d4dbd7ca59bd50dd034e7995cb36e5efed3d9512
432
https://github.com/huggingface/transformers.git
305
def check_beam_scorer_update(self, input_ids, next_tokens, next_indices, next_scores): # check too many eos tokens beam_scorer = self.prepare_beam_scorer() tokens = next_tokens.clone() tokens[0, :] = self.eos_token_id with self.parent.assertRaises(ValueError): beam_scorer.process(input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id) # check all batches are done beam_scorer = self.prepare_beam_scorer() tokens = next_tokens.clone() tokens[:, : self.num_beams] = self.eos_token_id beam_indices = torch.zeros_like(input_ids) + torch.arange(input_ids.shape[-1], device=input_ids.device) beam_indices = tuple(tuple(b) for b in beam_indices) beam_scorer.process( input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id, beam_indices=beam_indices ) # beam scorer should be done self.parent.assertTrue(beam_scorer.is_don
30
356
check_beam_scorer_update
34
0
2
7
packages/syft/tests/syft/core/tensor/adp/entity_list_test.py
706
Added capnp to sy.serialize / sy.deserialize interface - Added np.array utf-8 string serialization
PySyft
12
Python
28
entity_list_test.py
def test_entity_list_serde() -> None: entities = ["🥒pickles", "madhava", "short", "muchlongername", "a", "🌶"] entity_list = EntityList.from_objs([Entity(name=entity) for entity in entities]) ser = sy.serialize(entity_list, to_bytes=True) de = sy.deserialize(ser, from_bytes=True) de.one_hot_lookup == entity_list.one_hot_lookup assert entity_list == de
530a1aa0eb7f10555b7dcf61c27e3230e019e9c6
75
https://github.com/OpenMined/PySyft.git
51
def test_entity_list_serde() -> None: entities = ["🥒pickles", "madhava", "short", "muchlongername", "a", "🌶"] entity_list = EntityList.from_objs([Entity(name=entity) for entity in entities]) ser = sy.serialize(entity_list, to_bytes=True)
16
123
test_entity_list_serde
31
0
3
9
pipenv/patched/notpip/_vendor/platformdirs/windows.py
20,246
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
pipenv
8
Python
28
windows.py
def _pick_get_win_folder() -> Callable[[str], str]: if hasattr(ctypes, "windll"): return get_win_folder_via_ctypes try: import winreg # noqa: F401 except ImportError: return get_win_folder_from_env_vars else: return get_win_folder_from_registry get_win_folder = lru_cache(maxsize=None)(_pick_get_win_folder()) __all__ = [ "Windows", ]
f3166e673fe8d40277b804d35d77dcdb760fc3b3
36
https://github.com/pypa/pipenv.git
71
def _pick_get_win_folder() -> Callable[[str], str]: if hasattr(ctypes, "windll"): return get_win_folder_via_ctypes try: import winreg # noqa: F401 except ImportErr
14
94
_pick_get_win_folder
32
0
2
5
python/ray/autoscaler/_private/fake_multi_node/node_provider.py
129,152
[rfc][ci] create fake docker-compose cluster environment (#20256) Following #18987 this PR adds a docker-compose based local multi node cluster. The fake multinode docker comprises two parts. The docker_monitor.py script is a watch script calling docker compose up whenever the docker-compose.yaml changes. The node provider creates and updates the docker compose according to the autoscaling requirements. This mode fully supports autoscaling and comes with test utilities to start and connect to docker-compose autoscaling environments. There's also a sample test case showing how this can be used.
ray
11
Python
29
node_provider.py
def _save_node_state(self): with open(self._node_state_path, "wt") as f: json.dump(self._nodes, f) # Make sure this is always writeable from inside the containers if not self.in_docker_container: # Only chmod from the outer container os.chmod(self._node_state_path, 0o777)
5a7f6e4fddabd151baf96d64d6c45e5964766653
43
https://github.com/ray-project/ray.git
85
def _save_node_state(self): with open(self._node_state_path, "wt") as f: json.dump(s
11
74
_save_node_state
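The commit message in the record above describes a `docker_monitor.py` watch script that re-runs `docker compose up` whenever the generated `docker-compose.yaml` changes, while the node provider only rewrites that file. A minimal sketch of the watch-loop idea is below; the file path, poll interval, and exact compose command are illustrative assumptions, not Ray's actual implementation.

```python
import subprocess
import time
from pathlib import Path


def watch_compose(compose_file: str = "docker-compose.yaml", poll_seconds: float = 1.0) -> None:
    """Re-run `docker compose up -d` whenever the compose file's mtime changes."""
    path = Path(compose_file)
    last_mtime = None
    while True:  # runs until interrupted; fine for a monitoring sidecar
        if path.exists():
            mtime = path.stat().st_mtime
            if mtime != last_mtime:
                last_mtime = mtime
                # Recreate/update containers to match the new compose spec.
                subprocess.run(
                    ["docker", "compose", "-f", str(path), "up", "-d"],
                    check=False,
                )
        time.sleep(poll_seconds)
```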
139
0
17
41
sympy/physics/continuum_mechanics/truss.py
200,142
rectified non-alignment of nodes
sympy
21
Python
53
truss.py
def _draw_nodes(self, subs_dict): node_markers = [] for node in list(self._node_coordinates): if (type(self._node_coordinates[node][0]) in (Symbol, Quantity)): if self._node_coordinates[node][0] in list(subs_dict): self._node_coordinates[node][0] = subs_dict[self._node_coordinates[node][0]] else: raise ValueError("provided substituted dictionary is not adequate") elif (type(self._node_coordinates[node][0]) == Mul): objects = self._node_coordinates[node][0].as_coeff_Mul() for object in objects: if type(object) in (Symbol, Quantity): if subs_dict==None or object not in list(subs_dict): raise ValueError("provided substituted dictionary is not adequate") else: self._node_coordinates[node][0] /= object self._node_coordinates[node][0] *= subs_dict[object] if (type(self._node_coordinates[node][1]) in (Symbol, Quantity)): if self._node_coordinates[node][1] in list(subs_dict): self._node_coordinates[node][1] = subs_dict[self._node_coordinates[node][1]] else: raise ValueError("provided substituted dictionary is not adequate") elif (type(self._node_coordinates[node][1]) == Mul): objects = self._node_coordinates[node][1].as_coeff_Mul() for object in objects: if type(object) in (Symbol, Quantity): if subs_dict==None or object not in list(subs_dict): raise ValueError("provided substituted dictionary is not adequate") else: self._node_coordinates[node][1] /= object self._node_coordinates[node][1] *= subs_dict[object] for node in list(self._node_coordinates): node_markers.append( { 'args':[[self._node_coordinates[node][0]], [self._node_coordinates[node][1]]], 'marker':'o', 'markersize':5, 'color':'black' } ) return node_markers
86975d1b114689b68dd9f7b953602f318c4497ec
407
https://github.com/sympy/sympy.git
826
def _draw_nodes(self, subs_dict): node_markers = [] for node in list(self._node_coordinates): if (type(self._node_coordinates[node][0]) in (Symbol, Quantity)): if self._node_coordinates[node][0] in list(subs_dict): self._node_coordinates[node][0] = subs_dict[self._node_coordinates[node][0]] else: raise ValueError("provided substituted dictionary is not adequate") elif (type(self._node_coordinates[node][0]) == Mul): objects = self._node_coordinates[node][0].as_coeff_Mul() for object
16
619
_draw_nodes
39
0
3
11
python/ray/tune/syncer.py
132,340
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
ray
12
Python
32
syncer.py
def sync_up_to_new_location(self, worker_ip): if worker_ip != self.worker_ip: logger.debug("Setting new worker IP to %s", worker_ip) self.set_worker_ip(worker_ip) self.reset() if not self.sync_up(): logger.warning( "Sync up to new location skipped. This should not occur." ) else: logger.warning("Sync attempted to same IP %s.", worker_ip)
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
57
https://github.com/ray-project/ray.git
156
def sync_up_to_new_location(self, worker_ip): if worker_ip != self.worker_ip: logger.debug("Setting new worker IP to %s", worker_ip) self.set_worker
9
99
sync_up_to_new_location
87
0
1
18
wagtail/admin/tests/pages/test_delete_page.py
78,476
review fixes
wagtail
14
Python
66
test_delete_page.py
def test_confirm_delete_scenario_1(self): # If the number of pages to be deleted are less than # WAGTAILADMIN_UNSAFE_PAGE_DELETION_LIMIT then don't need # for confirmation child_1 = SimplePage(title="child 1", slug="child-1", content="hello") self.child_page.add_child(instance=child_1) child_2 = SimplePage(title="child 2", slug="child-2", content="hello") self.child_page.add_child(instance=child_2) response = self.client.get( reverse("wagtailadmin_pages:delete", args=(self.child_page.id,)) ) self.assertEqual(response.status_code, 200) self.assertNotContains( response, '<input class="w-mb-4" type="text" name="confirm_site_name" id="id_confirm_site_name" required>', ) # deletion should not actually happen on GET self.assertTrue(SimplePage.objects.filter(id=self.child_page.id).exists()) # And admin should be able to delete page without any confirmation response = self.client.post( reverse("wagtailadmin_pages:delete", args=(self.child_page.id,)) ) # Check that page is deleted self.assertFalse(SimplePage.objects.filter(id=self.child_page.id).exists())
84662031294740d59eee60af37e69c3735de1117
170
https://github.com/wagtail/wagtail.git
263
def test_confirm_delete_scenario_1(self): # If the number of pages to be deleted are less than # WAGTAILADMIN_UNSAFE_PAGE_DELETION_LIMIT then don't need # for confirmation child_1 = SimplePage(title="child 1", slug="child-1", content="hello") self.child_page.add_child(instance=child_1) child_2 = SimplePage(title="child 2", slug="child-2", content="hello") self.child_page.add_child(instance=child_2) response = self.client.get( reverse("wagtailadmin_pages:delete", args=(self.child_page.id,)) ) self.assertEqual(response.status_code, 200) self.assertNotContains( response, '<input class="w-mb-4" type="text" name="confirm_site_name" id="id_confirm_site_name" required>', ) # deletion should not actually happen on GET self.assertTrue(SimplePage.objects.filter(id=self.child_page.id).exists()) # And admin should be able to delete page without any confirmation response = self.client.post( reverse("wagtailadmin_pages:delete", args=(self.child_page.id,))
26
285
test_confirm_delete_scenario_1
38
0
5
8
modules/sd_samplers.py
152,352
prevent replacing torch_randn globally (instead replacing k_diffusion.sampling.torch) and add a setting to disable all of this
stable-diffusion-webui
12
Python
25
sd_samplers.py
def randn_like(self, x): noise = self.sampler_noises[self.sampler_noise_index] if self.sampler_noises is not None and self.sampler_noise_index < len(self.sampler_noises) else None if noise is not None and x.shape == noise.shape: res = noise else: res = torch.randn_like(x) self.sampler_noise_index += 1 return res
87e8b9a2ab3f033e7fdadbb2fe258857915980ac
71
https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
94
def randn_like(self, x): noise = self.sampler_noises[self.sampler_noise_index] if self.sampler_noises is not None and self.sampler_noise_index < len(self.sampler_noises) else None if noise is not None and x.shape == noise.shape: res = noise else: res = torch.randn_like(x) self.sampler_noise_index += 1 return res
10
109
randn_like
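The `randn_like` snippet above consumes a list of pre-generated noise tensors in order and falls back to fresh Gaussian noise when none is available or the shape does not match. A self-contained sketch of that noise-injection pattern, with illustrative class and attribute names rather than the webui's actual sampler class:

```python
import torch


class NoiseInjector:
    """Serve pre-generated noise tensors in order; fall back to fresh Gaussian noise."""

    def __init__(self, sampler_noises=None):
        self.sampler_noises = sampler_noises  # list of tensors, or None
        self.sampler_noise_index = 0

    def randn_like(self, x: torch.Tensor) -> torch.Tensor:
        noise = None
        if self.sampler_noises is not None and self.sampler_noise_index < len(self.sampler_noises):
            noise = self.sampler_noises[self.sampler_noise_index]
        # Use the injected noise only when its shape matches the requested tensor.
        res = noise if (noise is not None and noise.shape == x.shape) else torch.randn_like(x)
        self.sampler_noise_index += 1
        return res


# Usage: the first call returns the injected zeros, later calls fall back to random noise.
injector = NoiseInjector([torch.zeros(4)])
print(injector.randn_like(torch.ones(4)))
print(injector.randn_like(torch.ones(4)))
```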
45
0
1
23
tests/test_evaluation/test_metrics/test_coco_metric.py
245,612
[Refactor] refactor dataflow and sync the latest mmengine (#8620) * refactor dataflow * fix docstr * fix commit * fix commit * fix visualizer hook * fix UT * fix UT * fix UT error * fix bug * update to mmengine main * update typehint * replace data preprocess output type to dict * update * fix typehint
mmdetection
13
Python
37
test_coco_metric.py
def test_format_only(self): # create dummy data fake_json_file = osp.join(self.tmp_dir.name, 'fake_data.json') self._create_dummy_coco_json(fake_json_file) dummy_pred = self._create_dummy_results() with self.assertRaises(AssertionError): CocoMetric( ann_file=fake_json_file, classwise=False, format_only=True, outfile_prefix=None) coco_metric = CocoMetric( ann_file=fake_json_file, metric='bbox', classwise=False, format_only=True, outfile_prefix=f'{self.tmp_dir.name}/test') coco_metric.dataset_meta = dict(CLASSES=['car', 'bicycle']) coco_metric.process( {}, [dict(pred_instances=dummy_pred, img_id=0, ori_shape=(640, 640))]) eval_results = coco_metric.evaluate(size=1) self.assertDictEqual(eval_results, dict()) self.assertTrue(osp.exists(f'{self.tmp_dir.name}/test.bbox.json'))
8405ad9bfce4867f552f2f7a643c9e78a97eb0b6
157
https://github.com/open-mmlab/mmdetection.git
269
def test_format_only(self): # create dummy data fake_json_file = osp.join(self.tmp_dir.name, 'fake_data.json') self._create_dummy_coco_json(fake_json_file) dummy_pred = self._create_dummy_results() with self.assertRaises(AssertionError): CocoMetric( ann_file=fake_json_file, classwise=False, format_only=True, outfile_prefix=None) coco_metric = CocoMetric( ann_file=fake_json_file, metric='bbox', classwise=False, format_only=True, outfile_prefix=f'{self.tmp_dir.name}/test') coco_metric.dataset_meta = dict(CLASSES=['car', 'bicycle']) coco_metric.process( {}, [dict(pred_instances=dummy_pred, img_id=0, ori_shape=(640, 640))]) eval_results = coco_metric.evaluate(size=1) self.assertDictEqual(eval_results, dict()) self.assertTrue(osp.exists(f'{self.tmp_dir.name}/test.b
32
269
test_format_only
84
0
1
40
tests/freqai/conftest.py
150,142
create dedicated minimal freqai test strat
freqtrade
15
Python
75
conftest.py
def freqai_conf(default_conf): freqaiconf = deepcopy(default_conf) freqaiconf.update( { "datadir": Path(default_conf["datadir"]), "strategy": "freqai_test_strat", "strategy-path": "freqtrade/tests/strategy/strats", "freqaimodel": "LightGBMPredictionModel", "freqaimodel_path": "freqai/prediction_models", "timerange": "20180110-20180115", "freqai": { "startup_candles": 10000, "purge_old_models": True, "train_period_days": 5, "backtest_period_days": 2, "live_retrain_hours": 0, "expiration_hours": 1, "identifier": "uniqe-id100", "live_trained_timestamp": 0, "feature_parameters": { "include_timeframes": ["5m"], "include_corr_pairlist": ["ADA/BTC", "DASH/BTC"], "label_period_candles": 20, "include_shifted_candles": 1, "DI_threshold": 0.9, "weight_factor": 0.9, "principal_component_analysis": False, "use_SVM_to_remove_outliers": True, "stratify_training_data": 0, "indicator_max_period_candles": 10, "indicator_periods_candles": [10], }, "data_split_parameters": {"test_size": 0.33, "random_state": 1}, "model_training_parameters": {"n_estimators": 100}, }, "config_files": [Path('config_examples', 'config_freqai_futures.example.json')] } ) freqaiconf['exchange'].update({'pair_whitelist': ['ADA/BTC', 'DASH/BTC', 'ETH/BTC', 'LTC/BTC']}) return freqaiconf
c43935e82ad9b627875a61d02a2923ac101b7374
201
https://github.com/freqtrade/freqtrade.git
600
def freqai_conf(default_conf): freqaiconf = deepcopy(default_conf) freqaiconf.update( { "datadir": Path(default_conf["datadir"]), "strategy": "freqai_test_strat", "strategy-path": "freqtrade/tests/strategy/strats", "freqaimodel": "LightGBMPre
6
365
freqai_conf
95
0
3
37
pandas/tests/io/parser/dtypes/test_dtypes_basic.py
172,169
Add pyarrow support to python engine in read_csv (#50318)
pandas
19
Python
75
test_dtypes_basic.py
def test_use_nullable_dtypes_pyarrow_backend(all_parsers, request): # GH#36712 pa = pytest.importorskip("pyarrow") parser = all_parsers engine = parser.engine data = with pd.option_context("mode.nullable_backend", "pyarrow"): if engine == "c": request.node.add_marker( pytest.mark.xfail( raises=NotImplementedError, reason=f"Not implemented with engine={parser.engine}", ) ) result = parser.read_csv( StringIO(data), use_nullable_dtypes=True, parse_dates=["i"] ) expected = DataFrame( { "a": pd.Series([1, 3], dtype="int64[pyarrow]"), "b": pd.Series([2.5, 4.5], dtype="float64[pyarrow]"), "c": pd.Series([True, False], dtype="bool[pyarrow]"), "d": pd.Series(["a", "b"], dtype=pd.ArrowDtype(pa.string())), "e": pd.Series([pd.NA, 6], dtype="int64[pyarrow]"), "f": pd.Series([pd.NA, 7.5], dtype="float64[pyarrow]"), "g": pd.Series([pd.NA, True], dtype="bool[pyarrow]"), "h": pd.Series( [pd.NA if engine == "python" else "", "a"], dtype=pd.ArrowDtype(pa.string()), ), "i": pd.Series([Timestamp("2019-12-31")] * 2), "j": pd.Series([pd.NA, pd.NA], dtype="null[pyarrow]"), } ) tm.assert_frame_equal(result, expected)
b1c5b5d9517da7269163566892ba230ebf14afea
312
https://github.com/pandas-dev/pandas.git
477
def test_use_nullable_dtypes_pyarrow_backend(all_parsers, request): # GH#36712 pa = pytest.importorskip("pyarrow") parser = all_parsers engine = parser.engine data = with pd.option_context("mode.nullable_backend", "pyarrow"): if engine == "c": request.node.add_marker( pytest.mark.xfail( raises=NotImplementedError, reason=f"Not implemented with engine={parser.engine}", ) ) result = parser.read_csv(
33
511
test_use_nullable_dtypes_pyarrow_backend
55
0
1
12
tests/rest/client/test_rooms.py
247,297
Add type hints to `tests/rest/client` (#12108) * Add type hints to `tests/rest/client` * newsfile * fix imports * add `test_account.py` * Remove one type hint in `test_report_event.py` * change `on_create_room` to `async` * update new functions in `test_third_party_rules.py` * Add `test_filter.py` * add `test_rooms.py` * change to `assertEquals` to `assertEqual` * lint
synapse
11
Python
28
test_rooms.py
def test_rooms_messages_sent(self) -> None: path = "/rooms/%s/send/m.room.message/mid1" % (urlparse.quote(self.room_id)) content = b'{"body":"test","msgtype":{"type":"a"}}' channel = self.make_request("PUT", path, content) self.assertEqual(400, channel.code, msg=channel.result["body"]) # custom message types content = b'{"body":"test","msgtype":"test.custom.text"}' channel = self.make_request("PUT", path, content) self.assertEqual(200, channel.code, msg=channel.result["body"]) # m.text message type path = "/rooms/%s/send/m.room.message/mid2" % (urlparse.quote(self.room_id)) content = b'{"body":"test2","msgtype":"m.text"}' channel = self.make_request("PUT", path, content) self.assertEqual(200, channel.code, msg=channel.result["body"])
2ffaf30803f93273a4d8a65c9e6c3110c8433488
140
https://github.com/matrix-org/synapse.git
145
def test_rooms_messages_sent(self) -> None: path = "/rooms/%s/send/m.room.message/mid1" % (urlparse.quote(self.room_id)) content = b'{"body":"test","msgtype":{"type":"a"}}' channel = self.make_request("PUT", p
13
227
test_rooms_messages_sent
27
0
1
8
saleor/webhook/observability/tests/test_payloads.py
27,586
Observability reporter (#9803) * Initial commit * Add observability celery beat task * Add observability_reporter_task and observability_send_events * Convert payload to camel case * Add fakeredis to dev dependencies * Add redis buffer tests * Refactor buffer * Update * Optimize buffer * Add tests * Add types-redis to dev dependencies * Refactor * Fix after rebase * Refactor opentracing * Add opentracing to observability tasks * Add more tests * Fix buffer fixtures * Report dropped events * Fix buffer tests * Refactor get_buffer * Refactor unit tests * Set Redis connection client_name * Refactor redis tests * Fix test_get_or_create_connection_pool * Fix JsonTruncText comparison * Add more generate_event_delivery_attempt_payload tests
saleor
10
Python
23
test_payloads.py
def test_serialize_gql_operation_result_when_no_operation_data(): bytes_limit = 1024 result = GraphQLOperationResponse() payload, _ = serialize_gql_operation_result(result, bytes_limit) assert payload == GraphQLOperation( name=None, operation_type=None, query=None, result=None, result_invalid=False ) assert len(dump_payload(payload)) <= bytes_limit
7ea7916c65357741c3911e307acb58d547a5e91a
57
https://github.com/saleor/saleor.git
51
def test_serialize_gql_operation_result_when_no_operation_data(): bytes_limit = 1024 result = G
14
87
test_serialize_gql_operation_result_when_no_operation_data
20
0
2
32
tests/test_optimize.py
30,493
Use better img2pdf settings where possible while supporting old versions Fixes #894
OCRmyPDF
13
Python
17
test_optimize.py
def test_multiple_pngs(resources, outdir): with Path.open(outdir / 'in.pdf', 'wb') as inpdf: img2pdf.convert( fspath(resources / 'baiona_colormapped.png'), fspath(resources / 'baiona_gray.png'), outputstream=inpdf, **IMG2PDF_KWARGS, )
2d0ac4707c6b19614bf56bede0892656cd0e1f0c
192
https://github.com/ocrmypdf/OCRmyPDF.git
80
def test_multiple_pngs(resources, outdir): with Path.open(outdir / 'in.pdf', 'wb') as inpdf: img2pdf.co
11
81
test_multiple_pngs
63
0
3
21
wagtail/snippets/tests/test_bulk_actions/test_bulk_delete.py
79,174
Fix plural handling for "no permission to delete these snippets" errors `./manage.py compilemessages` does not allow variables to differ between the singular and plural forms - it fails with a format specification for argument 'snippet_type_name', as in 'msgstr[0]', doesn't exist in 'msgid_plural' It's not possible to use the gettext pluralisation mechanism properly here, because we're using Django's verbose_name and verbose_name_plural properties which don't cover the requirements of languages with complex pluralisation rules. Since we can only hope to support English-style (`if n == 1`) pluralisation, use an n==1 test directly (as we have elsewhere in the template) rather than trying to shoehorn this into gettext pluralisation. While we're at it, remove the capitalisation of the snippet name - it makes no sense here (especially when only done for the plural).
wagtail
15
Python
48
test_bulk_delete.py
def test_delete_with_limited_permissions(self): self.user.is_superuser = False self.user.user_permissions.add( Permission.objects.get( content_type__app_label="wagtailadmin", codename="access_admin" ) ) self.user.save() response = self.client.get(self.url) self.assertEqual(response.status_code, 200) html = response.content.decode() self.assertInHTML( "<p>You don't have permission to delete these standard snippets</p>", html, ) for snippet in self.test_snippets: self.assertInHTML(f"<li>{snippet.text}</li>", html) response = self.client.post(self.url) # User should be redirected back to the index self.assertEqual(response.status_code, 302) # Documents should not be deleted for snippet in self.test_snippets: self.assertTrue(self.snippet_model.objects.filter(pk=snippet.pk).exists())
e0fd8e1a473d154c7ec154958e8c334db5a39a6d
150
https://github.com/wagtail/wagtail.git
248
def test_delete_with_limited_permissions(self): self.user.is_superuser = False self.user.user_permissions.add( Permission.objects.get( content_type__app_label="wagtailadmin", codename="access_admin" ) ) self.user.save() response = self.client.get(self.url) self.assertEqual(response.status_code, 200) html = response.content.decode() self.assertInHTML( "<p>You don't have permission to delete these standard snippets</p>", html, ) for snippet in self.test_snippets: self.assertInHTML(f"<li>{snippet.text}</li>", html) response = self.client.post
30
249
test_delete_with_limited_permissions
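The commit message in the record above explains why gettext pluralisation could not be used for the "no permission to delete these snippets" error: `compilemessages` rejects entries whose singular and plural forms interpolate different variables, so the template falls back to an explicit `n == 1` test. A rough sketch of that workaround in plain Python; the strings and helper are illustrative, not Wagtail's actual template code.

```python
def permission_error(n: int, singular: str, plural: str) -> str:
    # gettext's ngettext() is avoided here because the interpolated variable would
    # differ between the singular and plural msgids, which compilemessages rejects.
    noun = singular if n == 1 else plural
    determiner = "this" if n == 1 else "these"
    return f"You don't have permission to delete {determiner} {noun}"


print(permission_error(1, "standard snippet", "standard snippets"))
print(permission_error(3, "standard snippet", "standard snippets"))
```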
121
0
8
16
jax/interpreters/pxla.py
121,964
Fix Forward. The fix is on the user's end. Original PR: https://github.com/google/jax/pull/12217 Co-authored-by: Matthew Johnson <mattjj@google.com> Co-authored-by: Yash Katariya <yashkatariya@google.com> PiperOrigin-RevId: 472999907
jax
16
Python
84
pxla.py
def _pmap_dce_rule(used_outputs, eqn): # just like pe.dce_jaxpr_call_rule, except handles in_axes / out_axes new_jaxpr, used_inputs = pe.dce_jaxpr(eqn.params['call_jaxpr'], used_outputs) _, donated_invars = partition_list(used_inputs, eqn.params['donated_invars']) # TODO(yashkatariya,mattjj): Handle global_arg_shapes here too. _, in_axes = partition_list(used_inputs, eqn.params['in_axes']) _, out_axes = partition_list(used_outputs, eqn.params['out_axes']) new_params = dict(eqn.params, call_jaxpr=new_jaxpr, donated_invars=tuple(donated_invars), in_axes=tuple(in_axes), out_axes=tuple(out_axes)) if not any(used_inputs) and not any(used_outputs) and not new_jaxpr.effects: return used_inputs, None else: new_eqn = pe.new_jaxpr_eqn( [v for v, used in zip(eqn.invars, used_inputs) if used], [v for v, used in zip(eqn.outvars, used_outputs) if used], eqn.primitive, new_params, new_jaxpr.effects, eqn.source_info) return used_inputs, new_eqn # Set param update handlers to update `donated_invars` just like xla_call_p pe.call_param_updaters[xla_pmap_p] = pe.call_param_updaters[xla.xla_call_p] pe.partial_eval_jaxpr_custom_rules[xla_pmap_p] = \ partial(pe.call_partial_eval_custom_rule, 'call_jaxpr', _pmap_partial_eval_custom_params_updater, res_aval=_pmap_partial_eval_custom_res_maker) pe.dce_rules[xla_pmap_p] = _pmap_dce_rule ad.call_param_updaters[xla_pmap_p] = ad.call_param_updaters[xla.xla_call_p] ad.call_transpose_param_updaters[xla_pmap_p] = \ ad.call_transpose_param_updaters[xla.xla_call_p] ad.primitive_transposes[xla_pmap_p] = partial(ad.map_transpose, xla_pmap_p)
7fbf8ec669c03ce0e1014aaf010dabdf5985509f
188
https://github.com/google/jax.git
218
def _pmap_dce_rule(used_outputs, eqn): # just like pe.dce_jaxpr_call_rule, except handles in_axes / out_axes new_jaxpr, used_inputs = pe.dce_jaxpr(eqn.params['call_jaxpr'], used_outputs) _, donated_invars = partition_list(used_inputs, eqn.params['donated_invars']) # TODO(yashkatariya,mattjj): Handle global_arg_shapes here too. _, in_axes = partition_list(used_inputs, eqn.params['in_axes']) _, out_axes = partition_list(used_outputs, eqn.params['out_axes']) new_params = dict(eqn.params, call_jaxpr=new_jaxpr, donated_invars=tuple(donated_invars), in_axes=tuple(in_axes), out_axes=tuple(out_axes)) if not any(used_inputs) and not any(used_outputs) and not new_jaxpr.effects: return used_inputs, None else: new_eqn = pe.new_jaxpr_eqn( [v for v, used in zip(eqn.invars, used_inputs) if used], [v for v, used in zip(eqn.outvars, used_outputs) if used], eqn.primitive, new_params, new_jaxpr.effects, eqn.source_info) return used_inputs, new_eqn # Set param update handlers to update `donated_invars` just like xla_call_p pe.call_param_updaters[xla_pmap_p] = pe.call_param_updaters[xla.xla_call_p] pe.partial_eval_jaxpr_custom_rules[xla_pmap_p] =
43
418
_pmap_dce_rule
15
0
1
5
python/ray/tests/test_batch_node_provider_unit.py
136,578
[autoscaler][kuberay] Batching node provider (#29933) Implements the abstract subclass of NodeProvider proposed in https://docs.google.com/document/d/1JyQINBFirZw7YenA_14zize0R3hIII1_fnfQytIXTPo/ The goal is to simplify the autoscaler's interactions with external cluster managers like the KubeRay operator. A follow-up PR will implement KuberayNodeProvider as a subclass of the BatchingNodeProvider added here. Signed-off-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>
ray
13
Python
14
test_batch_node_provider_unit.py
def _add_node(self, node_type, node_kind): new_node_id = str(uuid4()) self._node_data_dict[new_node_id] = NodeData( kind=node_kind, ip=str(uuid4()), status=STATUS_UP_TO_DATE, type=node_type )
c51b0c9a5664e5c6df3d92f9093b56e61b48f514
47
https://github.com/ray-project/ray.git
46
def _add_node(self, node_type, node_kind): new_node_id = str(uuid4()) self._n
14
71
_add_node
57
1
4
20
erpnext/startup/leaderboard.py
67,566
style: format code with black
erpnext
12
Python
48
leaderboard.py
def get_all_sales_partner(date_range, company, field, limit=None): if field == "total_sales_amount": select_field = "sum(`base_net_total`)" elif field == "total_commission": select_field = "sum(`total_commission`)" filters = {"sales_partner": ["!=", ""], "docstatus": 1, "company": company} if date_range: date_range = frappe.parse_json(date_range) filters["transaction_date"] = ["between", [date_range[0], date_range[1]]] return frappe.get_list( "Sales Order", fields=[ "`sales_partner` as name", "{} as value".format(select_field), ], filters=filters, group_by="sales_partner", order_by="value DESC", limit=limit, ) @frappe.whitelist()
494bd9ef78313436f0424b918f200dab8fc7c20b
@frappe.whitelist()
117
https://github.com/frappe/erpnext.git
36
def get_all_sales_partner(date_range, company, field, limit=None): if field == "total_sales_amount": select_field = "sum(`base_net_total`)" elif field == "total_commission": select_field = "sum(`total_commission`)" filters = {"sales_partner": ["!=", ""], "d
15
209
get_all_sales_partner
42
1
1
7
dask/tests/test_distributed.py
156,326
Stringify BlockwiseDepDict mapping values when produces_keys=True (#8972)
dask
12
Python
35
test_distributed.py
def test_from_delayed_dataframe(c): # Check that Delayed keys in the form of a tuple # are properly serialized in `from_delayed` pd = pytest.importorskip("pandas") dd = pytest.importorskip("dask.dataframe") df = pd.DataFrame({"x": range(20)}) ddf = dd.from_pandas(df, npartitions=2) ddf = dd.from_delayed(ddf.to_delayed()) dd.utils.assert_eq(ddf, df, scheduler=c) @pytest.mark.parametrize("fuse", [True, False])
bbd1d2f16b5ac4784d758252188047b7816c7fa4
@pytest.mark.parametrize("fuse", [True, False])
74
https://github.com/dask/dask.git
64
def test_from_delayed_dataframe(c): # Check that Delayed keys in the form of a tuple # are properly serialized in `from_delayed` pd = pytest.importorskip("pandas") dd = pytest.importorskip("d
19
149
test_from_delayed_dataframe
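The dask test above round-trips a DataFrame through `to_delayed`/`from_delayed`, which is where the tuple-shaped Delayed keys mentioned in the commit message have to be serialized correctly. A hedged usage sketch of that round trip; the data values are arbitrary.

```python
import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({"x": range(20)})
ddf = dd.from_pandas(df, npartitions=2)

# Each partition becomes a Delayed object; from_delayed rebuilds a dask DataFrame
# from the list of Delayed partitions.
delayed_partitions = ddf.to_delayed()
ddf2 = dd.from_delayed(delayed_partitions)

print(ddf2.compute().equals(df))  # True
```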
63
0
1
27
tests/util/test_async.py
311,925
Don't warn on time.sleep injected by the debugger (#65420)
core
15
Python
55
test_async.py
async def test_check_loop_async_custom(caplog): with pytest.raises(RuntimeError), patch( "homeassistant.util.async_.extract_stack", return_value=[ Mock( filename="/home/paulus/homeassistant/core.py", lineno="23", line="do_something()", ), Mock( filename="/home/paulus/config/custom_components/hue/light.py", lineno="23", line="self.light.is_on", ), Mock( filename="/home/paulus/aiohue/lights.py", lineno="2", line="something()", ), ], ): hasync.check_loop(banned_function) assert ( "Detected blocking call inside the event loop. This is causing stability issues. " "Please report issue to the custom component author for hue doing blocking calls " "at custom_components/hue/light.py, line 23: self.light.is_on" in caplog.text )
5a34feb7de440e0df748c9db500facc72a4c2646
89
https://github.com/home-assistant/core.git
328
async def test_check_loop_async_custom(caplog): with pytest.raises(RuntimeError), patch( "homeassistant.util.async_.extract_stack", return_value=[ Mock( filename="/home/paulus/homeassistant/core.py", lineno="23", line="do_something()", ), Mock( filename="/home/paulus/config/custom_compo
15
159
test_check_loop_async_custom
175
0
10
71
freqtrade/optimize/backtesting.py
149,841
Remove surplus mark columns, and make fillna on funding rate only
freqtrade
23
Python
115
backtesting.py
def load_bt_data_detail(self) -> None: if self.timeframe_detail: self.detail_data = history.load_data( datadir=self.config['datadir'], pairs=self.pairlists.whitelist, timeframe=self.timeframe_detail, timerange=self.timerange, startup_candles=0, fail_without_data=True, data_format=self.config.get('dataformat_ohlcv', 'json'), candle_type=self.config.get('candle_type_def', CandleType.SPOT) ) else: self.detail_data = {} if self.trading_mode == TradingMode.FUTURES: # Load additional futures data. funding_rates_dict = history.load_data( datadir=self.config['datadir'], pairs=self.pairlists.whitelist, timeframe=self.exchange._ft_has['mark_ohlcv_timeframe'], timerange=self.timerange, startup_candles=0, fail_without_data=True, data_format=self.config.get('dataformat_ohlcv', 'json'), candle_type=CandleType.FUNDING_RATE ) # For simplicity, assign to CandleType.Mark (might contian index candles!) mark_rates_dict = history.load_data( datadir=self.config['datadir'], pairs=self.pairlists.whitelist, timeframe=self.exchange._ft_has['mark_ohlcv_timeframe'], timerange=self.timerange, startup_candles=0, fail_without_data=True, data_format=self.config.get('dataformat_ohlcv', 'json'), candle_type=CandleType.from_string(self.exchange._ft_has["mark_ohlcv_price"]) ) # Combine data to avoid combining the data per trade. unavailable_pairs = [] for pair in self.pairlists.whitelist: if pair not in self.exchange._leverage_tiers: unavailable_pairs.append(pair) continue if (pair in mark_rates_dict and len(funding_rates_dict[pair]) == 0 and "futures_funding_rate" in self.config): mark_rates_dict[pair]["open_fund"] = self.config.get('futures_funding_rate') mark_rates_dict[pair].rename( columns={'open': 'open_mark', 'close': 'close_mark', 'high': 'high_mark', 'low': 'low_mark', 'volume': 'volume_mark'}, inplace=True) self.futures_data[pair] = mark_rates_dict[pair] else: if "futures_funding_rate" in self.config: self.futures_data[pair] = mark_rates_dict[pair].merge( funding_rates_dict[pair], on='date', how="outer", suffixes=["_mark", "_fund"])['open_fund'].fillna( self.config.get('futures_funding_rate')) else: self.futures_data[pair] = mark_rates_dict[pair].merge( funding_rates_dict[pair], on='date', how="inner", suffixes=["_mark", "_fund"]) if unavailable_pairs: raise OperationalException( f"Pairs {', '.join(unavailable_pairs)} got no leverage tiers available. " "It is therefore impossible to backtest with this pair at the moment.") else: self.futures_data = {}
c499a92f57cccf520f3d6f19941857af87fac5aa
476
https://github.com/freqtrade/freqtrade.git
1,368
def load_bt_data_detail(self) -> None: if self.timeframe_detail: self.detail_data = history.load_data( datadir=self.config['datadir'], pairs=self.pairlists.whitelist, timeframe=self.timeframe_detail, timerange=self.timerange, startup_candles=0, fail_without_data=True, data_format=self.config.get('dataformat_ohlcv', 'json'), candle_type=self.config.get('candle_type_def', CandleType.SPOT) ) else: self.detail_data = {} if self.trading_mode == TradingMode.FUTURES: # Load additional futures data. funding_rates_dict = history.load_data( datadir=self.config['datadir'], pairs=self.pairlists.whitelist, timeframe=self.exchange._ft_has['mark_ohlcv_timeframe'], timerange=self.timerange, startup_candles=0, fail_without_data=True, data_format=self.config.get('dataformat_ohlcv', 'json'), candle_type=CandleType.FUNDING_RATE ) # For simplicity, assign to CandleType.Mark (might contian index candles!) mark_rates_dict = history.load_data( datadir=self.config['datadir'], pairs=self.pa
45
787
load_bt_data_detail
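The `load_bt_data_detail` snippet above merges mark-price candles with funding-rate candles on `date` and, per the commit message, fills missing values on the funding-rate column only. A small pandas sketch of that idea; the column names follow the snippet, while the values and fallback rate are made up.

```python
import pandas as pd

# Illustrative frames: mark candles exist for every interval, funding rates only for some.
mark = pd.DataFrame({
    "date": pd.date_range("2021-01-01", periods=4, freq="8h"),
    "open_mark": [1.0, 1.1, 1.2, 1.3],
})
funding = pd.DataFrame({
    "date": pd.date_range("2021-01-01", periods=2, freq="8h"),
    "open_fund": [0.0001, 0.0002],
})

futures_funding_rate = 0.00005  # configured fallback, as in the commit message

# Outer-join on date so every mark candle survives, then fill only the
# funding-rate column with the configured default (the mark columns stay untouched).
merged = mark.merge(funding, on="date", how="outer")
merged["open_fund"] = merged["open_fund"].fillna(futures_funding_rate)
print(merged)
```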
270
0
5
29
nni/retiarii/oneshot/pytorch/supermodule/_singlepathnas.py
112,157
Valuechoice oneshot lightning (#4602)
nni
16
Python
95
_singlepathnas.py
def generate_architecture_params(self): self.alpha = {} if self.kernel_size_candidates is not None: # kernel size arch params self.t_kernel = nn.Parameter(torch.rand(len(self.kernel_size_candidates) - 1)) self.alpha['kernel_size'] = self.t_kernel # kernel size mask self.kernel_masks = [] for i in range(0, len(self.kernel_size_candidates) - 1): big_size = self.kernel_size_candidates[i] small_size = self.kernel_size_candidates[i + 1] mask = torch.zeros_like(self.weight) mask[:, :, :big_size[0], :big_size[1]] = 1 # if self.weight.shape = (out, in, 7, 7), big_size = (5, 5) and mask[:, :, :small_size[0], :small_size[1]] = 0 # small_size = (3, 3), mask will look like: self.kernel_masks.append(mask) # 0 0 0 0 0 0 0 mask = torch.zeros_like(self.weight) # 0 1 1 1 1 1 0 mask[:, :, :self.kernel_size_candidates[-1][0], :self.kernel_size_candidates[-1][1]] = 1 # 0 1 0 0 0 1 0 self.kernel_masks.append(mask) # 0 1 0 0 0 1 0 # 0 1 0 0 0 1 0 if self.out_channel_candidates is not None: # 0 1 1 1 1 1 0 # out_channel (or expansion) arch params. we do not consider skip-op here, so we # 0 0 0 0 0 0 0 # only generate ``len(self.kernel_size_candidates) - 1 `` thresholds self.t_expansion = nn.Parameter(torch.rand(len(self.out_channel_candidates) - 1)) self.alpha['out_channels'] = self.t_expansion self.channel_masks = [] for i in range(0, len(self.out_channel_candidates) - 1): big_channel, small_channel = self.out_channel_candidates[i], self.out_channel_candidates[i + 1] mask = torch.zeros_like(self.weight) mask[:big_channel] = 1 mask[:small_channel] = 0 # if self.weight.shape = (32, in, W, H), big_channel = 16 and small_size = 8, mask will look like: # 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 self.channel_masks.append(mask) mask = torch.zeros_like(self.weight) mask[:self.out_channel_candidates[-1]] = 1 self.channel_masks.append(mask)
14d2966b9e91ae16dcc39de8f41017a75cec8ff9
345
https://github.com/microsoft/nni.git
728
def generate_architecture_params(self): self.alpha = {} if self.kernel_size_candidates is not None: # kernel size arch params self.t_kernel = nn.Parameter(torch.rand(len(self.kernel_size_candidates) - 1)) self.alpha['kernel_size'] = self.t_kernel # kernel size mask self.kernel_masks = [] for i in range(0, len(self.kernel_size_candidates) - 1): big_size = self.kernel_size_candidates[i] small_size = self.kernel_size_candidates[i + 1] mask = torch.zeros_like(self.weight) mask[:, :, :big_size[0], :big_size[1]] = 1 # if self.weight.shape = (out, in, 7, 7), big_size = (5, 5) and
24
549
generate_architecture_params
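The `generate_architecture_params` snippet above builds binary masks over a conv weight so that each mask keeps the region covered by one candidate kernel size but not the next smaller one. A standalone sketch of that mask construction with illustrative shapes, not NNI's actual module:

```python
import torch

# Illustrative shapes: an 8x4x7x7 conv weight with candidate kernel sizes 7 > 5 > 3.
weight = torch.randn(8, 4, 7, 7)
kernel_size_candidates = [(7, 7), (5, 5), (3, 3)]

masks = []
for i in range(len(kernel_size_candidates) - 1):
    big, small = kernel_size_candidates[i], kernel_size_candidates[i + 1]
    mask = torch.zeros_like(weight)
    mask[:, :, :big[0], :big[1]] = 1      # keep everything inside the bigger kernel...
    mask[:, :, :small[0], :small[1]] = 0  # ...except the part covered by the next smaller one
    masks.append(mask)

# The innermost candidate keeps its full (smallest) kernel region.
mask = torch.zeros_like(weight)
mask[:, :, :kernel_size_candidates[-1][0], :kernel_size_candidates[-1][1]] = 1
masks.append(mask)

# One (out_channel, in_channel) slice shows the difference-region structure of the first mask.
print(masks[0][0, 0])
```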
8
0
1
3
homeassistant/components/alarm_control_panel/__init__.py
290,730
Adjust type hints for AlarmControlPanelEntityFeature (#82186)
core
6
Python
8
__init__.py
def supported_features(self) -> AlarmControlPanelEntityFeature | int: return self._attr_supported_features
f952b74b74443d20c2ed200990e3040fee38aa9d
14
https://github.com/home-assistant/core.git
22
def supported_features(self) -> AlarmControlPanelEntityFeature | int: return sel
5
25
supported_features
6
0
1
2
keras/layers/preprocessing/image_preprocessing_test.py
273,091
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
keras
7
Python
6
image_preprocessing_test.py
def test_random_crop_output_shape(self, expected_height, expected_width): self._run_test(expected_height, expected_width)
84afc5193d38057e2e2badf9c889ea87d80d8fbf
17
https://github.com/keras-team/keras.git
12
def test_random_crop_output_shape(self, expected_height, expected_width): self._run_
5
25
test_random_crop_output_shape
17
0
2
6
.venv/lib/python3.8/site-packages/pip/_vendor/resolvelib/structs.py
63,721
upd; format
transferlearning
10
Python
15
structs.py
def add(self, key): if key in self._vertices: raise ValueError("vertex exists") self._vertices.add(key) self._forwards[key] = set() self._backwards[key] = set()
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
48
https://github.com/jindongwang/transferlearning.git
63
def add(self, key): if key in self._
8
81
add
15
0
1
9
airflow/migrations/versions/54bebd308c5f_add_trigger_table_and_task_info.py
45,460
Autogenerate migration reference doc (#21601) * document airflow version in each alembic migration module and use this to autogen the doc * update each migration module to have the same description used in migration ref (so it can be used in autogen)
airflow
11
Python
15
54bebd308c5f_add_trigger_table_and_task_info.py
def downgrade(): with op.batch_alter_table('task_instance', schema=None) as batch_op: batch_op.drop_constraint('task_instance_trigger_id_fkey', type_='foreignkey') batch_op.drop_index('ti_trigger_id') batch_op.drop_column('trigger_id') batch_op.drop_column('trigger_timeout') batch_op.drop_column('next_method') batch_op.drop_column('next_kwargs') op.drop_table('trigger')
69f6f9e01b6df76c3c8fa266d460324163957887
65
https://github.com/apache/airflow.git
66
def downgrade(): with op.batch_alter_table('task_instance', schema=None) as batch_op: batch_op.drop_constraint('task_instance_trigger_id_fkey', type_='foreign
10
129
downgrade
12
0
1
8
modin/pandas/resample.py
154,885
REFACTOR-#5038: Remove unnecessary `_method` argument from resamplers (#5039) Signed-off-by: Vasily Litvinov <fam1ly.n4me@yandex.ru>
modin
11
Python
10
resample.py
def sem(self, *args, **kwargs): return self._dataframe.__constructor__( query_compiler=self._query_compiler.resample_sem( self.resample_kwargs, *args, **kwargs, ) )
c89f8ba6aaa575ed44f381ad838c8e39050bc102
38
https://github.com/modin-project/modin.git
92
def sem(self, *args, **kwargs): return self._dataframe.__constructor__( query_compiler=self._query_compiler.resample_s
10
57
sem
179
0
12
35
src/datasets/packaged_modules/text/text.py
104,188
Run pyupgrade for Python 3.6+ (#3560) * Run pyupgrade for Python 3.6+ * Fix lint issues * Revert changes for the datasets code Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
datasets
25
Python
84
text.py
def _generate_tables(self, files): schema = pa.schema(self.config.features.type if self.config.features is not None else {"text": pa.string()}) for file_idx, file in enumerate(files): batch_idx = 0 with open(file, encoding=self.config.encoding) as f: if self.config.sample_by == "line": batch_idx = 0 while True: batch = f.read(self.config.chunksize) if not batch: break batch += f.readline() # finish current line batch = batch.splitlines(keepends=self.config.keep_linebreaks) pa_table = pa.Table.from_arrays([pa.array(batch)], schema=schema) # Uncomment for debugging (will print the Arrow table size and elements) # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}") # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows))) yield (file_idx, batch_idx), pa_table batch_idx += 1 elif self.config.sample_by == "paragraph": batch_idx = 0 batch = "" while True: batch += f.read(self.config.chunksize) if not batch: break batch += f.readline() # finish current line batch = batch.split("\n\n") pa_table = pa.Table.from_arrays( [pa.array([example for example in batch[:-1] if example])], schema=schema ) # Uncomment for debugging (will print the Arrow table size and elements) # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}") # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows))) yield (file_idx, batch_idx), pa_table batch_idx += 1 batch = batch[-1] elif self.config.sample_by == "document": text = f.read() pa_table = pa.Table.from_arrays([pa.array([text])], schema=schema) yield file_idx, pa_table
21bfd0d3f5ff3fbfd691600e2c7071a167816cdf
299
https://github.com/huggingface/datasets.git
1,000
def _generate_tables(self, files): schema = pa.schema(self.config.features.type if self.config.features is not None else {"text": pa.string()}) for file_idx, file in enumerate(files): batch_idx = 0 with open(file, encoding=self.config.encoding) as f: if self.config.sample_by == "line": batch_idx = 0 while True: batch = f.read(self.config.chunksize) if not batch: break batch += f.readline() # finish current line batch = batch.splitlines(keepends=self.config.keep_linebreaks) pa_table = pa.Table.from_arrays([pa.array(batch)], schema=schema) # Uncomment for debugging (will print the Arrow table size and elements) # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}") # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows))) yield (file_idx, batch_idx), pa_table batch_idx += 1 elif self.config.sample_by == "paragraph": batch_idx = 0 batch = "" while True: batch += f.read(self.config.chunksize) if not batch: break batch += f.readline() # finish current line batch = batch.split("\n\n") pa_table = pa.Table.from_arrays( [pa.array([example for example in batch[:-1] if example])], schema=schema ) # Uncomment for debugging (will print the Arrow table size and elements) # logger.warning(f"pa_table: {pa_table} num rows
31
494
_generate_tables
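The `_generate_tables` snippet above reads a file in fixed-size chunks and, in `sample_by="paragraph"` mode, splits on blank lines while carrying the trailing partial paragraph into the next chunk. A pure-Python sketch of that chunking logic, without the pyarrow table construction; the chunk size and sample text are illustrative.

```python
import io


def iter_paragraph_batches(f, chunksize: int = 64):
    """Yield lists of complete paragraphs per chunk; carry the trailing partial one over."""
    carry = ""
    while True:
        block = f.read(chunksize)
        if not block:
            break
        carry += block
        carry += f.readline()  # finish the current line, as in the snippet above
        parts = carry.split("\n\n")
        complete, carry = parts[:-1], parts[-1]
        batch = [p for p in complete if p]
        if batch:
            yield batch
    if carry.strip():
        yield [carry]  # last (possibly partial) paragraph


sample = "para one line a\npara one line b\n\npara two\n\npara three"
for batch in iter_paragraph_batches(io.StringIO(sample), chunksize=10):
    print(batch)
```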
27
1
2
8
tests/fsdp/test_fsdp.py
337,956
enhancements and fixes for FSDP and DeepSpeed (#532) * checkpointing enhancements and fixes for FSDP and DeepSpeed * resolving comments 1. Adding deprecation args and warnings in launcher for FSDP 2. Handling old configs to work with new launcher args wrt FSDP. 3. Reverting changes to public methods in `checkpointing.py` and handling it in `Accelerator` 4. Explicitly writing the defaults of various FSDP options in `dataclasses` for readability. * fixes 1. FSDP wrapped model being added to the `_models`. 2. Not passing the env variables when args are None. * resolving comments * adding FSDP for all the collective operations * adding deepspeed and fsdp tests 1. Removes mrpc datafiles and directly relies on HF datasets as it was throwing `file not found` error when running from within `tests` folder. Updating `moke_dataloaders` as a result. 2. adding `test_performance.py`, `test_memory.py` and `test_checkpointing.py` for multi-gpu FSDP and DeepSpeed tests * reverting `mocked_dataloader` changes * adding FSDP tests * data files revert * excluding fsdp tests from `tests_core` * try 2 * adding a time delay to prevent `torchrun` from crashing at times, which was causing flaky behaviour * reducing the time of tests * fixes * fix * fixes and reduce time further * reduce time further and minor fixes * adding a deepspeed basic e2e test for single gpu setup
accelerate
14
Python
25
test_fsdp.py
def test_cpu_offload(self): from torch.distributed.fsdp.fully_sharded_data_parallel import CPUOffload for flag in [True, False]: env = self.dist_env.copy() env["FSDP_OFFLOAD_PARAMS"] = str(flag).lower() with mockenv_context(**env): fsdp_plugin = FullyShardedDataParallelPlugin() self.assertEqual(fsdp_plugin.cpu_offload, CPUOffload(offload_params=flag)) @require_fsdp @require_multi_gpu @slow
0c6bdc2c237ac071be99ac6f93ddfbc8bbcb8441
@require_fsdp @require_multi_gpu @slow
73
https://github.com/huggingface/accelerate.git
100
def test_cpu_offload(self): from torch.distributed.fsdp.fully_sharded_data_parallel import CPUOffload for flag in [True, False]: env = self.dist_env.copy() env["FSDP_OFFLOAD_PARAMS"] = str(flag).lower() with mockenv_context(**env): fsdp_plugin = FullyShardedDataParallelPlugin() self.assertEqual(fsdp_plugin.cpu_o
22
128
test_cpu_offload
18
0
2
6
d2l/paddle.py
157,784
[Paddle] Add chapter chapter_linear-networks (#1134) * [Paddletest] Add chapter3 chapter_linear-networks * [Paddle] Add chapter chapter_linear-networks * [Paddle] Add chapter chapter_linear-networks * [Paddle] Add chapter3 linear-networks * [Paddle] Add chapter3 linear-networks * [Paddle] Add chapter3 linear-networks * [Paddle] Add chapter3 linear-networks * Convert tensor to to_tensor * [Paddle] Add chapter_preface * Fix get_dataloader_workers mac & windows * Remove redundant list/tuple unpacking * Minor style fixes * sync lib * Add stop gradient explanation * remove blank content * Update softmax-regression-scratch.md * Fix the sgd bugs Co-authored-by: Anirudh Dagar <anirudhdagar6@gmail.com> Co-authored-by: w5688414 <w5688414@gmail.com>
d2l-zh
14
Python
18
paddle.py
def sgd(params, lr, batch_size): with paddle.no_grad(): for i,param in enumerate(params): param -= lr * params[i].grad/ batch_size params[i].set_value(param.numpy()) params[i].clear_gradient()
e292b514b8f4873a36c8ca0ba68b19db2ee8ba44
60
https://github.com/d2l-ai/d2l-zh.git
64
def sgd(params, lr, batch_size): with paddle.no_grad(): for i,param in enumerate(params): param -= lr * params[i].grad/ batch_size params[i].set_va
13
98
sgd
25
0
2
6
python/ray/tests/kubernetes_e2e/test_helm.py
131,119
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
ray
13
Python
25
test_helm.py
def delete_rayclusters(namespace): cmd = f"kubectl -n {namespace} delete rayclusters --all" try: subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT).decode() except subprocess.CalledProcessError as e: assert False, "returncode: {}, stdout: {}".format(e.returncode, e.stdout)
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
53
https://github.com/ray-project/ray.git
47
def delete_rayclusters(namespace): cmd = f"kubectl -n {namespace} delete rayclusters --all" try: subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT).decode() except subprocess.CalledProcessError as e: assert False, "returncode: {}, stdout: {}".format(e.returncode, e.stdout)
14
89
delete_rayclusters
21
0
1
7
tests/models/test_mappedoperator.py
42,678
Move MappedOperator tests to mirror code location (#23884) At some point during the development of AIP-42 we moved the code for MappedOperator out of baseoperator.py to mappedoperator.py, but we didn't move the tests at the same time
airflow
14
Python
17
test_mappedoperator.py
def test_map_xcom_arg(): with DAG("test-dag", start_date=DEFAULT_DATE): task1 = BaseOperator(task_id="op1") mapped = MockOperator.partial(task_id='task_2').expand(arg2=XComArg(task1)) finish = MockOperator(task_id="finish") mapped >> finish assert task1.downstream_list == [mapped]
70b41e46b46e65c0446a40ab91624cb2291a5039
63
https://github.com/apache/airflow.git
58
def test_map_xcom_arg(): with DAG("test-dag", start_date=DEFAULT_DATE): task1 = BaseOperator(task_id="op1") mapped = MockOperator.partial(task_id='task_2').expand(arg2=XComArg(task1)) finish = MockOperator(task_id="finish") mapped >
15
112
test_map_xcom_arg
24
0
1
13
wagtail/admin/views/generic/models.py
79,450
Extract generic RevisionsUnscheduleView and make page's unpublish view extend from it
wagtail
11
Python
23
models.py
def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context.update( { "object": self.object, "revision": self.revision, "subtitle": self.get_page_subtitle(), "object_display_title": self.get_object_display_title(), "revisions_unschedule_url": self.get_revisions_unschedule_url(), "next_url": self.get_next_url(), } ) return context
ae0603001638e6b03556aef19bdcfa445f9f74c6
72
https://github.com/wagtail/wagtail.git
163
def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context.update( { "object": self.object, "revision": self.revision, "subtitle": self.get_page_subtitle(), "object_display_title": self.get_object_display_title(), "revisions_unsched
12
123
get_context_data
70
0
4
21
pandas/tests/extension/base/groupby.py
166,423
DEPR: groupby numeric_only default (#47025)
pandas
14
Python
49
groupby.py
def test_in_numeric_groupby(self, data_for_grouping): df = pd.DataFrame( { "A": [1, 1, 2, 2, 3, 3, 1, 4], "B": data_for_grouping, "C": [1, 1, 1, 1, 1, 1, 1, 1], } ) dtype = data_for_grouping.dtype if is_numeric_dtype(dtype) or dtype.name == "decimal": warn = None else: warn = FutureWarning msg = "The default value of numeric_only" with tm.assert_produces_warning(warn, match=msg): result = df.groupby("A").sum().columns if data_for_grouping.dtype._is_numeric: expected = pd.Index(["B", "C"]) else: expected = pd.Index(["C"]) tm.assert_index_equal(result, expected)
7c054d6a256fd0186befe03acf9e9e86d81668d6
153
https://github.com/pandas-dev/pandas.git
261
def test_in_numeric_groupby(self, data_for_grouping): df = pd.DataFrame( { "A": [1, 1, 2, 2, 3, 3, 1, 4], "B": data_for_grouping, "C": [1, 1, 1, 1, 1, 1, 1, 1], } ) dtype = data_for_grouping.dtype if is_numeric_dtype(dtype) or dtype.name == "decimal": warn = None else: warn = FutureWarning msg = "The default value of numeric_only" with tm.assert_produces_warning(warn, match=msg): result = df.groupby("A").sum().columns if data_for_grouping.dtype._is_numeric: expected = pd.Index(["B", "C"]
23
243
test_in_numeric_groupby
100
0
9
20
networkx/generators/classic.py
176,625
Adjust the usage of nodes_or_number decorator (#5599) * recorrect typo in decorators.py * Update tests to show troubles in current code * fix troubles with usage of nodes_or_number * fix typo * remove nodes_or_number where that makes sense * Reinclude nodes_or_numbers and add some tests for nonstandard usage * fix typo * hopefully final tweaks (no behavior changes) * Update test_classic.py Co-authored-by: Jarrod Millman <jarrod.millman@gmail.com>
networkx
13
Python
67
classic.py
def lollipop_graph(m, n, create_using=None): m, m_nodes = m M = len(m_nodes) if M < 2: raise NetworkXError("Invalid description: m should indicate at least 2 nodes") n, n_nodes = n if isinstance(m, numbers.Integral) and isinstance(n, numbers.Integral): n_nodes = list(range(M, M + n)) N = len(n_nodes) # the ball G = complete_graph(m_nodes, create_using) if G.is_directed(): raise NetworkXError("Directed Graph not supported") # the stick G.add_nodes_from(n_nodes) if N > 1: G.add_edges_from(pairwise(n_nodes)) if len(G) != M + N: raise NetworkXError("Nodes must be distinct in containers m and n") # connect ball to stick if M > 0 and N > 0: G.add_edge(m_nodes[-1], n_nodes[0]) return G
de1d00f20e0bc14f1cc911b3486e50225a8fa168
157
https://github.com/networkx/networkx.git
193
def lollipop_graph(m, n, create_using=None): m, m_nodes = m M = len(m_nodes) if M < 2: raise NetworkXError("Invalid description: m should indicate a
22
257
lollipop_graph
82
0
1
41
tests/sentry/api/endpoints/test_organization_metric_details.py
91,777
Revert "feat(metrics): make indexer more configurable (#35604)" (#35862) This reverts commit 7f60db924ea37f34e0cfe6856777239e2a2ffe13.
sentry
16
Python
66
test_organization_metric_details.py
def test_same_entity_multiple_metric_ids(self, mocked_derived_metrics): mocked_derived_metrics.return_value = MOCKED_DERIVED_METRICS_2 org_id = self.project.organization.id metric_id = indexer.record(org_id, "metric_foo_doe") self.store_session( self.build_session( project_id=self.project.id, started=(time.time() // 60) * 60, status="ok", release="foobar@2.0", errors=2, ) ) self._send_buckets( [ { "org_id": org_id, "project_id": self.project.id, "metric_id": metric_id, "timestamp": (time.time() // 60 - 2) * 60, "tags": { resolve_weak(org_id, "release"): indexer.record(org_id, "fooww"), }, "type": "c", "value": 5, "retention_days": 90, }, ], entity="metrics_counters", ) response = self.get_success_response( self.organization.slug, "derived_metric.multiple_metrics", ) assert response.data == { "name": "derived_metric.multiple_metrics", "type": "numeric", "operations": [], "unit": "percentage", "tags": [{"key": "release"}], }
3ffb14a47d868956ef759a0cd837066629676774
193
https://github.com/getsentry/sentry.git
597
def test_same_entity_multiple_metric_ids(self, mocked_derived_metrics): mocked_derived_metrics.return_value = MOCKED_DERIVED_METRICS_2 org_id = self.project.organization.id metric_id = indexer.record(org_id, "metric_foo_doe") self.store_session( self.build_session( project_id=self.project.id, started=(time.time() // 60) * 60, status="ok", release="foobar@2.0", errors=2, ) ) self._send_buckets( [ { "org_id": org_id, "project_id": self.project.id, "metric_id": metric_id, "timestamp": (time.time() // 60 - 2) * 60, "tags": { resolve_weak(org_id, "release"): indexer.record(org_id, "fooww"), }, "type": "c", "value": 5, "retention_days": 90, }, ], entity="metrics_counters", ) response = self.get_success_response( self.organization.slug, "derive
27
348
test_same_entity_multiple_metric_ids
35
0
1
5
networkx/drawing/nx_pydot.py
176,575
improve docstring for read_doc, see issue #5604 (#5605)
networkx
8
Python
33
nx_pydot.py
def read_dot(path): import pydot data = path.read() # List of one or more "pydot.Dot" instances deserialized from this file. P_list = pydot.graph_from_dot_data(data) # Convert only the first such instance into a NetworkX graph. return from_pydot(P_list[0])
4f2b1b854d5934a487b428f252ad6ff9375d74ad
31
https://github.com/networkx/networkx.git
56
def read_dot(path): import pydot data = path.read() # List of one or more "pydot.Dot" instances deserialized from this file. P_list = pydot.graph_from_dot_d
8
56
read_dot
18
0
1
4
python/ray/tune/schedulers/pb2_utils.py
132,258
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
ray
12
Python
15
pb2_utils.py
def normalize(data, wrt): return (data - np.min(wrt, axis=0)) / ( np.max(wrt, axis=0) - np.min(wrt, axis=0) + 1e-8 )
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
51
https://github.com/ray-project/ray.git
34
def normalize(data, wrt): return (data - np.min(wrt, axis=0)) / ( np.ma
7
75
normalize
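The normalize helper in the record above performs column-wise min-max scaling of data against a reference array wrt, with a small epsilon guarding against division by zero. A minimal standalone sketch of the same idea follows; the function and variable names (minmax_scale, reference, new_points) are chosen for illustration and are not taken from the Ray source.

import numpy as np

def minmax_scale(data, wrt):
    # Scale each column of `data` into roughly [0, 1] using the column-wise
    # minimum and maximum of the reference array `wrt`; the 1e-8 term avoids
    # division by zero when a column of `wrt` is constant.
    lo = np.min(wrt, axis=0)
    hi = np.max(wrt, axis=0)
    return (data - lo) / (hi - lo + 1e-8)

# Example: scale new observations against a reference batch.
reference = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
new_points = np.array([[2.5, 15.0]])
print(minmax_scale(new_points, reference))  # approximately [[0.25, 0.25]]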
15
0
1
6
tests/db_functions/math/test_sign.py
202,757
Refs #33476 -- Reformatted code with Black.
django
13
Python
14
test_sign.py
def test_transform(self): with register_lookup(DecimalField, Sign): DecimalModel.objects.create(n1=Decimal("5.4"), n2=Decimal("0")) DecimalModel.objects.create(n1=Decimal("-0.1"), n2=Decimal("0")) obj = DecimalModel.objects.filter(n1__sign__lt=0, n2__sign=0).get() self.assertEqual(obj.n1, Decimal("-0.1"))
9c19aff7c7561e3a82978a272ecdaad40dda5c00
86
https://github.com/django/django.git
65
def test_transform(self): with register_lookup(DecimalField, Sign): DecimalModel.objects.create(n1=Decimal("5.4"), n2=Decimal("0")) DecimalModel.objects.create(n1=Decimal("-0.1"), n2=Decimal("0")) obj = DecimalModel.objects.filter(n1__sign__lt=0
17
146
test_transform
57
1
1
2
tests/test_suggestions.py
183,173
[css] Address "did you mean" PR feedback
textual
10
Python
34
test_suggestions.py
def test_get_suggestion(word, possible_words, expected_result): assert get_suggestion(word, possible_words) == expected_result @pytest.mark.parametrize( "word, possible_words, count, expected_result", ( ["background", ("background",), 1, ["background"]], ["backgroundu", ("background",), 1, ["background"]], ["bkgrund", ("background",), 1, ["background"]], ["llow", ("background",), 1, []], ["llow", ("background", "yellow"), 1, ["yellow"]], ["yllow", ("background", "yellow", "ellow"), 1, ["yellow"]], ["yllow", ("background", "yellow", "ellow"), 2, ["yellow", "ellow"]], ["yllow", ("background", "yellow", "red"), 2, ["yellow"]], ), )
26f138e69be49f33fe7ff72cebbb51d617a6338f
@pytest.mark.parametrize( "word, possible_words, count, expected_result", ( ["background", ("background",), 1, ["background"]], ["backgroundu", ("background",), 1, ["background"]], ["bkgrund", ("background",), 1, ["background"]], ["llow", ("background",), 1, []], ["llow", ("background", "yellow"), 1, ["yellow"]], ["yllow", ("background", "yellow", "ellow"), 1, ["yellow"]], ["yllow", ("background", "yellow", "ellow"), 2, ["yellow", "ellow"]], ["yllow", ("background", "yellow", "red"), 2, ["yellow"]], ), )
18
https://github.com/Textualize/textual.git
122
def test_get_suggestion(word, possible_words, expected_result): assert get_suggestion(word, possible_words) == expected_result @pytest.mark.parametrize( "word, possible_words, count, expected_result", ( ["background", ("background",), 1, ["background"]], ["backgroundu", ("background",), 1, ["background"]
8
265
test_get_suggestion
100
0
6
20
python/ccxt/binance.py
17,330
1.71.93 [ci skip]
ccxt
12
Python
69
binance.py
def set_margin_mode(self, marginType, symbol=None, params={}): # # {"code": -4048 , "msg": "Margin type cannot be changed if there exists position."} # # or # # {"code": 200, "msg": "success"} # marginType = marginType.upper() if marginType == 'CROSS': marginType = 'CROSSED' if (marginType != 'ISOLATED') and (marginType != 'CROSSED'): raise BadRequest(self.id + ' marginType must be either isolated or cross') self.load_markets() market = self.market(symbol) method = None if market['linear']: method = 'fapiPrivatePostMarginType' elif market['inverse']: method = 'dapiPrivatePostMarginType' else: raise NotSupported(self.id + ' setMarginMode() supports linear and inverse contracts only') request = { 'symbol': market['id'], 'marginType': marginType, } return getattr(self, method)(self.extend(request, params))
2d2673c42db3abc79e52ec83b050f12ca1a90fc5
130
https://github.com/ccxt/ccxt.git
309
def set_margin_mode(self, marginType, symbol=None, params={}): # # {"code": -4048 , "msg": "Margin type cannot be changed if there exists position."} # # or # # {"code": 200, "msg": "success"} # marginType = marginType.upper() if marginType == 'CROSS': marginType = 'CROSSED' if (marginType != 'ISOLATED') and (marginType != 'CROSSED'): raise BadRequest(self.id + ' marginType must be either isolated or cross') self.load_markets() market = self.market(symbol) method = None if market['linear']: method = 'fapiPrivatePostMarginType' elif market['inverse']: method = 'dapiPrivatePostMarginType' else: raise NotSupported(self.id + ' setMarginMode() supports linear and inverse contracts only') request = { 'symbol': market['id'], 'marginType': marginType, } return getat
15
234
set_margin_mode
47
0
2
9
pandas/tests/series/indexing/test_setitem.py
170,204
STYLE: fix some consider-using-enumerate pylint warnings (#49214) * STYLE: fix some consider-using-enumerate pylint errors * fixup! STYLE: fix some consider-using-enumerate pylint errors * fixup! fixup! STYLE: fix some consider-using-enumerate pylint errors * fixup! fixup! fixup! STYLE: fix some consider-using-enumerate pylint errors * fixup! fixup! fixup! fixup! STYLE: fix some consider-using-enumerate pylint errors
pandas
12
Python
41
test_setitem.py
def test_setitem_scalar_into_readonly_backing_data(): # GH#14359: test that you cannot mutate a read only buffer array = np.zeros(5) array.flags.writeable = False # make the array immutable series = Series(array) for n in series.index: msg = "assignment destination is read-only" with pytest.raises(ValueError, match=msg): series[n] = 1 assert array[n] == 0
93bd1a8ece37657e887808b1492d3715e25e8bd3
60
https://github.com/pandas-dev/pandas.git
94
def test_setitem_scalar_into_readonly_backing_data(): # GH#14359: test that you cannot mutate a read only buffer array = np.zeros(5) array.flags.writeable = False # make the array immutable series = Series(array) for n in series.index: msg = "assignment destination is read-only" with pytest.raises(ValueError, match=msg): series
15
100
test_setitem_scalar_into_readonly_backing_data
17
0
2
4
.venv/lib/python3.8/site-packages/pip/_internal/utils/urls.py
61,324
upd; format
transferlearning
11
Python
16
urls.py
def get_url_scheme(url): # type: (str) -> Optional[str] if ":" not in url: return None return url.split(":", 1)[0].lower()
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
29
https://github.com/jindongwang/transferlearning.git
32
def get_url_scheme(url): # type: (str) -> Optional[str] if ":" not in url: return None
4
50
get_url_scheme
306
0
3
41
python/ccxt/async_support/coinbase.py
15,210
1.66.55 [ci skip]
ccxt
11
Python
145
coinbase.py
def parse_transaction(self, transaction, market=None): # # fiat deposit # # { # "id": "f34c19f3-b730-5e3d-9f72", # "status": "completed", # "payment_method": { # "id": "a022b31d-f9c7-5043-98f2", # "resource": "payment_method", # "resource_path": "/v2/payment-methods/a022b31d-f9c7-5043-98f2" # }, # "transaction": { # "id": "04ed4113-3732-5b0c-af86-b1d2146977d0", # "resource": "transaction", # "resource_path": "/v2/accounts/91cd2d36-3a91-55b6-a5d4-0124cf105483/transactions/04ed4113-3732-5b0c-af86" # }, # "user_reference": "2VTYTH", # "created_at": "2017-02-09T07:01:18Z", # "updated_at": "2017-02-09T07:01:26Z", # "resource": "deposit", # "resource_path": "/v2/accounts/91cd2d36-3a91-55b6-a5d4-0124cf105483/deposits/f34c19f3-b730-5e3d-9f72", # "committed": True, # "payout_at": "2017-02-12T07:01:17Z", # "instant": False, # "fee": {"amount": "0.00", "currency": "EUR"}, # "amount": {"amount": "114.02", "currency": "EUR"}, # "subtotal": {"amount": "114.02", "currency": "EUR"}, # "hold_until": null, # "hold_days": 0, # "hold_business_days": 0, # "next_step": null # } # # fiat_withdrawal # # { # "id": "cfcc3b4a-eeb6-5e8c-8058", # "status": "completed", # "payment_method": { # "id": "8b94cfa4-f7fd-5a12-a76a", # "resource": "payment_method", # "resource_path": "/v2/payment-methods/8b94cfa4-f7fd-5a12-a76a" # }, # "transaction": { # "id": "fcc2550b-5104-5f83-a444", # "resource": "transaction", # "resource_path": "/v2/accounts/91cd2d36-3a91-55b6-a5d4-0124cf105483/transactions/fcc2550b-5104-5f83-a444" # }, # "user_reference": "MEUGK", # "created_at": "2018-07-26T08:55:12Z", # "updated_at": "2018-07-26T08:58:18Z", # "resource": "withdrawal", # "resource_path": "/v2/accounts/91cd2d36-3a91-55b6-a5d4-0124cf105483/withdrawals/cfcc3b4a-eeb6-5e8c-8058", # "committed": True, # "payout_at": "2018-07-31T08:55:12Z", # "instant": False, # "fee": {"amount": "0.15", "currency": "EUR"}, # "amount": {"amount": "13130.69", "currency": "EUR"}, # "subtotal": {"amount": "13130.84", "currency": "EUR"}, # "idem": "e549dee5-63ed-4e79-8a96", # "next_step": null # } # subtotalObject = self.safe_value(transaction, 'subtotal', {}) feeObject = self.safe_value(transaction, 'fee', {}) id = self.safe_string(transaction, 'id') timestamp = self.parse8601(self.safe_value(transaction, 'created_at')) updated = self.parse8601(self.safe_value(transaction, 'updated_at')) type = self.safe_string(transaction, 'resource') amount = self.safe_number(subtotalObject, 'amount') currencyId = self.safe_string(subtotalObject, 'currency') currency = self.safe_currency_code(currencyId) feeCost = self.safe_number(feeObject, 'amount') feeCurrencyId = self.safe_string(feeObject, 'currency') feeCurrency = self.safe_currency_code(feeCurrencyId) fee = { 'cost': feeCost, 'currency': feeCurrency, } status = self.parse_transaction_status(self.safe_string(transaction, 'status')) if status is None: committed = self.safe_value(transaction, 'committed') status = 'ok' if committed else 'pending' return { 'info': transaction, 'id': id, 'txid': id, 'timestamp': timestamp, 'datetime': self.iso8601(timestamp), 'network': None, 'address': None, 'addressTo': None, 'addressFrom': None, 'tag': None, 'tagTo': None, 'tagFrom': None, 'type': type, 'amount': amount, 'currency': currency, 'status': status, 'updated': updated, 'fee': fee, }
8543cfb54ecfae0f51f4b77a8df7b38aa0626094
272
https://github.com/ccxt/ccxt.git
1,594
def parse_transaction(self, transaction, market=None): # # fiat deposit # # { # "id": "f34c19f3-b730-5e3d-9f72", # "status": "completed", # "payment_method": { # "id": "a022b31d-f9c7-5043-98f2", # "resource": "payment_method", # "resource_path": "/v2/payment-methods/a022b31d-f9c7-5043-98f2" # }, # "transaction": { # "id": "04ed4113-3732-5b0c-af86-b1d2146977d0", # "resource": "transaction", # "resource_path": "/v2/accounts/91cd2d36-3a91-55b6-a5d4-0124cf105483/transactions/04ed4113-3732-5b0c-
26
523
parse_transaction
95
0
4
21
jaxlib/gpu_prng.py
122,719
Raise error for unsupported shape polymorphism for custom call and fallback lowering
jax
12
Python
75
gpu_prng.py
def _threefry2x32_lowering(prng, platform, keys, data): assert len(keys) == 2, keys assert len(data) == 2, data assert (ir.RankedTensorType(keys[0].type).element_type == ir.IntegerType.get_unsigned(32)), keys[0].type typ = keys[0].type dims = ir.RankedTensorType(typ).shape if any(d < 0 for d in dims): raise NotImplementedError("Shape polymorphism for custom call is not implemented (threefry); b/261671778") for x in itertools.chain(keys, data): assert x.type == typ, (x.type, typ) ndims = len(dims) opaque = prng.threefry2x32_descriptor(_prod(dims)) layout = tuple(range(ndims - 1, -1, -1)) return custom_call( f"{platform}_threefry2x32", [typ, typ], [keys[0], keys[1], data[0], data[1]], backend_config=opaque, operand_layouts=[layout] * 4, result_layouts=[layout] * 2) cuda_threefry2x32 = partial(_threefry2x32_lowering, _cuda_prng, "cu") rocm_threefry2x32 = partial(_threefry2x32_lowering, _hip_prng, "hip")
ac7740513d0b47894d9170af6aaa6b9355fb2059
211
https://github.com/google/jax.git
150
def _threefry2x32_lowering(prng, platform, keys, data): assert len(keys) == 2, keys assert len(data) == 2, data assert (ir.RankedTensorType(keys[0].type).element_type == ir.IntegerType.get_unsigned(32)), keys[0].type typ = keys[0].type dims = ir.RankedTensorType(typ).shape if any(d < 0 for d in dims): raise NotImplementedError("Shape polymorphism for custom call is not implemented (threefry); b/261671778") for x in itertools.chain(keys, data): assert x.type == typ, (x.type, typ) ndims = len(dims) opaque = prng.threefry2x32_descriptor(_prod(dims)) layout = tuple(range(ndims - 1, -1, -1)) return custom_call( f"{platform}_threefry2x32", [typ, typ], [keys[0], keys[1], data[0], data[1]], backend_config=opaque, op
37
345
_threefry2x32_lowering
26
0
3
8
erpnext/hr/report/vehicle_expenses/vehicle_expenses.py
66,275
style: format code with black
erpnext
12
Python
24
vehicle_expenses.py
def get_period_dates(filters): if filters.filter_based_on == "Fiscal Year" and filters.fiscal_year: fy = frappe.db.get_value( "Fiscal Year", filters.fiscal_year, ["year_start_date", "year_end_date"], as_dict=True ) return fy.year_start_date, fy.year_end_date else: return filters.from_date, filters.to_date
494bd9ef78313436f0424b918f200dab8fc7c20b
58
https://github.com/frappe/erpnext.git
18
def get_period_dates(filters): if filters.filter_based_on == "Fiscal Year" and filters.fiscal_year: fy = frappe.db.get_value(
13
95
get_period_dates
17
0
2
4
.venv/lib/python3.8/site-packages/pip/_internal/index/collector.py
60,723
upd; format
transferlearning
11
Python
17
collector.py
def _ensure_html_header(response): # type: (Response) -> None content_type = response.headers.get("Content-Type", "") if not content_type.lower().startswith("text/html"): raise _NotHTML(content_type, response.request.method)
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
42
https://github.com/jindongwang/transferlearning.git
36
def _ensure_html_header(response): # type: (Response) -> None content_type = response.headers.get("Content-Type", "") if not content_type.lower().startswith("text/html"): raise _NotHTML(content_type, response.request.method)
10
76
_ensure_html_header
31
0
3
13
sympy/core/expr.py
198,470
Code cleanup
sympy
13
Python
24
expr.py
def _parse_order(cls, order): from sympy.polys.orderings import monomial_key startswith = getattr(order, "startswith", None) if startswith is None: reverse = False else: reverse = startswith('rev-') if reverse: order = order[4:] monom_key = monomial_key(order)
9d58006fc0a23afcba38f641c9472917c436428a
66
https://github.com/sympy/sympy.git
121
def _parse_order(cls, order): from sympy.polys.orderings import monomial_key startswith = getattr(order, "startswith", None) if startswith is None: reverse = False else: reverse = startswith('rev-') if reverse: order = order[4:] monom_key = mono
11
97
_parse_order
114
1
9
20
awx/main/tasks/jobs.py
81,573
Replace git shallow clone with shutil.copytree Introduce build_project_dir method the base method will create an empty project dir for workdir Share code between job and inventory tasks with new mixin combine rest of pre_run_hook logic structure to hold lock for entire sync process force sync to run for inventory updates due to UI issues Remove reference to removed scm_last_revision field
awx
19
Python
85
jobs.py
def sync_and_copy(self, project, private_data_dir, scm_branch=None): self.acquire_lock(project, self.instance.id) try: original_branch = None project_path = project.get_project_path(check_if_exists=False) if project.scm_type == 'git' and (scm_branch and scm_branch != project.scm_branch): if os.path.exists(project_path): git_repo = git.Repo(project_path) if git_repo.head.is_detached: original_branch = git_repo.head.commit else: original_branch = git_repo.active_branch return self.sync_and_copy_without_lock(project, private_data_dir, scm_branch=scm_branch) finally: # We have made the copy so we can set the tree back to its normal state if original_branch: # for git project syncs, non-default branches can be problems # restore to branch the repo was on before this run try: original_branch.checkout() except Exception: # this could have failed due to dirty tree, but difficult to predict all cases logger.exception(f'Failed to restore project repo to prior state after {self.instance.id}') self.release_lock(project) @task(queue=get_local_queuename)
46be2d9e5b4423f316d6fae4a080d36716622c15
@task(queue=get_local_queuename)
137
https://github.com/ansible/awx.git
445
def sync_and_copy(self, project, private_data_dir, scm_branch=None): self.acquire_lock(project, self.instance.id) try: original_branch = None project_path = project.get_project_path(check_if_exists=False) if project.scm_type == 'git' and (scm_branch and scm_branch != project.scm_branch): if os.path.exists(project_path): git_repo = git.Repo(project_path) if git_repo.head.is_detached: original_branch = git_repo.head.commit else: original_branch = git_repo.active_branch return self.sync_and_copy_without_lock(project, private_data_dir, scm_branch=scm_branch) finally: # We have made the copy so we can set the tree back to its normal state if original_branch: # for git project syncs, non-default branches can be problems # restore to branch the repo w
32
245
sync_and_copy
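The commit message in this record describes replacing a git shallow clone with shutil.copytree and holding a per-project lock for the whole sync-and-copy step. A minimal sketch of that general pattern is below, under stated assumptions: copy_project and _locks are hypothetical names for illustration and do not reproduce AWX's own locking or project-directory helpers.

import shutil
import threading
from collections import defaultdict

# Hypothetical per-project locks; AWX uses its own acquire_lock/release_lock helpers.
_locks = defaultdict(threading.Lock)

def copy_project(project_name, source_dir, dest_dir):
    # Hold the project lock for the whole copy so concurrent jobs do not read
    # the tree while it is being synced, then copy the working tree with
    # copytree instead of performing a fresh (shallow) git clone.
    with _locks[project_name]:
        shutil.copytree(source_dir, dest_dir, symlinks=True, dirs_exist_ok=True)
    return dest_dir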
32
0
1
2
src/calibre/gui2/preferences/create_custom_column.py
188,799
More CreateNewCustomColumn stuff. - Improved documentation - Check column headings for duplicates - Method to return the current column headings as a dict - Improved exception handling
calibre
8
Python
28
create_custom_column.py
def current_columns(self): # deepcopy to prevent users from changing it. The new MappingProxyType # isn't enough because only the top-level dict is immutable, not the # items in the dict. return copy.deepcopy(self.custcols)
9a95d8b0c26bdaea17ea9264ab45e8a81b6422f0
15
https://github.com/kovidgoyal/calibre.git
67
def current_columns(self): # deepcopy to prevent users from changing it. The new MappingProxyType # isn't enough because only the top
5
30
current_columns
56
0
3
29
tests/orion/models/test_orm.py
55,461
Fix some timestamp sensitive tests on windows
prefect
23
Python
48
test_orm.py
async def many_task_run_states(flow_run, session, db): # clear all other task runs await session.execute(sa.delete(db.TaskRun)) await session.execute(sa.delete(db.TaskRunState)) for i in range(5): task_run = await models.task_runs.create_task_run( session=session, task_run=schemas.actions.TaskRunCreate( flow_run_id=flow_run.id, task_key="test-task", dynamic_key=str(i), ), ) states = [ db.TaskRunState( task_run_id=task_run.id, **schemas.states.State( type={ 0: schemas.states.StateType.PENDING, 1: schemas.states.StateType.RUNNING, 2: schemas.states.StateType.COMPLETED, }[i], timestamp=pendulum.now("UTC").add(minutes=i), ).dict(), ) for i in range(3) ] task_run.set_state(states[-1]) session.add_all(states) await session.commit()
5c08c3ed69793298f86f1484f149951ee2a0847f
198
https://github.com/PrefectHQ/prefect.git
398
async def many_task_run_states(flow_run, session, db): # clear all other task runs await session.execute(sa.delete(db.TaskRun)) await session.execute(sa.delete(db.TaskRunState)) for i in range(5): task_run = await models.task_runs.create_task_run( session=session, task_run=schemas.actions.TaskRunCreate( flow_run_id=flow_run.id, task_key="test-task", dynamic_key=str(i), ), ) states = [ db.TaskRunState( task_run_id=task_run.id, **schemas.states.State( type={ 0: schemas.states.StateType.PENDING, 1: schemas.states.StateType.RUNNING,
40
309
many_task_run_states
63
0
2
17
homeassistant/components/filesize/sensor.py
296,538
Fix file size last_updated (#70114) Co-authored-by: J. Nick Koston <nick@koston.org>
core
12
Python
53
sensor.py
async def _async_update_data(self) -> dict[str, float | int | datetime]: try: statinfo = os.stat(self._path) except OSError as error: raise UpdateFailed(f"Can not retrieve file statistics {error}") from error size = statinfo.st_size last_updated = datetime.utcfromtimestamp(statinfo.st_mtime).replace( tzinfo=dt_util.UTC ) _LOGGER.debug("size %s, last updated %s", size, last_updated) data: dict[str, int | float | datetime] = { "file": round(size / 1e6, 2), "bytes": size, "last_updated": last_updated, } return data
23446fa1c0a8579ae314151651b6973af600df09
112
https://github.com/home-assistant/core.git
199
async def _async_update_data(self) -> dict[str, float | int | datetime]: try: statinfo = os.stat(self._path) except OSError as error: raise UpdateFailed(f"Can not retrieve file statistics {error}") from error size = statinfo.st_size last_updated = datetime.utcfromtimestamp(statinfo.st_mtime).replace( tzinfo=dt_util.UTC ) _LOGGER.debug("size %s, last updated %s", size, last_updated) data: dict[str, int | float | datetime] = { "file": round(size /
27
184
_async_update_data
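The _async_update_data coroutine in this record stats a file and reports its size (in megabytes and bytes) together with the last-modified time as a UTC datetime. A small synchronous sketch of the same computation follows; file_stats is an illustrative name and not part of the Home Assistant integration.

import os
from datetime import datetime, timezone

def file_stats(path):
    # Return the file size in megabytes and bytes plus the last-modified
    # time as a timezone-aware UTC datetime, mirroring the fields built in
    # the record above.
    info = os.stat(path)
    last_updated = datetime.fromtimestamp(info.st_mtime, tz=timezone.utc)
    return {
        "file": round(info.st_size / 1e6, 2),
        "bytes": info.st_size,
        "last_updated": last_updated,
    }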
52
0
7
16
homeassistant/components/hdmi_cec/media_player.py
307,749
Enforce MediaPlayerState in hdmi_cec media player (#78522)
core
12
Python
33
media_player.py
def update(self) -> None: device = self._device if device.power_status in [POWER_OFF, 3]: self._attr_state = MediaPlayerState.OFF elif not self.support_pause: if device.power_status in [POWER_ON, 4]: self._attr_state = MediaPlayerState.ON elif device.status == STATUS_PLAY: self._attr_state = MediaPlayerState.PLAYING elif device.status == STATUS_STOP: self._attr_state = MediaPlayerState.IDLE elif device.status == STATUS_STILL: self._attr_state = MediaPlayerState.PAUSED else: _LOGGER.warning("Unknown state: %s", device.status)
b29605060a74c441550708ccf4ace4b697f66ae6
109
https://github.com/home-assistant/core.git
189
def update(self) -> None: device = self._device if device.power_status in [POWER_OFF, 3]: self._attr_state = MediaPlayerState.OFF elif not self.support_pause: if device.power_status in [POWER_ON, 4]: self._attr_state = MediaPlayerState.
21
175
update
72
0
1
26
tests/sentry/utils/suspect_resolutions/test_commit_correlation.py
94,571
ref(suspect-resolutions): refactor code around (#37775) * refactor code * fix metric correlation test
sentry
10
Python
52
test_commit_correlation.py
def test_get_files_changed_no_shared_files(self): (project, issue, release, repo) = self.setup() Activity.objects.create( project=project, group=issue, type=ActivityType.SET_RESOLVED_IN_COMMIT.value ) release2 = self.create_release() issue2 = self.create_group() commit2 = Commit.objects.create( organization_id=project.organization_id, repository_id=repo.id, key="2" ) ReleaseCommit.objects.create( organization_id=project.organization_id, release=release2, commit=commit2, order=1 ) CommitFileChange.objects.create( organization_id=project.organization_id, commit=commit2, filename=".gitignore" ) GroupRelease.objects.create( project_id=project.id, group_id=issue2.id, release_id=release2.id ) res1 = get_files_changed_in_releases(issue.id, project.id) res2 = get_files_changed_in_releases(issue2.id, project.id) assert res1.files_changed == {".random", ".random2"} assert res2.files_changed == {".gitignore"} assert res1.release_ids == [release.id] assert res2.release_ids == [release2.id] assert not is_issue_commit_correlated(issue.id, issue2.id, project.id).is_correlated
b25bf3d4efa751232673b1e9d2a07ee439994348
228
https://github.com/getsentry/sentry.git
266
def test_get_files_changed_no_shared_files(self): (project, issue, release, repo) = self.setup() Activity.objects.create( project=project, group=issue, type=ActivityType.SET_RESOLVED_IN_COMMIT.value ) release2 = self.create_release() issue2 = self.create_group() commit2 = Commit.objects.create( organization_id=project.organization_id, repository_id=repo.id, key="2" ) ReleaseCommit.objects.create( organization_id=project.organization_id, release=release2, commit=commit2, order=1 ) CommitFileChange.objects.create( organization_id=project.organization_id, commit=commit2, filename=".gitignore" ) GroupRelease.objects.create( project_id=project.id, group_id=issue2.id, release_id=release2.id ) res1 = get_files_changed_in_releases(issue.id, project.id) res2 = get_files_changed_in_releases(issue2.id, project.id) assert res1.files_changed == {".random", ".random2"} assert res2.files_changed == {".gitignore"} assert res1.release_ids == [release.id] assert res2.release_ids == [release2.i
41
347
test_get_files_changed_no_shared_files
95
0
3
25
lib/mpl_toolkits/tests/test_axisartist_axislines.py
107,077
Expire axes_grid1/axisartist deprecations.
matplotlib
14
Python
70
test_axisartist_axislines.py
def test_ParasiteAxesAuxTrans(): # Remove this line when this test image is regenerated. plt.rcParams['pcolormesh.snap'] = False data = np.ones((6, 6)) data[2, 2] = 2 data[0, :] = 0 data[-2, :] = 0 data[:, 0] = 0 data[:, -2] = 0 x = np.arange(6) y = np.arange(6) xx, yy = np.meshgrid(x, y) funcnames = ['pcolor', 'pcolormesh', 'contourf'] fig = plt.figure() for i, name in enumerate(funcnames): ax1 = SubplotHost(fig, 1, 3, i+1) fig.add_subplot(ax1) ax2 = ParasiteAxes(ax1, IdentityTransform()) ax1.parasites.append(ax2) if name.startswith('pcolor'): getattr(ax2, name)(xx, yy, data[:-1, :-1]) else: getattr(ax2, name)(xx, yy, data) ax1.set_xlim((0, 5)) ax1.set_ylim((0, 5)) ax2.contour(xx, yy, data, colors='k')
7749b7b153219738dcf30f0acbad310a2550aa19
237
https://github.com/matplotlib/matplotlib.git
217
def test_ParasiteAxesAuxTrans(): # Remove this line when this test image is regenerated. plt.rcParams['pcolormesh.snap'] = False data = np.ones((6, 6)) data[2, 2] = 2 data[0, :] = 0 data[-2, :] = 0 data[:, 0] = 0 data[:, -2] = 0 x = np.arange(6) y = np.arange(6) xx, yy = np.meshgrid(x
32
371
test_ParasiteAxesAuxTrans
98
0
4
12
qutebrowser/mainwindow/tabwidget.py
321,278
Run scripts/dev/rewrite_enums.py
qutebrowser
12
Python
70
tabwidget.py
def subElementRect(self, sr, opt, widget=None): if sr == QStyle.SubElement.SE_TabBarTabText: layouts = self._tab_layout(opt) if layouts is None: log.misc.warning("Could not get layouts for tab!") return QRect() return layouts.text elif sr in [QStyle.SubElement.SE_TabWidgetTabBar, QStyle.SubElement.SE_TabBarScrollLeftButton]: # Handling SE_TabBarScrollLeftButton so the left scroll button is # aligned properly. Otherwise, empty space will be shown after the # last tab even though the button width is set to 0 # # Need to use super() because we also use super() to render # element in drawControl(); otherwise, we may get bit by # style differences... return super().subElementRect(sr, opt, widget) else: return self._style.subElementRect(sr, opt, widget)
0877fb0d78635692e481c8bde224fac5ad0dd430
97
https://github.com/qutebrowser/qutebrowser.git
307
def subElementRect(self, sr, opt, widget=None): if sr == QStyle.SubElement.SE_TabBarTabText: layouts = self._tab_layout(opt) if layouts is None: log.misc.warning("Could not get layouts for tab!") return QR
19
158
subElementRect
44
0
3
11
lib/matplotlib/tests/test_axes.py
110,506
MNT: when clearing an Axes via clear/cla fully detach children Reset the Axes and Figure of the children to None to help break cycles. Closes #6982
matplotlib
10
Python
24
test_axes.py
def test_cla_clears_chlidren_axes_and_fig(): fig, ax = plt.subplots() lines = ax.plot([], [], [], []) img = ax.imshow([[1]]) for art in lines + [img]: assert art.axes is ax assert art.figure is fig ax.clear() for art in lines + [img]: assert art.axes is None assert art.figure is None
ffcc8d314c8a47772ba541027f138ee18155d7e6
90
https://github.com/matplotlib/matplotlib.git
89
def test_cla_clears_chlidren_axes_and_fig(): fig, ax = plt.subplots()
13
140
test_cla_clears_chlidren_axes_and_fig
9
0
1
170
sympy/crypto/crypto.py
196,747
Fix a misspelling
sympy
8
Python
9
crypto.py
def rsa_public_key(*args, **kwargs): r return _rsa_key(*args, public=True, private=False, **kwargs)
2110dbe01539e03ef8634deac4c40f895da38daa
28
https://github.com/sympy/sympy.git
14
def rsa_public_key(*args, **kwargs): r return _rsa_key(*args, public=True, private=False, **kwargs)
6
43
rsa_public_key
55
0
5
18
homeassistant/components/zerproc/light.py
318,171
Use attributes in zerproc light (#75951)
core
14
Python
38
light.py
async def async_update(self) -> None: try: if not self.available: await self._light.connect() state = await self._light.get_state() except pyzerproc.ZerprocException: if self.available: _LOGGER.warning("Unable to connect to %s", self._light.address) self._attr_available = False return if not self.available: _LOGGER.info("Reconnected to %s", self._light.address) self._attr_available = True self._attr_is_on = state.is_on hsv = color_util.color_RGB_to_hsv(*state.color) self._attr_hs_color = hsv[:2] self._attr_brightness = int(round((hsv[2] / 100) * 255))
90458ee200d6d9e6fe7458fec3021d904e365c13
132
https://github.com/home-assistant/core.git
218
async def async_update(self) -> None: try: if not self.available: await self._light.connect() state = await self._light.get_state() except pyzerproc.Zerproc
24
220
async_update
37
0
3
13
yt_dlp/extractor/awaan.py
162,364
[cleanup] Use format_field where applicable
yt-dlp
12
Python
34
awaan.py
def _parse_video_data(self, video_data, video_id, is_live): title = video_data.get('title_en') or video_data['title_ar'] img = video_data.get('img') return { 'id': video_id, 'title': title, 'description': video_data.get('description_en') or video_data.get('description_ar'), 'thumbnail': format_field(img, template='http://admin.mangomolo.com/analytics/%s'), 'duration': int_or_none(video_data.get('duration')), 'timestamp': parse_iso8601(video_data.get('create_time'), ' '), 'is_live': is_live, 'uploader_id': video_data.get('user_id'), }
e0ddbd02bd1c365b95bb88eaa6e4e0238faf35eb
109
https://github.com/yt-dlp/yt-dlp.git
152
def _parse_video_data(self, video_data, video_id, is_live): title = video_data.get('title_en') or video_data['title_ar'] img = video_data.get('img') return { 'id': video_id, 'title': title, 'description': video_data.get('description_en') or video_data.get('description_ar'), 'thumbnail': format_field(img, template='http://admin.mangomolo.com/analytics/%s'), 'duration': int_or_none(video_data.get('duration')), 'timestamp': parse_iso8601(video_data.get('create_time'), ' '), 'is_liv
12
194
_parse_video_data
16
0
2
4
dev/breeze/src/airflow_breeze/utils/selective_checks.py
43,511
Convert selective checks to Breeze Python (#24610) Instead of bash-based, complex logic script to perform PR selective checks we now integrated the whole logic into Breeze Python code. It is now much simplified, when it comes to algorithm. We've implemented simple rule-based decision tree. The rules describing the decision tree are now are now much easier to reason about and they correspond one-to-one with the rules that are implemented in the code in rather straightforward way. The code is much simpler and diagnostics of the selective checks has also been vastly improved: * The rule engine displays status of applying each rule and explains (with yellow warning message what decision was made and why. Informative messages are printed showing the resulting output * List of files impacting the decision are also displayed * The names of "ci file group" and "test type" were aligned * Unit tests covering wide range of cases are added. Each test describes what is the case they demonstrate * `breeze selective-checks` command that is used in CI can also be used locally by just providing commit-ish reference of the commit to check. This way you can very easily debug problems and fix them Fixes: #19971
airflow
12
Python
16
selective_checks.py
def upgrade_to_newer_dependencies(self) -> bool: return len( self._matching_files(FileGroupForCi.SETUP_FILES, CI_FILE_GROUP_MATCHES) ) > 0 or self._github_event in [GithubEvents.PUSH, GithubEvents.SCHEDULE]
d7bd72f494e7debec11672eeddf2e6ba5ef75fac
37
https://github.com/apache/airflow.git
40
def upgrade_to_newer_dependencies(self) -> bool: return len( self._matching_files(FileGroupForCi.SETUP_FILES, CI_FILE_GROUP_MATCHES) ) > 0 or self._github_event in [GithubEvents.PUSH, GithubEvents.SCHEDULE]
12
56
upgrade_to_newer_dependencies
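The upgrade_to_newer_dependencies property in this record returns True when any changed file matches the setup-files group or when the triggering GitHub event is a push or schedule. A minimal standalone sketch of that rule is below; the pattern list and function name are assumptions for illustration and do not reproduce Breeze's actual FileGroupForCi configuration.

import fnmatch

# Hypothetical patterns standing in for Breeze's "setup files" group.
SETUP_FILE_PATTERNS = ["setup.py", "setup.cfg", "Dockerfile*"]

def should_upgrade_dependencies(changed_files, github_event):
    # Upgrade to newer dependencies when any changed file belongs to the
    # setup-files group, or when the workflow runs on a push/schedule event.
    matches_setup = any(
        fnmatch.fnmatch(f, pattern)
        for f in changed_files
        for pattern in SETUP_FILE_PATTERNS
    )
    return matches_setup or github_event in ("push", "schedule")

print(should_upgrade_dependencies(["setup.cfg", "README.md"], "pull_request"))  # True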