Dataset columns, with dtype and value/length range:

feature          type           min      max
id               int64          20       338k
vocab_size       int64          2        671
ast_levels       int64          4        32
nloc             int64          1        451
n_ast_nodes      int64          12       5.6k
n_identifiers    int64          1        186
n_ast_errors     int64          0        10
n_words          int64          2        2.17k
n_whitespaces    int64          2        13.8k
fun_name         stringlengths  2        73
commit_message   stringlengths  51       15.3k
url              stringlengths  31       59
code             stringlengths  51       31k
ast_errors       stringlengths  0        1.46k
token_counts     int64          6        3.32k
file_name        stringlengths  5        56
language         stringclasses  1 value (Python)
path             stringlengths  7        134
commit_id        stringlengths  40       40
repo             stringlengths  3        28
complexity       int64          1        153
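The columns above describe per-function code metrics alongside the raw source, commit metadata, and repository information. As a rough illustration only (not part of the original listing), a snippet like the following could load and filter such a dataset, assuming it is published on the Hugging Face Hub; the dataset identifier and split name below are placeholders:

```python
# Hypothetical sketch: load a dataset with the columns listed above and keep
# only simple, parse-clean functions. "org/commit-functions" is a placeholder id.
from datasets import load_dataset

ds = load_dataset("org/commit-functions", split="train")  # placeholder identifier

# Filter on the numeric columns described in the schema.
simple = ds.filter(lambda ex: ex["complexity"] <= 5 and ex["n_ast_errors"] == 0)

for example in simple.select(range(3)):
    print(example["repo"], example["path"], example["fun_name"])
    print(example["code"][:200])  # the raw source is stored as a single string
```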
250,426
14
9
5
68
7
0
15
43
test_no_order_last
Add missing type hints to tests.handlers. (#14680) And do not allow untyped defs in tests.handlers.
https://github.com/matrix-org/synapse.git
def test_no_order_last(self) -> None:
    ev1 = _create_event("!abc:test")
    ev2 = _create_event("!xyz:test", "xyz")
    self.assertEqual([ev2, ev1], _order(ev1, ev2))
39
test_room_summary.py
Python
tests/handlers/test_room_summary.py
652d1669c5a103b1c20478770c4aaf18849c09a3
synapse
1
183,499
26
8
3
31
5
0
27
55
_get_time
[App] Finally, time mocking in tests seems to be working! 😅 I had to add a flag in the `_timer` module that allows us to completely disable the "skip" feature of Timers, though - but it shouldn't cause too much trouble 🤞
https://github.com/Textualize/textual.git
def _get_time(self) -> float:
    # N.B. We could remove this method and always call `self._timer.get_time()` internally,
    # but it's handy to have in mocking situations
    return self._timer.get_time()
16
_animator.py
Python
src/textual/_animator.py
15df75919744fbea824bbf029cfb56029a3d0dc8
textual
1
197,178
37
14
43
120
10
0
48
127
variations
Reduce "yield from" overhead in iterables.variations() Same treatment as was recently applied to iterables.subsets(). All paths through iterables.variations() still return a generator object, but variations() is no longer itself a generator function. This avoids the overhead of one level of yield from. This change can save as much as 30% of the time spent in this function. (Very sensitive to Python version.)
https://github.com/sympy/sympy.git
def variations(seq, n, repetition=False):
    r
    if not repetition:
        seq = tuple(seq)
        if len(seq) < n:
            return (val for val in [])  # 0 length generator
        return permutations(seq, n)
    else:
        if n == 0:
            return (val for val in [()])  # yields 1 empty tuple
        else:
            return product(seq, repeat=n)
76
iterables.py
Python
sympy/utilities/iterables.py
f8ee5f4f6410ea7130fdb3080680248ac0667d8f
sympy
6
87,190
42
12
23
237
16
0
53
330
test_get_dynamic_sampling_after_migrating_to_new_plan_manually_set_biases
feat(ds): Support new DS behaviour in project_details endpoint (#40387) Supports new adaptive dynamic sampling behaviour alongside the deprecated dynamic sampling behaviour and achieves that through feature flag differentiation This PR achieve that through the following: - Introducing a new `DynamicSamplingBiasSerializer` which is composed of id representing the bias name and a boolean flag indicating whether that particular flag is active or not - Modifies current existing behavior for both old sampling flag and new sampling flag. Essentially the new setup entails that to be on the old dynamic sampling, the following flags need to be enabled "organizations:server-side-sampling" and "organizations:server-side-sampling-ui", and to be on the new dynamic sampling configurations, you need the following flags to be enabled "organizations:dynamic-sampling-basic" and "organizations:server-side-sampling" P.S. 1: These flags will be replaced "organizations:server-side-sampling-ui" -> "organizations:dynamic-sampling-deprecated" "organizations:server-side-sampling-basic" -> "organizations:dynamic-sampling" Hence, these feature flags need to be updated once this PR lands https://github.com/getsentry/sentry/pull/40388 P.S. 2: If a project is on the new plan and the old plan, the new plan takes precedence - Introduces default biases that are enabled by default and can be overwritten. The motivation to do this is to be able to add new biases that are enabled by default, and both the GET and PUT request honor this list - `GET` and `POST` endpoint does a dictionary update of user's stored biases on the default biases that are hardcoded, and returns them to the UI/ relay. This means that the introduced project option "sentry:dynamic_sampling_biases" might not have all the toggles enabled/disabled through the UI but only the ones that a customer chose to modify Followup: - This new feature flag behaviour needs to be reflected in ProjectConfig computations
https://github.com/getsentry/sentry.git
def test_get_dynamic_sampling_after_migrating_to_new_plan_manually_set_biases(self):
    self.project.update_option("sentry:dynamic_sampling", self.dynamic_sampling_data)
    new_biases = [{"id": "boostEnvironments", "active": False}]
    self.project.update_option("sentry:dynamic_sampling_biases", new_biases)
    with Feature(
        {
            self.universal_ds_flag: True,
            self.old_ds_flag: True,
            self.new_ds_flag: True,
        }
    ):
        response = self.get_success_response(
            self.organization.slug, self.project.slug, method="get"
        )
        assert response.data["dynamicSampling"] is None
        assert response.data["dynamicSamplingBiases"] == [
            {"id": "boostEnvironments", "active": False},
            {
                "id": "boostLatestRelease",
                "active": True,
            },
            {"id": "ignoreHealthChecks", "active": True},
        ]
138
test_project_details.py
Python
tests/sentry/api/endpoints/test_project_details.py
5462ee11ad11ebb9a50323befcd286816d7898c8
sentry
1
171,566
9
9
64
31
4
0
9
24
unique
STYLE: Enable Pylint useless-parent-delegation warning (#49773) * Enable Pylint useless-parent-delegation warning and remove superfluous overriding methods * Remove annotations that caused errors. Remove @skip_nested annotations that cause test failures and replace them with statements that suppress the useless-parent-delegation warning. * Remove tests that can fall back to the base class * Move a comment to ensure that pre-commit passes
https://github.com/pandas-dev/pandas.git
def unique(self) -> ArrayLike:  # pylint: disable=useless-parent-delegation
    return super().unique()
16
series.py
Python
pandas/core/series.py
005486fa6c2f065e25df3c244a2bd53abe80ffb3
pandas
1
20,520
17
11
6
87
13
0
21
43
doctype_matches
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def doctype_matches(text, regex):
    m = doctype_lookup_re.search(text)
    if m is None:
        return False
    doctype = m.group(1)
    return re.compile(regex, re.I).match(doctype.strip()) is not None
54
util.py
Python
pipenv/patched/notpip/_vendor/pygments/util.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
249,499
20
10
13
99
12
0
22
126
test_success_urlencoded
Add an admin API endpoint to find a user based on its external ID in an auth provider. (#13810)
https://github.com/matrix-org/synapse.git
def test_success_urlencoded(self) -> None:
    url = "/_synapse/admin/v1/auth_providers/another-auth-provider/users/a%3Acomplex%40external%2Fid"

    channel = self.make_request(
        "GET",
        url,
        access_token=self.admin_user_tok,
    )

    self.assertEqual(200, channel.code, msg=channel.json_body)
    self.assertEqual(
        {"user_id": self.other_user},
        channel.json_body,
    )
61
test_user.py
Python
tests/rest/admin/test_user.py
74f60cec92c5aff87d6e74d177e95ec5f1a69f2b
synapse
1
309,521
21
10
7
95
13
0
24
89
turn_on
Cleanup ADS constants and add type hints (#63390) Co-authored-by: epenet <epenet@users.noreply.github.com>
https://github.com/home-assistant/core.git
def turn_on(self, **kwargs):
    brightness = kwargs.get(ATTR_BRIGHTNESS)
    self._ads_hub.write_by_name(self._ads_var, True, pyads.PLCTYPE_BOOL)

    if self._ads_var_brightness is not None and brightness is not None:
        self._ads_hub.write_by_name(
            self._ads_var_brightness, brightness, pyads.PLCTYPE_UINT
        )
62
light.py
Python
homeassistant/components/ads/light.py
0042bb68d97fb3cdebb6ad82500a578b1c0b647f
core
3
39,239
21
15
10
153
22
0
25
107
compute_cooccurrence_matrix
Remove drop_duplicates() from SAR method fix #1464 (#1588) * Remove drop_duplicates() from SAR method fix #1464 * flake is complaining * Typos * Define self.unity_user_affinity inside __init__() * Remove drop_duplicates() from SAR method * Remove duplicates in testing data * Remove duplicates in test data for recommend_k_items * Allow duplicates in score data Co-authored-by: miguelgfierro <miguelgfierro@users.noreply.github.com> Co-authored-by: Andreas Argyriou <anargyri@users.noreply.github.com> Co-authored-by: Simon Zhao <43029286+simonzhaoms@users.noreply.github.com>
https://github.com/microsoft/recommenders.git
def compute_cooccurrence_matrix(self, df):
    user_item_hits = sparse.coo_matrix(
        (np.repeat(1, df.shape[0]), (df[self.col_user_id], df[self.col_item_id])),
        shape=(self.n_users, self.n_items),
    ).tocsr()

    item_cooccurrence = user_item_hits.transpose().dot(user_item_hits)
    item_cooccurrence = item_cooccurrence.multiply(
        item_cooccurrence >= self.threshold
    )

    return item_cooccurrence.astype(df[self.col_rating].dtype)
101
sar_singlenode.py
Python
recommenders/models/sar/sar_singlenode.py
96b5053fa688bec79a729f9ea238e5f916bced01
recommenders
1
136,002
6
12
3
45
8
0
6
20
_remote_workers
[RLlib] Refactor `WorkerSet` on top of `FaultTolerantActorManager`. (#29938) Signed-off-by: Jun Gong <jungong@anyscale.com>
https://github.com/ray-project/ray.git
def _remote_workers(self) -> List[ActorHandle]:
    return list(self.__worker_manager.actors().values())
26
worker_set.py
Python
rllib/evaluation/worker_set.py
e707ce4fb3717e3c05118c57f503dfbd03552ca9
ray
1
125,003
27
12
8
124
15
0
30
90
test_dataset_reader_itr_batches
[RLlib] improved unittests for dataset_reader and fixed bugs (#26458)
https://github.com/ray-project/ray.git
def test_dataset_reader_itr_batches(self):
    input_config = {"format": "json", "paths": self.dset_path}
    dataset, _ = get_dataset_and_shards(
        {"input": "dataset", "input_config": input_config}
    )

    ioctx = IOContext(config={"train_batch_size": 1200}, worker_index=0)
    reader = DatasetReader(ioctx, dataset)
    assert len(reader.next()) >= 1200
70
test_dataset_reader.py
Python
rllib/offline/tests/test_dataset_reader.py
569fe0109629048d08e1d9e023f7769f10bd2244
ray
1
19,220
20
11
9
88
10
0
21
60
find_diff
add diff style check test (#617) * add diff style check test * add diff style check test * add diff style check test * add diff style check test * add license * add license
https://github.com/AtsushiSakai/PythonRobotics.git
def find_diff(sha):
    files = ['*.py']
    res = subprocess.run(
        ['git', 'diff', '--unified=0', sha, '--'] + files,
        stdout=subprocess.PIPE,
        encoding='utf-8'
    )
    res.check_returncode()
    return res.stdout
50
test_diff_codestyle.py
Python
tests/test_diff_codestyle.py
0dfa274be3eaddb270b2bcee197f7d34acbc1363
PythonRobotics
1
199,311
27
11
15
103
11
0
32
59
make_authors_file_lines
mailmap documents python2 I believe this to be useful for at least two reasons. Current documentation either state to use `python bin/mailmap_check.py` or just `bin/mailmap_check.py` which itself calles `python`. In both case, on ubuntu this will run python2. Worse, instead of printing "This script requires Python 3.8 or newer" it prints an error message that python don't know what to do with the f-string. If the f-string is removed, that it does not know pathlib. So, I ensure that the script still work as before on 3.8 and prints a more useful message on python2. I admit I'm not fan of removing a f-string, however, since there was a single f-string, I assume it was relatively acceptable. I suppose most sympy contributor are at ease with those subtilities of python2/3. However, this will at least be useful for people, like me, who only wanted to contribute to improving documentation and not have to deal with complexity of system administration.
https://github.com/sympy/sympy.git
def make_authors_file_lines(git_people):
    # define new lines for the file
    header = filldedent().lstrip()
    header_extra = "There are a total of %d authors." % len(git_people)
    lines = header.splitlines()
    lines.append('')
    lines.append(header_extra)
    lines.append('')
    lines.extend(git_people)
    return lines
56
mailmap_check.py
Python
bin/mailmap_check.py
e8bf22b0eb76ecb6aec12dd45549649c490e1354
sympy
1
293,723
6
11
3
35
4
0
6
31
dispose
Use a dedicated executor pool for database operations (#68105) Co-authored-by: Erik Montnemery <erik@montnemery.com> Co-authored-by: Franck Nijhof <git@frenck.dev>
https://github.com/home-assistant/core.git
def dispose(self):
    if self.recorder_or_dbworker:
        return super().dispose()
19
pool.py
Python
homeassistant/components/recorder/pool.py
bc862e97ed68cce8c437327651f85892787e755e
core
2
208,110
41
13
17
309
26
0
61
188
test_group_stamping_with_replaced_group
Canvas Header Stamping (#7384) * Strip down the header-stamping PR to the basics. * Serialize groups. * Add groups to result backend meta data. * Fix spelling mistake. * Revert changes to canvas.py * Revert changes to app/base.py * Add stamping implementation to canvas.py * Send task to AMQP with groups. * Successfully pass single group to result. * _freeze_gid dict merge fixed * First draft of the visitor API. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * OptionsVisitor created * Fixed canvas.py * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added test for simple test for chord and fixed chord implementation * Changed _IMMUTABLE_OPTIONS * Fixed chord interface * Fixed chord interface * Fixed chord interface * Fixed chord interface * Fixed list order * Fixed tests (stamp test and chord test), fixed order in groups * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed lint and elements * Changed implementation of stamp API and fix lint * Added documentation to Stamping API. Added chord with groups test * Implemented stamping inside replace and added test for an implementation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added test additonal tests for chord, improved coverage * Added test additonal tests for chord, improved coverage * Added test additonal tests for chord, improved coverage * Splitted into subtests * Group stamping rollback * group.id is None fixed * Added integration test * Added integration test * apply_async fixed * Integration test and test_chord fixed * Lint fixed * chord freeze fixed * Minor fixes. * Chain apply_async fixed and tests fixed * lint fixed * Added integration test for chord * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * type -> isinstance * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Redo header stamping (#7341) * _freeze_gid dict merge fixed * OptionsVisitor created * Fixed canvas.py * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added test for simple test for chord and fixed chord implementation * Changed _IMMUTABLE_OPTIONS * Fixed chord interface * Fixed chord interface * Fixed chord interface * Fixed chord interface * Fixed list order * Fixed tests (stamp test and chord test), fixed order in groups * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed lint and elements * Changed implementation of stamp API and fix lint * Added documentation to Stamping API. Added chord with groups test * Implemented stamping inside replace and added test for an implementation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added test additonal tests for chord, improved coverage * Added test additonal tests for chord, improved coverage * Added test additonal tests for chord, improved coverage * Splitted into subtests * Group stamping rollback * group.id is None fixed * Added integration test * Added integration test * apply_async fixed * Integration test and test_chord fixed * Lint fixed * chord freeze fixed * Minor fixes. 
* Chain apply_async fixed and tests fixed * lint fixed * Added integration test for chord * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * type -> isinstance * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Omer Katz <omer.katz@omerkatz.com> * Added stamping mechanism * Manual stamping improved * flake8 fixed * Added subtests * Add comma. * Moved groups to stamps * Fixed chord and added test for that * Strip down the header-stamping PR to the basics. * Serialize groups. * Add groups to result backend meta data. * Fix spelling mistake. * Revert changes to canvas.py * Revert changes to app/base.py * Add stamping implementation to canvas.py * Send task to AMQP with groups. * Successfully pass single group to result. * _freeze_gid dict merge fixed * First draft of the visitor API. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * OptionsVisitor created * Fixed canvas.py * Added test for simple test for chord and fixed chord implementation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Changed _IMMUTABLE_OPTIONS * Fixed chord interface * Fixed chord interface * Fixed chord interface * Fixed chord interface * Fixed list order * Fixed tests (stamp test and chord test), fixed order in groups * Fixed lint and elements * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Changed implementation of stamp API and fix lint * Added documentation to Stamping API. Added chord with groups test * Implemented stamping inside replace and added test for an implementation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added test additonal tests for chord, improved coverage * Added test additonal tests for chord, improved coverage * Added test additonal tests for chord, improved coverage * Splitted into subtests * Group stamping rollback * group.id is None fixed * Added integration test * Added integration test * apply_async fixed * Integration test and test_chord fixed * Lint fixed * chord freeze fixed * Minor fixes. * Chain apply_async fixed and tests fixed * lint fixed * Added integration test for chord * type -> isinstance * Added stamping mechanism * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Manual stamping improved * fail_ci_if_error uncommented * flake8 fixed * Added subtests * Changes * Add comma. * Fixed chord and added test for that * canvas.py fixed * Test chord.py fixed * Fixed stamped_headers * collections import fixed * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * collections import fixed * Update celery/backends/base.py Co-authored-by: Omer Katz <omer.katz@omerkatz.com> * ampq.py fixed * Refrain from using deprecated import path. * Fix test_complex_chain regression. Whenever we stamp a group we need to freeze it first if it wasn't already frozen. Somewhere along the line, the group id changed because we were freezing twice. This commit places the stamping operation after preparing the chain's steps which fixes the problem somehow. We don't know why yet. 
* Fixed integration tests * Fixed integration tests * Fixed integration tests * Fixed integration tests * Fixed issues with maybe_list. Add documentation * Fixed potential issue with integration tests * Fixed issues with _regen * Fixed issues with _regen * Fixed test_generator issues * Fixed _regen stamping * Fixed _regen stamping * Fixed TimeOut issue * Fixed TimeOut issue * Fixed TimeOut issue * Update docs/userguide/canvas.rst Co-authored-by: Omer Katz <omer.katz@omerkatz.com> * Fixed Couchbase * Better stamping intro * New GroupVisitor example * Adjust documentation. Co-authored-by: Naomi Elstein <naomi.els@omerkatz.com> Co-authored-by: Omer Katz <omer.katz@omerkatz.com> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Asif Saif Uddin <auvipy@gmail.com> Co-authored-by: Omer Katz <omer.katz@kcg.tech>
https://github.com/celery/celery.git
def test_group_stamping_with_replaced_group(self, subtests):
    self.app.conf.task_always_eager = True
    self.app.conf.task_store_eager_result = True
    self.app.conf.result_extended = True

    nested_g = self.replace_with_group.s(8)
    nested_g_res = nested_g.freeze()
    sig_1 = self.add.s(2, 2)
    sig_2 = self.add.s(2, 2) | nested_g
    sig_1_res = sig_1.freeze()
    sig_2_res = sig_2.freeze()
    g = group(sig_1, sig_2, app=self.app)
    g_res = g.freeze()
    g.apply()

    with subtests.test("sig_1_res is stamped", groups=[g_res.id]):
        assert sig_1_res._get_task_meta()['groups'] == [g_res.id]

    with subtests.test("sig_2_res is stamped", groups=nested_g_res._get_task_meta()['groups']):
        assert sig_2_res._get_task_meta()['groups'] == nested_g_res._get_task_meta()['groups']
186
test_canvas.py
Python
t/unit/tasks/test_canvas.py
1c4ff33bd22cf94e297bd6449a06b5a30c2c1fbc
celery
1
195,033
4
6
2
16
2
0
4
18
is_prebuilt
Add is_prebuilt method in base huggingface class (#4508)
https://github.com/facebookresearch/ParlAI.git
def is_prebuilt(self):
    return True
8
dict.py
Python
parlai/agents/hugging_face/dict.py
137afdde991f2cda824d42c8f295eead5b43e773
ParlAI
1
60,255
8
8
3
39
6
0
8
29
set_raw_scale
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def set_raw_scale(self, in_, scale):
    self.__check_input(in_)
    self.raw_scale[in_] = scale
24
io.py
Python
code/deep/BJMMD/caffe/python/caffe/io.py
cc4d0564756ca067516f71718a3d135996525909
transferlearning
1
277,073
4
12
2
42
7
0
4
10
isbuiltin
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def isbuiltin(obj):
    return _inspect.isbuiltin(tf.__internal__.decorator.unwrap(obj)[1])
25
tf_inspect.py
Python
keras/utils/tf_inspect.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
299,595
31
10
14
99
8
0
41
195
_update_effect
Add entity id to template error logging (#71107) * Add entity id to template error logging * Increase coverage
https://github.com/home-assistant/core.git
def _update_effect(self, effect):
    if effect in (None, "None", ""):
        self._effect = None
        return

    if effect not in self._effect_list:
        _LOGGER.error(
            "Received invalid effect: %s for entity %s. Expected one of: %s",
            effect,
            self.entity_id,
            self._effect_list,
        )
        self._effect = None
        return

    self._effect = effect
61
light.py
Python
homeassistant/components/template/light.py
75debb7dece50744713a2822fe8f508ce633c818
core
3
142,915
19
7
2
25
3
0
21
49
set_location
[tune/structure] Introduce experiment package (#26033) Experiment, Trial, and config parsing moves into an `experiment` package. Notably, the new public facing APIs will be ``` from ray.tune.experiment import Experiment from ray.tune.experiment import Trial ```
https://github.com/ray-project/ray.git
def set_location(self, location):
    self.location = location
    # No need to invalidate state cache: location is not stored in json
    # self.invalidate_json_state()
13
trial.py
Python
python/ray/tune/experiment/trial.py
8a2f6bda62378c07a66169ee49504cc3703f7d35
ray
1
297,520
108
17
41
674
42
0
190
519
test_upload_image
Rename image integration to image_upload (#84063) * Rename image integration to image_upload * fix test
https://github.com/home-assistant/core.git
async def test_upload_image(hass, hass_client, hass_ws_client): now = util_dt.utcnow() with tempfile.TemporaryDirectory() as tempdir, patch.object( hass.config, "path", return_value=tempdir ), patch("homeassistant.util.dt.utcnow", return_value=now): assert await async_setup_component(hass, "image_upload", {}) ws_client: ClientWebSocketResponse = await hass_ws_client() client: ClientSession = await hass_client() with TEST_IMAGE.open("rb") as fp: res = await client.post("/api/image/upload", data={"file": fp}) assert res.status == 200 item = await res.json() assert item["content_type"] == "image/png" assert item["filesize"] == 38847 assert item["name"] == "logo.png" assert item["uploaded_at"] == now.isoformat() tempdir = pathlib.Path(tempdir) item_folder: pathlib.Path = tempdir / item["id"] assert (item_folder / "original").read_bytes() == TEST_IMAGE.read_bytes() # fetch non-existing image res = await client.get("/api/image/serve/non-existing/256x256") assert res.status == 404 # fetch invalid sizes for inv_size in ("256", "256x25A", "100x100", "25Ax256"): res = await client.get(f"/api/image/serve/{item['id']}/{inv_size}") assert res.status == 400 # fetch resized version res = await client.get(f"/api/image/serve/{item['id']}/256x256") assert res.status == 200 assert (item_folder / "256x256").is_file() # List item await ws_client.send_json({"id": 6, "type": "image/list"}) msg = await ws_client.receive_json() assert msg["id"] == 6 assert msg["type"] == ws_const.TYPE_RESULT assert msg["success"] assert msg["result"] == [item] # Delete item await ws_client.send_json( {"id": 7, "type": "image/delete", "image_id": item["id"]} ) msg = await ws_client.receive_json() assert msg["id"] == 7 assert msg["type"] == ws_const.TYPE_RESULT assert msg["success"] # Ensure removed from disk assert not item_folder.is_dir()
367
test_init.py
Python
tests/components/image_upload/test_init.py
80b357262795a57dc267a313064266fd2682ca74
core
2
301,380
97
16
44
560
43
0
166
667
async_turn_on
Fix Hue SONOFF S31 Lite zb plug (#69589) * Update light.py Same issue as https://github.com/home-assistant/core/issues/46619 with SONOFF S13 Lite Zigbee plug. * Update light.py
https://github.com/home-assistant/core.git
async def async_turn_on(self, **kwargs): command = {"on": True} if ATTR_TRANSITION in kwargs: command["transitiontime"] = int(kwargs[ATTR_TRANSITION] * 10) if ATTR_HS_COLOR in kwargs: if self.is_osram: command["hue"] = int(kwargs[ATTR_HS_COLOR][0] / 360 * 65535) command["sat"] = int(kwargs[ATTR_HS_COLOR][1] / 100 * 255) else: # Philips hue bulb models respond differently to hue/sat # requests, so we convert to XY first to ensure a consistent # color. xy_color = color.color_hs_to_xy(*kwargs[ATTR_HS_COLOR], self.gamut) command["xy"] = xy_color elif ATTR_COLOR_TEMP in kwargs: temp = kwargs[ATTR_COLOR_TEMP] command["ct"] = max(self.min_mireds, min(temp, self.max_mireds)) if ATTR_BRIGHTNESS in kwargs: command["bri"] = hass_to_hue_brightness(kwargs[ATTR_BRIGHTNESS]) flash = kwargs.get(ATTR_FLASH) if flash == FLASH_LONG: command["alert"] = "lselect" del command["on"] elif flash == FLASH_SHORT: command["alert"] = "select" del command["on"] elif ( not self.is_innr and not self.is_ewelink and not self.is_livarno and not self.is_s31litezb ): command["alert"] = "none" if ATTR_EFFECT in kwargs: effect = kwargs[ATTR_EFFECT] if effect == EFFECT_COLORLOOP: command["effect"] = "colorloop" elif effect == EFFECT_RANDOM: command["hue"] = random.randrange(0, 65535) command["sat"] = random.randrange(150, 254) else: command["effect"] = "none" if self.is_group: await self.bridge.async_request_call(self.light.set_action, **command) else: await self.bridge.async_request_call(self.light.set_state, **command) await self.coordinator.async_request_refresh()
332
light.py
Python
homeassistant/components/hue/v1/light.py
72cb320ed769275424b739bf00890d4d33451f5c
core
16
153,630
12
7
3
32
4
0
12
34
loc
DOCS-#3099: Fix `BasePandasDataSet` docstrings warnings (#4333) Co-authored-by: Yaroslav Igoshev <Poolliver868@mail.ru> Signed-off-by: Alexander Myskov <alexander.myskov@intel.com>
https://github.com/modin-project/modin.git
def loc(self):  # noqa: RT01, D200
    from .indexing import _LocIndexer

    return _LocIndexer(self)
16
base.py
Python
modin/pandas/base.py
605efa618e7994681f57b11d04d417f353ef8d50
modin
1
216,179
18
11
7
90
10
0
23
52
hash_file
fixes saltstack/salt#61562 cp functions derive saltenv from config
https://github.com/saltstack/salt.git
def hash_file(path, saltenv=None):
    if not saltenv:
        saltenv = __opts__["saltenv"] or "base"

    path, senv = salt.utils.url.split_env(path)
    if senv:
        saltenv = senv

    return _client().hash_file(path, saltenv)
53
cp.py
Python
salt/modules/cp.py
2bd6323ef5f87d871891a59917ee96f44ef55e75
salt
4
268,960
45
14
17
208
20
0
58
102
map_structure_with_atomic
Improve the error message. Because `repr(np.int64(3)) == repr(3)`, the error message is frustratingly confusing when numpy ints are passed as dimensions. PiperOrigin-RevId: 428366474
https://github.com/keras-team/keras.git
def map_structure_with_atomic(is_atomic_fn, map_fn, nested):
    if is_atomic_fn(nested):
        return map_fn(nested)

    # Recursively convert.
    if not tf.nest.is_nested(nested):
        raise ValueError(
            f'Received non-atomic and non-sequence element: {nested} '
            f'of type {type(nested)}')
    if tf.__internal__.nest.is_mapping(nested):
        values = [nested[k] for k in sorted(nested.keys())]
    elif tf.__internal__.nest.is_attrs(nested):
        values = _astuple(nested)
    else:
        values = nested
    mapped_values = [
        map_structure_with_atomic(is_atomic_fn, map_fn, ele) for ele in values
    ]
    return tf.__internal__.nest.sequence_like(nested, mapped_values)
123
tf_utils.py
Python
keras/utils/tf_utils.py
cff8cc93305d1c4a54385fb623fe895dafa0845c
keras
7
338,247
21
10
3
53
9
0
21
31
parse_flag_from_env
Make rich toggleable and seperate out a new environment utility file (#779) * Toggleable rich * Refactor into environment utils
https://github.com/huggingface/accelerate.git
def parse_flag_from_env(key, default=False):
    value = os.environ.get(key, str(default))
    return strtobool(value) == 1  # As its name indicates `strtobool` actually returns an int...
32
environment.py
Python
src/accelerate/utils/environment.py
6f7fa4f48e05da0f1fc745f39e840b6304844202
accelerate
1
52,902
18
13
12
90
9
0
18
79
__str__
Update `State.__str__` to only include type if meaningful and drop "message="
https://github.com/PrefectHQ/prefect.git
def __str__(self) -> str:
    display_type = (
        f", type={self.type}"
        if self.type.value.lower() != self.name.lower()
        else ""
    )
    return f"{self.name}({self.message!r}{display_type})"
37
states.py
Python
src/prefect/orion/schemas/states.py
db6b16ef0f7d0adcd47d2ad96e8453ebd7590a22
prefect
2
244,309
12
10
6
98
13
0
13
55
_log_data_table
[Feature] Dedicated WandbLogger for MMDetection (#7459) * [Fix] Adjust the order of get_classes and FileClient. (#7276) * delete -sv (#7277) Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com> * wandb-integration * docstring * lint * log config+handle segmentation results * remove val logging * segmentation logging * metadata logging improved + remove best metadata * remove extra arg * train.py conflict * shuffle eval data+remove config * wandb config file * iter based logging + cleanup * minor update * minor update * minor fix * epoch and iter eval table * save and log config * Delete mask_rcnn_r50_fpn_1x_wandb_iter_coco.py Co-authored-by: Wencheng Wu <41542251+274869388@users.noreply.github.com> Co-authored-by: Yue Zhou <592267829@qq.com> Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
https://github.com/open-mmlab/mmdetection.git
def _log_data_table(self):
    data_artifact = self.wandb.Artifact('val', type='dataset')
    data_artifact.add(self.data_table, 'val_data')

    self.wandb.run.use_artifact(data_artifact)
    data_artifact.wait()

    self.data_table_ref = data_artifact.get('val_data')
55
wandblogger_hook.py
Python
mmdet/core/hook/wandblogger_hook.py
b637c99d4955e69ca2ff5d94a2c9bd9d962096ab
mmdetection
1
266,668
63
12
16
245
30
0
84
154
execute_command
ansible-test - Fix consistency of managed venvs. (#77028)
https://github.com/ansible/ansible.git
def execute_command(cmd, cwd=None, capture=False, env=None):  # type: (t.List[str], t.Optional[str], bool, t.Optional[t.Dict[str, str]]) -> None
    log('Execute command: %s' % ' '.join(cmd_quote(c) for c in cmd), verbosity=1)

    cmd_bytes = [to_bytes(c) for c in cmd]

    if capture:
        stdout = subprocess.PIPE
        stderr = subprocess.PIPE
    else:
        stdout = None
        stderr = None

    cwd_bytes = to_optional_bytes(cwd)
    process = subprocess.Popen(cmd_bytes, cwd=cwd_bytes, stdin=devnull(), stdout=stdout, stderr=stderr, env=env)  # pylint: disable=consider-using-with
    stdout_bytes, stderr_bytes = process.communicate()
    stdout_text = to_optional_text(stdout_bytes) or u''
    stderr_text = to_optional_text(stderr_bytes) or u''

    if process.returncode != 0:
        raise SubprocessError(cmd, process.returncode, stdout_text, stderr_text)
156
requirements.py
Python
test/lib/ansible_test/_util/target/setup/requirements.py
68fb3bf90efa3a722ba5ab7d66b1b22adc73198c
ansible
7
299,381
50
15
20
226
21
0
64
260
save_language_translations
Skip translations when integration no longer exists (#71004) * Skip translations when integration no longer exists * Update script/translations/download.py Co-authored-by: Martin Hjelmare <marhje52@gmail.com> Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
https://github.com/home-assistant/core.git
def save_language_translations(lang, translations):
    components = translations.get("component", {})
    for component, component_translations in components.items():
        base_translations = get_component_translations(component_translations)
        if base_translations:
            if (path := get_component_path(lang, component)) is None:
                print(
                    f"Skipping {lang} for {component}, as the integration doesn't seem to exist."
                )
                continue
            os.makedirs(os.path.dirname(path), exist_ok=True)
            save_json(path, base_translations)

        if "platform" not in component_translations:
            continue

        for platform, platform_translations in component_translations[
            "platform"
        ].items():
            path = get_platform_path(lang, component, platform)
            os.makedirs(os.path.dirname(path), exist_ok=True)
            save_json(path, platform_translations)
136
download.py
Python
script/translations/download.py
7fbc3f63643c314d8c8097e6fd4bc9a1b2314dc2
core
6
38,770
33
13
4
83
13
0
36
71
create_position_ids_from_input_ids
Add LayoutLMv3 (#17060) * Make forward pass work * More improvements * Remove unused imports * Remove timm dependency * Improve loss calculation of token classifier * Fix most tests * Add docs * Add model integration test * Make all tests pass * Add LayoutLMv3FeatureExtractor * Improve integration test + make fixup * Add example script * Fix style * Add LayoutLMv3Processor * Fix style * Add option to add visual labels * Make more tokenizer tests pass * Fix more tests * Make more tests pass * Fix bug and improve docs * Fix import of processors * Improve docstrings * Fix toctree and improve docs * Fix auto tokenizer * Move tests to model folder * Move tests to model folder * change default behavior add_prefix_space * add prefix space for fast * add_prefix_spcae set to True for Fast * no space before `unique_no_split` token * add test to hightligh special treatment of added tokens * fix `test_batch_encode_dynamic_overflowing` by building a long enough example * fix `test_full_tokenizer` with add_prefix_token * Fix tokenizer integration test * Make the code more readable * Add tests for LayoutLMv3Processor * Fix style * Add model to README and update init * Apply suggestions from code review * Replace asserts by value errors * Add suggestion by @ducviet00 * Add model to doc tests * Simplify script * Improve README * a step ahead to fix * Update pair_input_test * Make all tokenizer tests pass - phew * Make style * Add LayoutLMv3 to CI job * Fix auto mapping * Fix CI job name * Make all processor tests pass * Make tests of LayoutLMv2 and LayoutXLM consistent * Add copied from statements to fast tokenizer * Add copied from statements to slow tokenizer * Remove add_visual_labels attribute * Fix tests * Add link to notebooks * Improve docs of LayoutLMv3Processor * Fix reference to section Co-authored-by: SaulLu <lucilesaul.com@gmail.com> Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
https://github.com/huggingface/transformers.git
def create_position_ids_from_input_ids(self, input_ids, padding_idx):
    # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask)) * mask
    return incremental_indices.long() + padding_idx
51
modeling_layoutlmv3.py
Python
src/transformers/models/layoutlmv3/modeling_layoutlmv3.py
31ee80d55673f32c0f5d50936f371e661b74b21a
transformers
1
319,804
14
10
11
50
6
0
16
35
supported_file_type
Moves the barcode related functionality out of tasks and into its own location. Splits up the testing based on that
https://github.com/paperless-ngx/paperless-ngx.git
def supported_file_type(mime_type) -> bool:
    supported_mime = ["application/pdf"]

    if settings.CONSUMER_BARCODE_TIFF_SUPPORT:
        supported_mime += ["image/tiff"]

    return mime_type in supported_mime
27
barcodes.py
Python
src/documents/barcodes.py
ec045e81f217e8b667614c32879f873c220ae035
paperless-ngx
2
195,808
81
12
9
119
17
0
122
234
root_equality_test
Introduce `Poly.root_equality_test()` This replaces the `Poly.root_comparison_tools()` method.
https://github.com/sympy/sympy.git
def root_equality_test(f):
    delta_sq = f.root_separation_lower_bound_squared()
    # We have delta_sq = delta**2, where delta is a lower bound on the
    # minimum separation between any two roots of this polynomial.
    # Let eps = delta/3, and define eps_sq = eps**2 = delta**2/9.
    eps_sq = delta_sq / 9
    r, _, _, _ = evalf(1/eps_sq, 1, {})
    n = fastlog(r)
    # Then 2^n > 1/eps**2.
    m = math.ceil(n/2)
    # Then 2^(-m) < eps.
    ev = lambda x: quad_to_mpmath(evalf_with_bounded_error(x, m=m))
    # Then for any complex numbers a, b we will have
    # |a - ev(a)| < eps and |b - ev(b)| < eps.
    # So if |ev(a) - ev(b)|**2 < eps**2, then
    # |ev(a) - ev(b)| < eps, hence |a - b| < 3*eps = delta.
73
polytools.py
Python
sympy/polys/polytools.py
a4072458438847d3da259ad91828566dbf1e214b
sympy
1
247,622
219
22
190
1,519
42
0
539
3,789
test_upload_signatures
Add type hints to some tests/handlers files. (#12224)
https://github.com/matrix-org/synapse.git
def test_upload_signatures(self) -> None: # set up a user with cross-signing keys and a device. This user will # try uploading signatures local_user = "@boris:" + self.hs.hostname device_id = "xyz" # private key: OMkooTr76ega06xNvXIGPbgvvxAOzmQncN8VObS7aBA device_pubkey = "NnHhnqiMFQkq969szYkooLaBAXW244ZOxgukCvm2ZeY" device_key = { "user_id": local_user, "device_id": device_id, "algorithms": [ "m.olm.curve25519-aes-sha2", RoomEncryptionAlgorithms.MEGOLM_V1_AES_SHA2, ], "keys": {"curve25519:xyz": "curve25519+key", "ed25519:xyz": device_pubkey}, "signatures": {local_user: {"ed25519:xyz": "something"}}, } device_signing_key = key.decode_signing_key_base64( "ed25519", "xyz", "OMkooTr76ega06xNvXIGPbgvvxAOzmQncN8VObS7aBA" ) self.get_success( self.handler.upload_keys_for_user( local_user, device_id, {"device_keys": device_key} ) ) # private key: 2lonYOM6xYKdEsO+6KrC766xBcHnYnim1x/4LFGF8B0 master_pubkey = "nqOvzeuGWT/sRx3h7+MHoInYj3Uk2LD/unI9kDYcHwk" master_key = { "user_id": local_user, "usage": ["master"], "keys": {"ed25519:" + master_pubkey: master_pubkey}, } master_signing_key = key.decode_signing_key_base64( "ed25519", master_pubkey, "2lonYOM6xYKdEsO+6KrC766xBcHnYnim1x/4LFGF8B0" ) usersigning_pubkey = "Hq6gL+utB4ET+UvD5ci0kgAwsX6qP/zvf8v6OInU5iw" usersigning_key = { # private key: 4TL4AjRYwDVwD3pqQzcor+ez/euOB1/q78aTJ+czDNs "user_id": local_user, "usage": ["user_signing"], "keys": {"ed25519:" + usersigning_pubkey: usersigning_pubkey}, } usersigning_signing_key = key.decode_signing_key_base64( "ed25519", usersigning_pubkey, "4TL4AjRYwDVwD3pqQzcor+ez/euOB1/q78aTJ+czDNs" ) sign.sign_json(usersigning_key, local_user, master_signing_key) # private key: HvQBbU+hc2Zr+JP1sE0XwBe1pfZZEYtJNPJLZJtS+F8 selfsigning_pubkey = "EmkqvokUn8p+vQAGZitOk4PWjp7Ukp3txV2TbMPEiBQ" selfsigning_key = { "user_id": local_user, "usage": ["self_signing"], "keys": {"ed25519:" + selfsigning_pubkey: selfsigning_pubkey}, } selfsigning_signing_key = key.decode_signing_key_base64( "ed25519", selfsigning_pubkey, "HvQBbU+hc2Zr+JP1sE0XwBe1pfZZEYtJNPJLZJtS+F8" ) sign.sign_json(selfsigning_key, local_user, master_signing_key) cross_signing_keys = { "master_key": master_key, "user_signing_key": usersigning_key, "self_signing_key": selfsigning_key, } self.get_success( self.handler.upload_signing_keys_for_user(local_user, cross_signing_keys) ) # set up another user with a master key. 
This user will be signed by # the first user other_user = "@otherboris:" + self.hs.hostname other_master_pubkey = "fHZ3NPiKxoLQm5OoZbKa99SYxprOjNs4TwJUKP+twCM" other_master_key = { # private key: oyw2ZUx0O4GifbfFYM0nQvj9CL0b8B7cyN4FprtK8OI "user_id": other_user, "usage": ["master"], "keys": {"ed25519:" + other_master_pubkey: other_master_pubkey}, } self.get_success( self.handler.upload_signing_keys_for_user( other_user, {"master_key": other_master_key} ) ) # test various signature failures (see below) ret = self.get_success( self.handler.upload_signatures_for_device_keys( local_user, { local_user: { # fails because the signature is invalid # should fail with INVALID_SIGNATURE device_id: { "user_id": local_user, "device_id": device_id, "algorithms": [ "m.olm.curve25519-aes-sha2", RoomEncryptionAlgorithms.MEGOLM_V1_AES_SHA2, ], "keys": { "curve25519:xyz": "curve25519+key", # private key: OMkooTr76ega06xNvXIGPbgvvxAOzmQncN8VObS7aBA "ed25519:xyz": device_pubkey, }, "signatures": { local_user: { "ed25519:" + selfsigning_pubkey: "something" } }, }, # fails because device is unknown # should fail with NOT_FOUND "unknown": { "user_id": local_user, "device_id": "unknown", "signatures": { local_user: { "ed25519:" + selfsigning_pubkey: "something" } }, }, # fails because the signature is invalid # should fail with INVALID_SIGNATURE master_pubkey: { "user_id": local_user, "usage": ["master"], "keys": {"ed25519:" + master_pubkey: master_pubkey}, "signatures": { local_user: {"ed25519:" + device_pubkey: "something"} }, }, }, other_user: { # fails because the device is not the user's master-signing key # should fail with NOT_FOUND "unknown": { "user_id": other_user, "device_id": "unknown", "signatures": { local_user: { "ed25519:" + usersigning_pubkey: "something" } }, }, other_master_pubkey: { # fails because the key doesn't match what the server has # should fail with UNKNOWN "user_id": other_user, "usage": ["master"], "keys": { "ed25519:" + other_master_pubkey: other_master_pubkey }, "something": "random", "signatures": { local_user: { "ed25519:" + usersigning_pubkey: "something" } }, }, }, }, ) ) user_failures = ret["failures"][local_user] self.assertEqual(user_failures[device_id]["errcode"], Codes.INVALID_SIGNATURE) self.assertEqual( user_failures[master_pubkey]["errcode"], Codes.INVALID_SIGNATURE ) self.assertEqual(user_failures["unknown"]["errcode"], Codes.NOT_FOUND) other_user_failures = ret["failures"][other_user] self.assertEqual(other_user_failures["unknown"]["errcode"], Codes.NOT_FOUND) self.assertEqual( other_user_failures[other_master_pubkey]["errcode"], Codes.UNKNOWN ) # test successful signatures del device_key["signatures"] sign.sign_json(device_key, local_user, selfsigning_signing_key) sign.sign_json(master_key, local_user, device_signing_key) sign.sign_json(other_master_key, local_user, usersigning_signing_key) ret = self.get_success( self.handler.upload_signatures_for_device_keys( local_user, { local_user: {device_id: device_key, master_pubkey: master_key}, other_user: {other_master_pubkey: other_master_key}, }, ) ) self.assertEqual(ret["failures"], {}) # fetch the signed keys/devices and make sure that the signatures are there ret = self.get_success( self.handler.query_devices( {"device_keys": {local_user: [], other_user: []}}, 0, local_user, "device123", ) ) self.assertEqual( ret["device_keys"][local_user]["xyz"]["signatures"][local_user][ "ed25519:" + selfsigning_pubkey ], device_key["signatures"][local_user]["ed25519:" + selfsigning_pubkey], ) self.assertEqual( 
ret["master_keys"][local_user]["signatures"][local_user][ "ed25519:" + device_id ], master_key["signatures"][local_user]["ed25519:" + device_id], ) self.assertEqual( ret["master_keys"][other_user]["signatures"][local_user][ "ed25519:" + usersigning_pubkey ], other_master_key["signatures"][local_user]["ed25519:" + usersigning_pubkey], )
876
test_e2e_keys.py
Python
tests/handlers/test_e2e_keys.py
5dd949bee6158a8b651db9f2ae417a62c8184bfd
synapse
1
19,685
33
13
8
101
13
0
36
107
pip_version
Issue 4993 Add standard pre commit hooks and apply linting. (#4994) * Add .pre-commit-config.yaml to the project and exclude tests (for now). This does not include the MyPy linting that pip does but does include everything else.
https://github.com/pypa/pipenv.git
def pip_version(self):
    # type: () -> Version
    from .vendor.packaging.version import parse as parse_version

    pip = next(
        iter(pkg for pkg in self.get_installed_packages() if pkg.key == "pip"), None
    )
    if pip is not None:
        return parse_version(pip.version)
    return parse_version("20.2")
60
environment.py
Python
pipenv/environment.py
9a3b3ce70621af6f9adaa9eeac9cf83fa149319c
pipenv
4
119,833
64
15
17
286
31
1
85
111
poly
lax_numpy: move poly functions into numpy.polynomial
https://github.com/google/jax.git
def poly(seq_of_zeros):
    _check_arraylike('poly', seq_of_zeros)
    seq_of_zeros, = _promote_dtypes_inexact(seq_of_zeros)
    seq_of_zeros = atleast_1d(seq_of_zeros)

    sh = seq_of_zeros.shape
    if len(sh) == 2 and sh[0] == sh[1] and sh[0] != 0:
        # import at runtime to avoid circular import
        from jax._src.numpy import linalg
        seq_of_zeros = linalg.eigvals(seq_of_zeros)

    if seq_of_zeros.ndim != 1:
        raise ValueError("input must be 1d or non-empty square 2d array.")

    dt = seq_of_zeros.dtype
    if len(seq_of_zeros) == 0:
        return ones((), dtype=dt)

    a = ones((1,), dtype=dt)
    for k in range(len(seq_of_zeros)):
        a = convolve(a, array([1, -seq_of_zeros[k]], dtype=dt), mode='full')

    return a

@_wraps(np.polyval, lax_description=)
@partial(jit, static_argnames=['unroll'])
@_wraps(np.polyval, lax_description="""\ The ``unroll`` parameter is JAX specific. It does not effect correctness but can have a major impact on performance for evaluating high-order polynomials. The parameter controls the number of unrolled steps with ``lax.scan`` inside the ``polyval`` implementation. Consider setting ``unroll=128`` (or even higher) to improve runtime performance on accelerators, at the cost of increased compilation time. """) @partial(jit, static_argnames=['unroll'])
158
polynomial.py
Python
jax/_src/numpy/polynomial.py
603bb3c5ca288674579211e64fa47c6b2b0fb7a6
jax
7
186,168
5
6
13
16
1
0
5
8
test_overlapping_priority_bindings
Add tests for competing bindings an priority permutations This is set to xfail at the moment because the tested result is what I think should be the result, but what happens now isn't that. Need to check with Will to see what he thinks the correct resolution is here.
https://github.com/Textualize/textual.git
async def test_overlapping_priority_bindings() -> None:
57
test_binding_inheritance.py
Python
tests/test_binding_inheritance.py
618db503b9179481a7374e61a1cfce7007934970
textual
1
299,284
32
14
14
115
12
0
42
200
_media_status
Add state buffering to media_player and use it in cast (#70802)
https://github.com/home-assistant/core.git
def _media_status(self):
    media_status = self.media_status
    media_status_received = self.media_status_received

    if (
        media_status is None
        or media_status.player_state == MEDIA_PLAYER_STATE_UNKNOWN
    ):
        groups = self.mz_media_status
        for k, val in groups.items():
            if val and val.player_state != MEDIA_PLAYER_STATE_UNKNOWN:
                media_status = val
                media_status_received = self.mz_media_status_received[k]
                break

    return (media_status, media_status_received)
72
media_player.py
Python
homeassistant/components/cast/media_player.py
66551e6fcbd063e53c13adc8a6462b8e00ce1450
core
6
215,968
60
20
93
538
25
0
153
406
list_repo_pkgs
Update to latest ``pyupgrade`` hook. Stop skipping it on CI. Signed-off-by: Pedro Algarvio <palgarvio@vmware.com>
https://github.com/saltstack/salt.git
def list_repo_pkgs(*args, **kwargs): byrepo = kwargs.pop("byrepo", False) cacheonly = kwargs.pop("cacheonly", False) fromrepo = kwargs.pop("fromrepo", "") or "" disablerepo = kwargs.pop("disablerepo", "") or "" enablerepo = kwargs.pop("enablerepo", "") or "" repo_arg = _get_options(fromrepo=fromrepo, **kwargs) if fromrepo and not isinstance(fromrepo, list): try: fromrepo = [x.strip() for x in fromrepo.split(",")] except AttributeError: fromrepo = [x.strip() for x in str(fromrepo).split(",")] if disablerepo and not isinstance(disablerepo, list): try: disablerepo = [x.strip() for x in disablerepo.split(",") if x != "*"] except AttributeError: disablerepo = [x.strip() for x in str(disablerepo).split(",") if x != "*"] if enablerepo and not isinstance(enablerepo, list): try: enablerepo = [x.strip() for x in enablerepo.split(",") if x != "*"] except AttributeError: enablerepo = [x.strip() for x in str(enablerepo).split(",") if x != "*"] if fromrepo: repos = fromrepo else: repos = [ repo_name for repo_name, repo_info in list_repos().items() if repo_name in enablerepo or ( repo_name not in disablerepo and str(repo_info.get("enabled", "1")) == "1" ) ] ret = {}
713
yumpkg.py
Python
salt/modules/yumpkg.py
f2a783643de61cac1ff3288b40241e5ce6e1ddc8
salt
53
19,706
9
8
2
43
5
0
9
23
matches_minor
Issue 4993 Add standard pre commit hooks and apply linting. (#4994) * Add .pre-commit-config.yaml to the project and exclude tests (for now). This does not include the MyPy linting that pip does but does include everything else.
https://github.com/pypa/pipenv.git
def matches_minor(self, other):
    return (self.major, self.minor) == (other.major, other.minor)
28
installers.py
Python
pipenv/installers.py
9a3b3ce70621af6f9adaa9eeac9cf83fa149319c
pipenv
1
312,987
6
9
3
38
6
0
6
20
name
Add Moehlenhoff Alpha2 underfloor heating system integration (#42771) * Add Moehlenhoff Alpha2 underfloor heating system integration * isort changes * flake8 changes * Do not exclude config_flow.py * pylint changes * Add config_flow test * correct requirements_test_all.txt * more tests * Update test description * Test connection and catch TimeoutError in async_setup_entry * Add version to manifest file * Remove version from manifest file * Replace tests.async_mock.patch by unittest.mock.patch * Update moehlenhoff-alpha2 to version 1.0.1 * Update requirements for moehlenhoff-alpha2 1.0.1 * Update moehlenhoff-alpha2 to 1.0.2 * Use async_setup_platforms * Use async_unload_platforms * Separate connection and devices for each entry_id * Use async_track_time_interval to schedule updates * Check if input is valid before checking uniqueness * Move Exception handling to validate_input * Catch aiohttp.client_exceptions.ClientConnectorError * Remove translation files * Mock TimeoutError * Fix data update * Replace current callback implementation with ha dispatcher * Return False in should_poll * Remove unused argument * Remove CONNECTION_CLASS * Use _async_current_entries * Call async_schedule_update_ha_state after data update * Remove unneeded async_setup Co-authored-by: Milan Meulemans <milan.meulemans@live.be> * Remove unneeded async_setup_platform Co-authored-by: Milan Meulemans <milan.meulemans@live.be> * Set Schema attribute host required Co-authored-by: Milan Meulemans <milan.meulemans@live.be> * Remove unused Exception class Co-authored-by: Milan Meulemans <milan.meulemans@live.be> * Update manifest.json Co-authored-by: Milan Meulemans <milan.meulemans@live.be> * pylint constructor return type None * Replace properties by class variables * use pass instead of return * Remove unused sync update method * remove property hvac_action * remove pass * rework exception handling * Update homeassistant/components/moehlenhoff_alpha2/config_flow.py Co-authored-by: Milan Meulemans <milan.meulemans@live.be> * Correct indentation * catch Exception in validate_input * Replace HomeAssistantType with HomeAssistant * Update to moehlenhoff-alpha2 1.0.3 * Allow to switch between heating and cooling mode * Update moehlenhoff-alpha2 to version 1.0.4 * Update heatarea data after setting target temperature * Support hvac_action * Fix heatarea update with multiple bases * Update data after setting preset mode * Use custom preset modes like defined by device * Fix config flow test * Fix test_duplicate_error * Rename property to extra_state_attributes Rename property device_state_attributes to extra_state_attributes and return lowercase keys in dict. 
* Refactor using DataUpdateCoordinator * Remove _attr_should_poll * Raise HomeAssistantError on communication error Catch HTTPError instead of broad except and reraise as HomeAssistantError * Change DataUpdateCoordinator name to alpha2_base * Refresh coordinator before setting data * Raise ValueError on invalid heat area mode * Rename heatarea to heat_area * Set type annotation in class attribute * Move coordinator to top * Move exception handling to the coordinator * Use heat_area_id directly * Sore get_cooling() result into local var * Add explanation of status attributes and remove BLOCK_HC * Fix pylint warnings * from __future__ import annotations * Use Platform Enum * Move data handling to coordinator * Remove property extra_state_attributes * Add missing annotations * Update moehlenhoff-alpha2 to version 1.1.2 * Rework tests based on the scaffold template * Set also heat/cool/day/night temp with target temp * Remove unneeded code from tests Co-authored-by: Milan Meulemans <milan.meulemans@live.be>
https://github.com/home-assistant/core.git
def name(self) -> str:
    return self.coordinator.data[self.heat_area_id]["HEATAREA_NAME"]
22
climate.py
Python
homeassistant/components/moehlenhoff_alpha2/climate.py
243d003acc11d638feb3867410c3cbb1987520bc
core
1
14,099
7
4
11
35
7
1
8
28
test_class_var_forward_ref
guard against ClassVar in fields (#4064) * guard against ClassVar in fields, fix #3679 * fix linting * skipif for test_class_var_forward_ref
https://github.com/pydantic/pydantic.git
def test_class_var_forward_ref(create_module):
    # see #3679
    create_module(
        # language=Python
create_module( # language=Python """
9
test_forward_ref.py
Python
tests/test_forward_ref.py
a45276d6b1c0dd7c10d46767bb19954401b3b04a
pydantic
1
281,687
47
12
17
221
8
0
69
108
excel_columns
DCF Refactoring and Improvements (#1198) * Refactored excel letter generation * Updated error handling * Refactored df creation * Refactored sisters * Refactored ratios * Refactored worksheets * Finished refactoring attributes * Massively refactored add _ratios * Slight changes * refactored add_ratios * Made DCF much more customizable * Added market cap filtering * Removed print * Updated documentation * Updated pylint issues * Added pylint fixes * Small bug fixes * Test fixes * Fixed tests * Updated audit warning
https://github.com/OpenBB-finance/OpenBBTerminal.git
def excel_columns() -> List[str]:
    letters = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M"]
    letters += ["N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
    opts = (
        [f"{x}" for x in letters]
        + [f"{x}{y}" for x in letters for y in letters]
        + [f"{x}{y}{z}" for x in letters for y in letters for z in letters]
    )
    return opts
112
helper_funcs.py
Python
gamestonk_terminal/helper_funcs.py
a3d551105fd038ed00da277717b15df7c465b0a9
OpenBBTerminal
7
35,793
34
10
5
111
11
0
35
74
remove_low_and_no_objects
Maskformer (#15682) * maskformer * conflicts * conflicts * minor fixes * feature extractor test fix refactor MaskFormerLoss following conversation MaskFormer related types should not trigger a module time import error missed one removed all the types that are not used update config mapping minor updates in the doc resolved conversation that doesn't need a discussion minor changes resolved conversations fixed DetrDecoder * minor changes minor changes fixed mdx file test feature_extractor return types functional losses -> classes removed the return type test for the feature extractor minor changes + style + quality * conflicts? * rebase master * readme * added missing files * deleded poolformers test that where in the wrong palce * CI * minor changes * Apply suggestions from code review Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> * resolved conversations * minor changes * conversations [Unispeech] Fix slow tests (#15818) * remove soundfile old way of loading audio * Adapt slow test [Barthez Tokenizer] Fix saving (#15815) [TFXLNet] Correct tf xlnet generate (#15822) * [TFXLNet] Correct tf xlnet * adapt test comment Fix the push run (#15807) Fix semantic segmentation pipeline test (#15826) Fix dummy_inputs() to dummy_inputs in symbolic_trace doc (#15776) Add model specific output classes to PoolFormer model docs (#15746) * Added model specific output classes to poolformer docs * Fixed Segformer typo in Poolformer docs Adding the option to return_timestamps on pure CTC ASR models. (#15792) * Adding the option to return_timestamps on pure CTC ASR models. * Remove `math.prod` which was introduced in Python 3.8 * int are not floats. * Reworking the PR to support "char" vs "word" output. * Fixup! * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/pipelines/automatic_speech_recognition.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Quality. 
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> HFTracer.trace should use/return self.graph to be compatible with torch.fx.Tracer (#15824) Fix tf.concatenate + test past_key_values for TF models (#15774) * fix wrong method name tf.concatenate * add tests related to causal LM / decoder * make style and quality * clean-up * Fix TFBertModel's extended_attention_mask when past_key_values is provided * Fix tests * fix copies * More tf.int8 -> tf.int32 in TF test template * clean-up * Update TF test template * revert the previous commit + update the TF test template * Fix TF template extended_attention_mask when past_key_values is provided * Fix some styles manually * clean-up * Fix ValueError: too many values to unpack in the test * Fix more: too many values to unpack in the test * Add a comment for extended_attention_mask when there is past_key_values * Fix TFElectra extended_attention_mask when past_key_values is provided * Add tests to other TF models * Fix for TF Electra test: add prepare_config_and_inputs_for_decoder * Fix not passing training arg to lm_head in TFRobertaForCausalLM * Fix tests (with past) for TF Roberta * add testing for pask_key_values for TFElectra model Co-authored-by: ydshieh <ydshieh@users.noreply.github.com> [examples/summarization and translation] fix readme (#15833) Add ONNX Runtime quantization for text classification notebook (#15817) Re-enable doctests for the quicktour (#15828) * Re-enable doctests for the quicktour * Re-enable doctests for task_summary (#15830) * Remove & Framework split model report (#15825) Add TFConvNextModel (#15750) * feat: initial implementation of convnext in tensorflow. * fix: sample code for the classification model. * chore: added checked for from the classification model. * chore: set bias initializer in the classification head. * chore: updated license terms. * chore: removed ununsed imports * feat: enabled argument during using drop_path. * chore: replaced tf.identity with layers.Activation(linear). * chore: edited default checkpoint. * fix: minor bugs in the initializations. * partial-fix: tf model errors for loading pretrained pt weights. * partial-fix: call method updated * partial-fix: cross loading of weights (4x3 variables to be matched) * chore: removed unneeded comment. * removed playground.py * rebasing * rebasing and removing playground.py. * fix: renaming TFConvNextStage conv and layer norm layers * chore: added initializers and other minor additions. * chore: added initializers and other minor additions. * add: tests for convnext. * fix: integration tester class. * fix: issues mentioned in pr feedback (round 1). * fix: how output_hidden_states arg is propoagated inside the network. * feat: handling of arg for pure cnn models. * chore: added a note on equal contribution in model docs. * rebasing * rebasing and removing playground.py. * feat: encapsulation for the convnext trunk. * Fix variable naming; Test-related corrections; Run make fixup * chore: added Joao as a contributor to convnext. * rebasing * rebasing and removing playground.py. * rebasing * rebasing and removing playground.py. * chore: corrected copyright year and added comment on NHWC. * chore: fixed the black version and ran formatting. * chore: ran make style. * chore: removed from_pt argument from test, ran make style. * rebasing * rebasing and removing playground.py. * rebasing * rebasing and removing playground.py. * fix: tests in the convnext subclass, ran make style. * rebasing * rebasing and removing playground.py. 
* rebasing * rebasing and removing playground.py. * chore: moved convnext test to the correct location * fix: locations for the test file of convnext. * fix: convnext tests. * chore: applied sgugger's suggestion for dealing w/ output_attentions. * chore: added comments. * chore: applied updated quality enviornment style. * chore: applied formatting with quality enviornment. * chore: revert to the previous tests/test_modeling_common.py. * chore: revert to the original test_modeling_common.py * chore: revert to previous states for test_modeling_tf_common.py and modeling_tf_utils.py * fix: tests for convnext. * chore: removed output_attentions argument from convnext config. * chore: revert to the earlier tf utils. * fix: output shapes of the hidden states * chore: removed unnecessary comment * chore: reverting to the right test_modeling_tf_common.py. * Styling nits Co-authored-by: ariG23498 <aritra.born2fly@gmail.com> Co-authored-by: Joao Gante <joao@huggingface.co> Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com> * minor changes * doc fix in feature extractor * doc * typose * removed detr logic from config * removed detr logic from config * removed num_labels * small fix in the config * auxilary -> auxiliary * make style * some test is failing * fix a weird char in config prevending doc-builder * retry to fix the doc-builder issue * make style * new try to fix the doc builder * CI * change weights to facebook Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> Co-authored-by: ariG23498 <aritra.born2fly@gmail.com> Co-authored-by: Joao Gante <joao@huggingface.co> Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
https://github.com/huggingface/transformers.git
def remove_low_and_no_objects(self, masks, scores, labels, object_mask_threshold, num_labels):
    if not (masks.shape[0] == scores.shape[0] == labels.shape[0]):
        raise ValueError("mask, scores and labels must have the same shape!")

    to_keep = labels.ne(num_labels) & (scores > object_mask_threshold)

    return masks[to_keep], scores[to_keep], labels[to_keep]
75
feature_extraction_maskformer.py
Python
src/transformers/models/maskformer/feature_extraction_maskformer.py
d83d22f578276e9f201b0b3b0f8f9bd68e86c133
transformers
2
129,202
14
10
3
46
9
0
14
71
update_context
[doc] Fix sklearn doc error, introduce MyST markdown parser (#21527)
https://github.com/ray-project/ray.git
def update_context(app, pagename, templatename, context, doctree):
    context["feedback_form_url"] = feedback_form_url(app.config.project, pagename)


# see also http://searchvoidstar.tumblr.com/post/125486358368/making-pdfs-from-markdown-on-readthedocsorg-using
29
conf.py
Python
doc/source/conf.py
703c1610348615dcb8c2d141a0c46675084660f5
ray
1
294,547
18
9
10
83
10
0
21
104
_async_update_and_abort_for_matching_unique_id
Add support for setting up encrypted samsung tvs from config flow (#68717) Co-authored-by: epenet <epenet@users.noreply.github.com>
https://github.com/home-assistant/core.git
def _async_update_and_abort_for_matching_unique_id(self) -> None:
    updates = {CONF_HOST: self._host}
    if self._mac:
        updates[CONF_MAC] = self._mac
    if self._ssdp_rendering_control_location:
        updates[
            CONF_SSDP_RENDERING_CONTROL_LOCATION
        ] = self._ssdp_rendering_control_location
    self._abort_if_unique_id_configured(updates=updates)
51
config_flow.py
Python
homeassistant/components/samsungtv/config_flow.py
cc75cebfc5b3e9316cdbaf82c5c72437521f819b
core
3
245,867
162
11
74
862
54
0
338
1,252
test_condinst_bboxhead_loss
[Feature]: Support Condinst (#9223) * [Feature]: support condinst for instance segmentation * update * update * update * fix config name and add test unit * fix squeeze error * add README and chang mask to poly
https://github.com/open-mmlab/mmdetection.git
def test_condinst_bboxhead_loss(self):
    s = 256
    img_metas = [{
        'img_shape': (s, s, 3),
        'pad_shape': (s, s, 3),
        'scale_factor': 1,
    }]
    condinst_bboxhead = CondInstBboxHead(
        num_classes=4,
        in_channels=1,
        feat_channels=1,
        stacked_convs=1,
        norm_cfg=None)

    # Fcos head expects a multiple levels of features per image
    feats = (
        torch.rand(1, 1, s // stride[1], s // stride[0])
        for stride in condinst_bboxhead.prior_generator.strides)
    cls_scores, bbox_preds, centernesses, param_preds = \
        condinst_bboxhead.forward(feats)

    # Test that empty ground truth encourages the network to
    # predict background
    gt_instances = InstanceData()
    gt_instances.bboxes = torch.empty((0, 4))
    gt_instances.labels = torch.LongTensor([])
    gt_instances.masks = _rand_masks(0, gt_instances.bboxes.numpy(), s, s)

    empty_gt_losses = condinst_bboxhead.loss_by_feat(
        cls_scores, bbox_preds, centernesses, param_preds, [gt_instances],
        img_metas)
    # When there is no truth, the cls loss should be nonzero but
    # box loss and centerness loss should be zero
    empty_cls_loss = empty_gt_losses['loss_cls'].item()
    empty_box_loss = empty_gt_losses['loss_bbox'].item()
    empty_ctr_loss = empty_gt_losses['loss_centerness'].item()
    self.assertGreater(empty_cls_loss, 0, 'cls loss should be non-zero')
    self.assertEqual(
        empty_box_loss, 0,
        'there should be no box loss when there are no true boxes')
    self.assertEqual(
        empty_ctr_loss, 0,
        'there should be no centerness loss when there are no true boxes')

    # When truth is non-empty then all cls, box loss and centerness loss
    # should be nonzero for random inputs
    gt_instances = InstanceData()
    gt_instances.bboxes = torch.Tensor(
        [[23.6667, 23.8757, 238.6326, 151.8874]])
    gt_instances.labels = torch.LongTensor([2])
    gt_instances.masks = _rand_masks(1, gt_instances.bboxes.numpy(), s, s)

    one_gt_losses = condinst_bboxhead.loss_by_feat(cls_scores, bbox_preds,
                                                   centernesses, param_preds,
                                                   [gt_instances], img_metas)
    onegt_cls_loss = one_gt_losses['loss_cls'].item()
    onegt_box_loss = one_gt_losses['loss_bbox'].item()
    onegt_ctr_loss = one_gt_losses['loss_centerness'].item()
    self.assertGreater(onegt_cls_loss, 0, 'cls loss should be non-zero')
    self.assertGreater(onegt_box_loss, 0, 'box loss should be non-zero')
    self.assertGreater(onegt_ctr_loss, 0,
                       'centerness loss should be non-zero')

    # Test the `center_sampling` works fine.
    condinst_bboxhead.center_sampling = True
    ctrsamp_losses = condinst_bboxhead.loss_by_feat(
        cls_scores, bbox_preds, centernesses, param_preds, [gt_instances],
        img_metas)
    ctrsamp_cls_loss = ctrsamp_losses['loss_cls'].item()
    ctrsamp_box_loss = ctrsamp_losses['loss_bbox'].item()
    ctrsamp_ctr_loss = ctrsamp_losses['loss_centerness'].item()
    self.assertGreater(ctrsamp_cls_loss, 0, 'cls loss should be non-zero')
    self.assertGreater(ctrsamp_box_loss, 0, 'box loss should be non-zero')
    self.assertGreater(ctrsamp_ctr_loss, 0,
                       'centerness loss should be non-zero')

    # Test the `norm_on_bbox` works fine.
    condinst_bboxhead.norm_on_bbox = True
    normbox_losses = condinst_bboxhead.loss_by_feat(
        cls_scores, bbox_preds, centernesses, param_preds, [gt_instances],
        img_metas)
    normbox_cls_loss = normbox_losses['loss_cls'].item()
    normbox_box_loss = normbox_losses['loss_bbox'].item()
    normbox_ctr_loss = normbox_losses['loss_centerness'].item()
    self.assertGreater(normbox_cls_loss, 0, 'cls loss should be non-zero')
    self.assertGreater(normbox_box_loss, 0, 'box loss should be non-zero')
    self.assertGreater(normbox_ctr_loss, 0,
                       'centerness loss should be non-zero')
545
test_condinst_head.py
Python
tests/test_models/test_dense_heads/test_condinst_head.py
79c8295801acedee0cbdbf128a01b9fe162646b0
mmdetection
2
57,784
5
6
13
18
4
0
5
12
test_flow_timeouts_are_not_crashes
Add `Crashed` as canonical state type (PrefectHQ/orion#2353) Co-authored-by: Jeremiah Lowin <153965+jlowin@users.noreply.github.com>
https://github.com/PrefectHQ/prefect.git
async def test_flow_timeouts_are_not_crashes(self, flow_run, orion_client):
80
test_engine.py
Python
tests/test_engine.py
907fc7c0a895b9f46484690be44a2a5660b320ee
prefect
1
269,772
72
12
17
171
16
0
107
355
_get_benchmark_name
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _get_benchmark_name(self):
    stack = tf_inspect.stack()
    name = None
    for frame in stack[::-1]:
        f_locals = frame[0].f_locals
        f_self = f_locals.get("self", None)
        if isinstance(f_self, tf.test.Benchmark):
            name = frame[3]  # Get the method name
            # This is a hack to get around the fact that some methods might have a
            # disable_tfrt decorator around them. In that case a function called
            # 'decorated' wraps the real called function underneath and so we
            # peek one deeper into the stack to get the real name.
            if name == "decorated":
                continue
            else:
                break
    if name is None:
        raise ValueError("Unable to determine calling Benchmark function.")
    if tf.__internal__.is_tfrt_enabled():
        name = name + "_tfrt"
    return name
97
eager_microbenchmarks_test.py
Python
keras/benchmarks/eager_microbenchmarks_test.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
6
271,559
61
13
23
258
29
0
86
303
_set_save_spec
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _set_save_spec(self, inputs, args=None, kwargs=None):
    if self._saved_model_inputs_spec is not None:
        return  # Already set.

    args = args or []
    kwargs = kwargs or {}

    input_names = self.input_names
    if not input_names:
        input_names = compile_utils.create_pseudo_input_names(inputs)

    flat_inputs = tf.nest.flatten(inputs)
    inputs_spec = []
    for name, tensor in zip(input_names, flat_inputs):
        inputs_spec.append(
            tf_utils.get_tensor_spec(tensor, dynamic_batch=False, name=name)
        )
    inputs_spec = tf.nest.pack_sequence_as(inputs, inputs_spec)
    super()._set_save_spec(inputs_spec, args, kwargs)

    # Store the input shapes
    if (
        self.__class__.__name__ == "Sequential"
        and self._build_input_shape is None
    ):
        self._build_input_shape = tf.nest.map_structure(
            lambda x: None if x is None else x.shape, inputs_spec
        )
165
training.py
Python
keras/engine/training.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
9
138,198
7
6
7
27
5
0
7
21
get_metrics
[data] New executor backend [1/n]--- Add basic interfaces (#31216) This PR adds the basic interfaces and feature flags; split out from https://github.com/ray-project/ray/pull/30903/files See REP ray-project/enhancements#18 for more details.
https://github.com/ray-project/ray.git
def get_metrics(self) -> Dict[str, int]:
    return {}
16
interfaces.py
Python
python/ray/data/_internal/execution/interfaces.py
2cd4637521a0c75e0375266ff52d3dfca2084c2d
ray
1
6,396
45
14
19
190
9
0
72
184
_get_text_feature_max_length
Improve AutoML heuristics for text classification (#1815) * Improve AutoML heuristics for text classification Co-authored-by: Anne Holler <anne@vmware.com>
https://github.com/ludwig-ai/ludwig.git
def _get_text_feature_max_length(config, training_set_metadata) -> int:
    max_length = 0
    for feature in config["input_features"]:
        if feature["type"] == TEXT:
            feature_max_len = training_set_metadata[feature["name"]]["word_max_sequence_length"]
            if feature_max_len > max_length:
                max_length = feature_max_len
    if (
        ("preprocessing" in config)
        and (TEXT in config["preprocessing"])
        and ("word_sequence_length_limit" in config["preprocessing"][TEXT])
    ):
        limit = config["preprocessing"][TEXT]["word_sequence_length_limit"]
    else:
        limit = 256  # Preprocessing default word_sequence_length_limit = 256
    if max_length > limit + 2:  # For start and stop symbols.
        max_length = limit + 2
    return max_length
110
auto_tune_config.py
Python
ludwig/automl/auto_tune_config.py
d77aaf8da39f04a353a3a08fb699ae8a96ffea3a
ludwig
8
260,274
17
11
5
64
11
0
18
37
test_warning_new_default
API prepare change of default solver QuantileRegressor (#23637) Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org>
https://github.com/scikit-learn/scikit-learn.git
def test_warning_new_default(X_y_data):
    X, y = X_y_data
    model = QuantileRegressor()
    with pytest.warns(FutureWarning, match="The default solver will change"):
        model.fit(X, y)
36
test_quantile.py
Python
sklearn/linear_model/tests/test_quantile.py
a7e27dab2ce76281425e66bfc8611a84a2bf6afa
scikit-learn
1
94,771
19
9
6
75
6
0
25
47
resolve_full_name
feat(devtools): Add support to dump the import graph (#37698)
https://github.com/getsentry/sentry.git
def resolve_full_name(base, name, level):
    if level == 0:
        return name
    bits = base.rsplit(".", level - 1)
    base = bits[0]
    return f"{base}.{name}" if name else base
42
_importchecker.py
Python
src/sentry/_importchecker.py
36d17659021fab2e2a618a8c8a9c003bc0f359fe
sentry
3
189,404
15
12
6
86
12
0
16
62
add_axes
Fix `add_axes` in :class:`~.VectorScene`. (#2444) * fix bug * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
https://github.com/ManimCommunity/manim.git
def add_axes(self, animate=False, color=WHITE, **kwargs):
    axes = Axes(color=color, axis_config={"unit_size": 1})
    if animate:
        self.play(Create(axes))
    self.add(axes)
    return axes
53
vector_space_scene.py
Python
manim/scene/vector_space_scene.py
5789be81609c8bf6d98d1d87d4061477d0cd37b9
manim
2
337,603
70
13
36
182
26
0
79
243
debug_launcher
Refactor version checking into a utility (#395) Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
https://github.com/huggingface/accelerate.git
def debug_launcher(function, args=(), num_processes=2):
    if is_torch_version("<", "1.5.0"):
        raise ImportError(
            "Using `debug_launcher` for distributed training on GPUs require torch >= 1.5.0, got "
            f"{torch.__version__}."
        )

    from torch.multiprocessing import start_processes

    with tempfile.NamedTemporaryFile() as tmp_file:
        # torch.distributed will expect a few environment variable to be here. We set the ones common to each
        # process here (the other ones will be set be the launcher).
        with patch_environment(
            world_size=num_processes,
            master_addr="127.0.01",
            master_port="29500",
            mixed_precision="no",
            accelerate_debug_rdv_file=tmp_file.name,
            use_cpu="yes",
        ):
            launcher = PrepareForLaunch(function, debug=True)
            start_processes(launcher, args=args, nprocs=num_processes, start_method="fork")
102
launchers.py
Python
src/accelerate/launchers.py
f6ec2660f01e5bb37399407b3a01b72a43ceb328
accelerate
2
6,553
55
14
18
229
30
0
62
253
extract_pytorch_structures
feat: Modify Trainer to use marshmallow_dataclass syntax for handling hyperparameters. Add basic scripting for docstring extraction to marshmallow schema. Fix some existing marshmallow issues. (#1606)
https://github.com/ludwig-ai/ludwig.git
def extract_pytorch_structures():
    for opt in lmo.optimizer_registry:
        # Get the torch class:
        optimizer_class = lmo.optimizer_registry[opt][0]

        # Parse and clean the class structure:
        path = get_fully_qualified_class_name(optimizer_class)
        opt_struct = get_pytkdocs_structure_for_path(path, "google")["objects"][0]
        prune_pytorch_structures(opt_struct)

        # Write it to a file:
        parent_dir = str(Path(__file__).parent.parent)
        filename = os.path.join(parent_dir, "ludwig/marshmallow/generated/torch/", optimizer_class.__name__) + ".json"
        os.makedirs(os.path.dirname(filename), exist_ok=True)
        with open(filename, "w") as outfile:
            json.dump(
                opt_struct,
                outfile,
                indent=4,
                sort_keys=True,
                separators=(",", ": "),
            )
            outfile.write("\n")
136
extract_schema.py
Python
scripts/extract_schema.py
23a33eef3bc7ea3ba33ec56dc9b56ba38462648a
ludwig
2
154,614
50
16
24
387
19
0
86
394
test_uint
FEAT-#4946: Replace OmniSci with HDK (#4947) Co-authored-by: Iaroslav Igoshev <Poolliver868@mail.ru> Signed-off-by: Andrey Pavlenko <andrey.a.pavlenko@gmail.com>
https://github.com/modin-project/modin.git
def test_uint(self, md_df_constructor):
    pd_df = pandas.DataFrame(
        {
            "uint8_in_int_bounds": np.array([1, 2, 3], dtype="uint8"),
            "uint8_out-of_int_bounds": np.array(
                [(2**8) - 1, (2**8) - 2, (2**8) - 3], dtype="uint8"
            ),
            "uint16_in_int_bounds": np.array([1, 2, 3], dtype="uint16"),
            "uint16_out-of_int_bounds": np.array(
                [(2**16) - 1, (2**16) - 2, (2**16) - 3], dtype="uint16"
            ),
            "uint32_in_int_bounds": np.array([1, 2, 3], dtype="uint32"),
            "uint32_out-of_int_bounds": np.array(
                [(2**32) - 1, (2**32) - 2, (2**32) - 3], dtype="uint32"
            ),
            "uint64_in_int_bounds": np.array([1, 2, 3], dtype="uint64"),
        }
    )
    md_df = md_df_constructor(pd_df)

    with ForceHdkImport(md_df) as instance:
        md_df_exported = instance.export_frames()[0]
        result = md_df_exported.values
        reference = pd_df.values
        np.testing.assert_array_equal(result, reference)
248
test_dataframe.py
Python
modin/experimental/core/execution/native/implementations/hdk_on_native/test/test_dataframe.py
e5b1888cd932909e49194d58035da34b210b91c4
modin
1
154,579
79
16
36
387
35
0
126
604
astype
FEAT-#4946: Replace OmniSci with HDK (#4947) Co-authored-by: Iaroslav Igoshev <Poolliver868@mail.ru> Signed-off-by: Andrey Pavlenko <andrey.a.pavlenko@gmail.com>
https://github.com/modin-project/modin.git
def astype(self, col_dtypes, **kwargs):
    columns = col_dtypes.keys()
    new_dtypes = self._dtypes.copy()
    for column in columns:
        dtype = col_dtypes[column]
        if (
            not isinstance(dtype, type(self._dtypes[column]))
            or dtype != self._dtypes[column]
        ):
            # Update the new dtype series to the proper pandas dtype
            try:
                new_dtype = np.dtype(dtype)
            except TypeError:
                new_dtype = dtype

            if dtype != np.int32 and new_dtype == np.int32:
                new_dtypes[column] = np.dtype("int64")
            elif dtype != np.float32 and new_dtype == np.float32:
                new_dtypes[column] = np.dtype("float64")
            # We cannot infer without computing the dtype if
            elif isinstance(new_dtype, str) and new_dtype == "category":
                raise NotImplementedError("unsupported type conversion")
            else:
                new_dtypes[column] = new_dtype
    exprs = self._index_exprs()
    for col in self.columns:
        col_expr = self.ref(col)
        if col in columns:
            exprs[col] = col_expr.cast(new_dtypes[col])
        else:
            exprs[col] = col_expr

    new_op = TransformNode(self, exprs)
    return self.__constructor__(
        columns=self.columns,
        dtypes=new_dtypes,
        op=new_op,
        index_cols=self._index_cols,
        force_execution_mode=self._force_execution_mode,
    )
244
dataframe.py
Python
modin/experimental/core/execution/native/implementations/hdk_on_native/dataframe/dataframe.py
e5b1888cd932909e49194d58035da34b210b91c4
modin
13
100,326
20
12
6
84
10
0
21
87
_clear_trace_variables
Update code to support Tensorflow versions up to 2.8 (#1213) * Update maximum tf version in setup + requirements * - bump max version of tf version in launcher - standardise tf version check * update keras get_custom_objects for tf>2.6 * bugfix: force black text in GUI file dialogs (linux) * dssim loss - Move to stock tf.ssim function * Update optimizer imports for compatibility * fix logging for tf2.8 * Fix GUI graphing for TF2.8 * update tests * bump requirements.txt versions * Remove limit on nvidia-ml-py * Graphing bugfixes - Prevent live graph from displaying if data not yet available * bugfix: Live graph. Collect loss labels correctly * fix: live graph - swallow inconsistent loss errors * Bugfix: Prevent live graph from clearing during training * Fix graphing for AMD
https://github.com/deepfakes/faceswap.git
def _clear_trace_variables(self):
    if self._trace_vars:
        for name, (var, trace) in self._trace_vars.items():
            logger.debug("Clearing trace from variable: %s", name)
            var.trace_vdelete("w", trace)
        self._trace_vars = {}
50
display_command.py
Python
lib/gui/display_command.py
c1512fd41d86ef47a5d1ce618d6d755ef7cbacdf
faceswap
3
86,527
33
15
10
116
18
0
33
126
update_existing_attachments
ref(perf-issues): Modularize post_process_group (ISP-11) (#39594) Fully modularizes `post_process_group` as final step before adding multiple event types to it.
https://github.com/getsentry/sentry.git
def update_existing_attachments(job):
    # Patch attachments that were ingested on the standalone path.
    with sentry_sdk.start_span(op="tasks.post_process_group.update_existing_attachments"):
        try:
            from sentry.models import EventAttachment

            event = job["event"]

            EventAttachment.objects.filter(
                project_id=event.project_id, event_id=event.event_id
            ).update(group_id=event.group_id)
        except Exception:
            logger.exception("Failed to update existing attachments")
66
post_process.py
Python
src/sentry/tasks/post_process.py
bc59434031930199dcdc056943c2ba4a17bbd5c8
sentry
2
306,781
31
8
3
117
17
1
31
96
test_convert_from_miles
Refactor distance, speed and volume utils (#77952) * Refactor distance util * Fix bmw connected drive tests * Adjust here travel time tests * Adjust waze travel time tests * Adjust test_distance * Adjust rounding values * Adjust more tests * Adjust volume conversions * Add tests
https://github.com/home-assistant/core.git
def test_convert_from_miles(unit, expected):
    miles = 5
    assert distance_util.convert(miles, LENGTH_MILES, unit) == pytest.approx(expected)


@pytest.mark.parametrize(
    "unit,expected",
    [
        (LENGTH_KILOMETERS, 0.0045720000000000005),
        (LENGTH_METERS, 4.572),
        (LENGTH_CENTIMETERS, 457.2),
        (LENGTH_MILLIMETERS, 4572),
        (LENGTH_MILES, 0.002840908212),
        (LENGTH_FEET, 15.00000048),
        (LENGTH_INCHES, 180.0000972),
    ],
)
@pytest.mark.parametrize( "unit,expected", [ (LENGTH_KILOMETERS, 0.0045720000000000005), (LENGTH_METERS, 4.572), (LENGTH_CENTIMETERS, 457.2), (LENGTH_MILLIMETERS, 4572), (LENGTH_MILES, 0.002840908212), (LENGTH_FEET, 15.00000048), (LENGTH_INCHES, 180.0000972), ], )
29
test_distance.py
Python
tests/util/test_distance.py
9490771a8737892a7a86afd866a3520b836779fd
core
1
276,113
14
6
7
18
4
0
14
20
layer_call_wrapper
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def layer_call_wrapper(call_collection, method, name):
    # Create wrapper that deals with losses and call context.
37
save_impl.py
Python
keras/saving/saved_model/save_impl.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
85,584
77
16
35
386
37
0
90
526
test_dynamic_smapling_rule_edition
ref(sampling): Create sampling audit logs - (#38501)
https://github.com/getsentry/sentry.git
def test_dynamic_smapling_rule_edition(self):
    dynamic_sampling = _dyn_sampling_data()
    project = self.project  # force creation
    # Update project adding three rules
    project.update_option("sentry:dynamic_sampling", dynamic_sampling)

    self.login_as(self.user)

    token = ApiToken.objects.create(user=self.user, scope_list=["project:write"])
    authorization = f"Bearer {token.token}"

    url = reverse(
        "sentry-api-0-project-details",
        kwargs={
            "organization_slug": self.project.organization.slug,
            "project_slug": self.project.slug,
        },
    )

    data = {
        "dynamicSampling": {
            "rules": [
                {**dynamic_sampling["rules"][0], "sampleRate": 0.2},
                dynamic_sampling["rules"][1],
                dynamic_sampling["rules"][2],
            ]
        }
    }

    with Feature({"organizations:server-side-sampling": True}):
        self.client.put(url, format="json", HTTP_AUTHORIZATION=authorization, data=data)

    assert AuditLogEntry.objects.filter(
        organization=self.project.organization,
        event=audit_log.get_event_id("SAMPLING_RULE_EDIT"),
    ).exists()

    # Make sure that the early return logic worked, as only the above audit log was triggered
    with pytest.raises(AuditLogEntry.DoesNotExist):
        AuditLogEntry.objects.get(
            organization=self.organization,
            target_object=self.project.id,
            event=audit_log.get_event_id("PROJECT_EDIT"),
        )
230
test_project_details.py
Python
tests/sentry/api/endpoints/test_project_details.py
414347e903b490c69dbe5115bb5572c97aab4493
sentry
1
291,598
18
10
6
72
8
0
19
58
async_update_movies
Add Twinkly effects (#82861) * Add Twinkly effects * Remove spurious comment
https://github.com/home-assistant/core.git
async def async_update_movies(self) -> None:
    movies = await self._client.get_saved_movies()
    _LOGGER.debug("Movies: %s", movies)
    if "movies" in movies:
        self._movies = movies["movies"]
39
light.py
Python
homeassistant/components/twinkly/light.py
33cd59d3c2e1f945c16b39d929349e3eeb4cfb9a
core
2
266,660
26
10
9
93
11
0
29
65
collect_bootstrap
ansible-test - Fix consistency of managed venvs. (#77028)
https://github.com/ansible/ansible.git
def collect_bootstrap(python):  # type: (PythonConfig) -> t.List[PipCommand]
    infrastructure_packages = get_venv_packages(python)
    pip_version = infrastructure_packages['pip']
    packages = [f'{name}=={version}' for name, version in infrastructure_packages.items()]

    bootstrap = PipBootstrap(
        pip_version=pip_version,
        packages=packages,
    )

    return [bootstrap]
51
python_requirements.py
Python
test/lib/ansible_test/_internal/python_requirements.py
68fb3bf90efa3a722ba5ab7d66b1b22adc73198c
ansible
2
3,815
20
10
9
125
17
0
27
98
test_jobs_completed_immediately
🎉 🎉 Source FB Marketing: performance and reliability fixes (#9805) * Facebook Marketing performance improvement * add comments and little refactoring * fix integration tests with the new config * improve job status handling, limit concurrency to 10 * fix campaign jobs, refactor manager * big refactoring of async jobs, support random order of slices * update source _read_incremental to hook new state logic * fix issues with timeout * remove debugging and clean up, improve retry logic * merge changes from #8234 * fix call super _read_increment * generalize batch execution, add use_batch flag * improve coverage, do some refactoring of spec * update test, remove overrides of source * add split by AdSet * add smaller insights * fix end_date < start_date case * add account_id to PK * add notes * fix new streams * fix reversed incremental stream * update spec.json for SAT * upgrade CDK and bump version Co-authored-by: Dmytro Rezchykov <dmitry.rezchykov@zazmic.com> Co-authored-by: Eugene Kulak <kulak.eugene@gmail.com>
https://github.com/airbytehq/airbyte.git
def test_jobs_completed_immediately(self, api, mocker, time_mock):
    jobs = [
        mocker.Mock(spec=InsightAsyncJob, attempt_number=1, failed=False),
        mocker.Mock(spec=InsightAsyncJob, attempt_number=1, failed=False),
    ]
    manager = InsightAsyncJobManager(api=api, jobs=jobs)

    completed_jobs = list(manager.completed_jobs())

    assert jobs == completed_jobs
    time_mock.sleep.assert_not_called()
83
test_async_job_manager.py
Python
airbyte-integrations/connectors/source-facebook-marketing/unit_tests/test_async_job_manager.py
a3aae8017a0a40ff2006e2567f71dccb04c997a5
airbyte
1
191,668
15
11
5
68
7
0
17
29
test__split_list_double_doc
Harrison/map reduce merge (#344) Co-authored-by: John Nay <JohnNay@users.noreply.github.com>
https://github.com/hwchase17/langchain.git
def test__split_list_double_doc() -> None:
    docs = [Document(page_content="foo"), Document(page_content="bar")]
    doc_list = _split_list_of_docs(docs, _fake_docs_len_func, 100)
    assert doc_list == [docs]
40
test_combine_documents.py
Python
tests/unit_tests/chains/test_combine_documents.py
c1b50b7b13545b1549c051182f547cfc2f8ed0be
langchain
1
266,482
30
14
7
151
18
0
36
98
get_default_targets
ansible-test - Defer loading of completion entries. (#76852) * ansible-test - Defer loading of completion entries. This avoids a traceback when running ansible-test outside of a supported directory.
https://github.com/ansible/ansible.git
def get_default_targets(self, context):  # type: (HostContext) -> t.List[ControllerConfig]
    if self.name in filter_completion(docker_completion()):
        defaults = self.get_defaults(context)
        pythons = {version: defaults.get_python_path(version) for version in defaults.supported_pythons}
    else:
        pythons = {context.controller_config.python.version: context.controller_config.python.path}

    return [ControllerConfig(python=NativePythonConfig(version=version, path=path)) for version, path in pythons.items()]
95
host_configs.py
Python
test/lib/ansible_test/_internal/host_configs.py
e9ffcf3c85f2fa40a20ee03bd9c1ce7296574cd1
ansible
4
129,038
78
14
54
242
27
0
96
315
step
[tune] Add test for heterogeneous resource request deadlocks (#21397) This adds a test for potential resource deadlocks in experiments with heterogeneous PGFs. If the PGF of a later trial becomes ready before that of a previous trial, we could run into a deadlock. This is currently avoided, but untested, flagging the code path for removal in #21387.
https://github.com/ray-project/ray.git
def step(self):
    self._updated_queue = False

    if self.is_finished():
        raise TuneError("Called step when all trials finished?")
    with warn_if_slow("on_step_begin"):
        self.trial_executor.on_step_begin(self.get_trials())
    with warn_if_slow("callbacks.on_step_begin"):
        self._callbacks.on_step_begin(
            iteration=self._iteration, trials=self._trials)

    # This will contain the next trial to start
    next_trial = self._get_next_trial()  # blocking

    # Create pending trials. If the queue was updated before, only
    # continue updating if this was successful (next_trial is not None)
    if not self._updated_queue or (self._updated_queue and next_trial):
        num_pending_trials = len(
            [t for t in self._live_trials if t.status == Trial.PENDING])
        while num_pending_trials < self._max_pending_trials:
            if not self._update_trial_queue(blocking=False):
                break
            num_pending_trials += 1

    # Update status of staged placement groups
    self.trial_executor.stage_and_update_status(self._live_trials)
352
trial_runner.py
Python
python/ray/tune/trial_runner.py
976ece4bc43abdb628cf4cbffc8546abab723a6d
ray
20
42,090
39
10
5
47
5
0
46
88
show
Add rudimentary themeing support (#2929) * WIP Plot.theme * Add default values for theme to match set_theme() * Depend on matplotib style defaults and update rcParams more selectively * Fix lines test * Improve test coverage
https://github.com/mwaskom/seaborn.git
def show(self, **kwargs) -> None:
    # TODO make pyplot configurable at the class level, and when not using,
    # import IPython.display and call on self to populate cell output?

    # Keep an eye on whether matplotlib implements "attaching" an existing
    # figure to pyplot: https://github.com/matplotlib/matplotlib/pull/14024
    self.plot(pyplot=True).show(**kwargs)
25
plot.py
Python
seaborn/_core/plot.py
762db897b52d16ab2f164d5103df4cc26c1d0503
seaborn
1
96,483
77
12
34
329
23
0
162
333
get_time_params
ADS v1. (#31533) * ADS v1. * Using QueryBuilder to automatically be aware of filters selected. Removed code to submit SnQL query. * Incorporated PR feedback; using timeseries_query instead of QueryBuilder. * Added ADS call. * Failed attempt at writing an unit test :( * Made get_anomalies() a helped function. Got unit tests working. * Moved ADS connection params into server.py * Made map_snuba_queries() a helper function, and added relevant unit tests. * Removed hard-coded params in unit tests. * Included start and end into ADS call. Updated unit tests accordingly. Added a unit test. * Incorporated PR feedback. * format tests * Added feature check and a corresponding unit test. * Serializing ADS response. * Incorporated some more PR feedback. * Incorporated some more PR feedback. Co-authored-by: trillville <tillman.elser@sentry.io>
https://github.com/getsentry/sentry.git
def get_time_params(start, end):
    anomaly_detection_range = end - start

    if anomaly_detection_range > timedelta(days=14):
        snuba_range = timedelta(days=90)
        granularity = 3600
    elif anomaly_detection_range > timedelta(days=1):
        granularity = 1200
        snuba_range = timedelta(days=28)
    else:
        snuba_range = timedelta(days=14)
        granularity = 600

    additional_time_needed = snuba_range - anomaly_detection_range
    now = datetime.utcnow().astimezone(timezone.utc)
    start_limit = now - timedelta(days=90)
    end_limit = now
    start = max(start, start_limit)
    end = min(end, end_limit)

    # By default, expand windows equally in both directions
    window_increase = additional_time_needed / 2
    query_start, query_end = None, None

    # If window will go back farther than 90 days, use today - 90 as start
    if start - window_increase < start_limit:
        query_start = now - timedelta(days=90)
        additional_time_needed -= start - query_start
        window_increase = additional_time_needed
    # If window extends beyond today, use today as end
    if end + window_increase > end_limit:
        query_end = now
        additional_time_needed -= query_end - end
        window_increase = additional_time_needed

    query_start = query_start or max(start - window_increase, start_limit)
    query_end = query_end or min(end + window_increase, end_limit)

    return MappedParams(
        query_start,
        query_end,
        granularity,
    )
205
organization_transaction_anomaly_detection.py
Python
src/sentry/api/endpoints/organization_transaction_anomaly_detection.py
d1314a15f4e591c93c796f79696e779329cbf369
sentry
7
138,371
12
11
7
69
10
0
12
31
test_status_invalid_runtime_env
[serve] Catch `RuntimeEnvSetupError` when checking status (#31282) When the task `run_graph` is submitted, fetching the object ref result can raise `RayTaskError` if the task fails or `RuntimeEnvSetupError` if setting up the runtime env for the task fails. We want to deal with the latter case as well.
https://github.com/ray-project/ray.git
def test_status_invalid_runtime_env(ray_start_stop):
    config_file_name = os.path.join(
        os.path.dirname(__file__), "test_config_files", "bad_runtime_env.yaml"
    )

    subprocess.check_output(["serve", "deploy", config_file_name])
45
test_cli.py
Python
python/ray/serve/tests/test_cli.py
fe7ef833783c7e2e24dc5acd78ecced61ea53b0b
ray
1
264,447
8
8
2
43
7
1
8
13
meta
Closes #8600: Document built-in template tags & filters
https://github.com/netbox-community/netbox.git
def meta(model, attr):
    return getattr(model._meta, attr, '')


@register.filter()
@register.filter()
19
filters.py
Python
netbox/utilities/templatetags/builtins/filters.py
7c105019d8ae9205051c302e7499b33a455f9176
netbox
1
46,403
20
14
9
93
16
0
21
72
reflect_tables
Add map_index and run_id to TaskFail (#22260) TaskFail entities always belong to a TaskInstance. The PK for TaskInstance has changed, so we need to update TaskFail to have the new columns.
https://github.com/apache/airflow.git
def reflect_tables(models, session):
    import sqlalchemy.schema

    metadata = sqlalchemy.schema.MetaData(session.bind)
    for model in models:
        try:
            metadata.reflect(only=[model.__tablename__], extend_existing=True, resolve_fks=False)
        except exc.InvalidRequestError:
            continue
    return metadata
59
db.py
Python
airflow/utils/db.py
f06b3955b1d937138fb38021a6a373b94ae8f9e8
airflow
3
291,831
64
11
31
292
31
0
66
464
_data_to_save
Add `translation_key` property to entites (#82701) * Add translation_key attribute to entity state * Update accuweather test * Index entity translation keys by platform * Store translation key in entity registry
https://github.com/home-assistant/core.git
def _data_to_save(self) -> dict[str, Any]:
    data: dict[str, Any] = {}

    data["entities"] = [
        {
            "area_id": entry.area_id,
            "capabilities": entry.capabilities,
            "config_entry_id": entry.config_entry_id,
            "device_class": entry.device_class,
            "device_id": entry.device_id,
            "disabled_by": entry.disabled_by,
            "entity_category": entry.entity_category,
            "entity_id": entry.entity_id,
            "hidden_by": entry.hidden_by,
            "icon": entry.icon,
            "id": entry.id,
            "has_entity_name": entry.has_entity_name,
            "name": entry.name,
            "options": entry.options,
            "original_device_class": entry.original_device_class,
            "original_icon": entry.original_icon,
            "original_name": entry.original_name,
            "platform": entry.platform,
            "supported_features": entry.supported_features,
            "translation_key": entry.translation_key,
            "unique_id": entry.unique_id,
            "unit_of_measurement": entry.unit_of_measurement,
        }
        for entry in self.entities.values()
    ]

    return data
177
entity_registry.py
Python
homeassistant/helpers/entity_registry.py
8e617bbc1d3ef07254a4151f2d014a86e6e8d05a
core
2
118,984
12
9
7
85
11
0
12
61
test_startup_shutdown
AppSession: handle script events on the main thread (#4467) Refactors AppSession to handle all ScriptRunner events on the main thread. - ScriptRunner no longer takes an `enqueue` callback param. Instead, it signals that it wants to enqueue a new ForwardMsg via the `ScriptRunnerEvent.ENQUEUE_FORWARD_MSG` event. (This means that there's now only one channel that ScriptRunner uses to communicate back to AppSession - the `on_event` signal.) - `AppSession.enqueue` is now a private function called `AppSession._enqueue_forward_msg`, for clarity - `AppSession._on_scriptrunner_event` performs all its logic in an ioloop callback, which means that all ScriptRunner events are handled on the main thread _only_. (Previously, _some_ scriptrunner events were handled in a callback, and others were handled in line.) - `app_session_test.py` now has a test for the very convoluted `handle_backmsg_exception` function. (This function should be targeted for refactor down the line!) - There's also a small amount of unrelated cleanups in `app_session_test.py` This is one of the last big pieces that needs to be in place for the "faster-reruns" branch. With this change, AppSession will be able to safely identify and ignore ScriptRunner events that come from a "zombie" ScriptRunner that's still executing. It also (hopefully) makes AppSession easier to reason about: with the exception of the two functions that explicitly enqueue ioloop callbacks to be performed on the main thread, AppSession now executes entirely on the main thread!
https://github.com/streamlit/streamlit.git
def test_startup_shutdown(self):
    scriptrunner = TestScriptRunner("good_script.py")
    scriptrunner.start()
    scriptrunner.join()

    self._assert_no_exceptions(scriptrunner)
    self._assert_control_events(scriptrunner, [ScriptRunnerEvent.SHUTDOWN])
    self._assert_text_deltas(scriptrunner, [])
49
script_runner_test.py
Python
lib/tests/streamlit/scriptrunner/script_runner_test.py
3326118e83ffe87623ea440d0ead245f21106a27
streamlit
1
94,333
47
20
38
265
29
0
66
615
test_category_match_group
test(event_manager): Fix incorrect invocations of manager.save (#36615)
https://github.com/getsentry/sentry.git
def test_category_match_group(self):
    from sentry.grouping.enhancer import Enhancements

    enhancement = Enhancements.from_config_string(
        ,
    )

    event = make_event(
        platform="native",
        exception={
            "values": [
                {
                    "type": "Hello",
                    "stacktrace": {
                        "frames": [
                            {
                                "function": "foo",
                            },
                            {
                                "function": "bar",
                            },
                        ]
                    },
                }
            ]
        },
    )

    manager = EventManager(event)
    manager.normalize()

    grouping_config = {
        "enhancements": enhancement.dumps(),
        "id": "mobile:2021-02-12",
    }

    manager.get_data()["grouping_config"] = grouping_config
    event1 = manager.save(self.project.id)

    event2 = Event(event1.project_id, event1.event_id, data=event1.data)

    assert event1.get_hashes().hashes == event2.get_hashes(grouping_config).hashes
154
test_event_manager.py
Python
tests/sentry/event_manager/test_event_manager.py
39cfdcb446e74732c67ce07d7dd8d8d5ace471b1
sentry
1
181,847
29
15
13
108
11
0
31
178
_update_val
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def _update_val(self, val, result_score_list):
    self._update_pbar()
    if val == "Timeout":
        self._update_pbar(
            pbar_msg=(
                "Skipped pipeline #{0} due to time out. "
                "Continuing to the next pipeline.".format(self._pbar.n)
            )
        )
        result_score_list.append(-float("inf"))
    else:
        result_score_list.append(val)

    return result_score_list
60
base.py
Python
tpot/base.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
2
277,072
18
12
6
73
12
0
19
49
getfullargspec
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def getfullargspec(obj):
    decorators, target = tf.__internal__.decorator.unwrap(obj)
    for d in decorators:
        if d.decorator_argspec is not None:
            return _convert_maybe_argspec_to_fullargspec(d.decorator_argspec)
    return _getfullargspec(target)
45
tf_inspect.py
Python
keras/utils/tf_inspect.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
3
165,238
27
11
15
120
20
0
37
176
_aggregate_with_numba
BUG/REF: Use lru_cache instead of NUMBA_FUNC_CACHE (#46086)
https://github.com/pandas-dev/pandas.git
def _aggregate_with_numba(self, data, func, *args, engine_kwargs=None, **kwargs):
    starts, ends, sorted_index, sorted_data = self._numba_prep(data)
    numba_.validate_udf(func)
    numba_agg_func = numba_.generate_numba_agg_func(
        func, **get_jit_arguments(engine_kwargs, kwargs)
    )
    result = numba_agg_func(
        sorted_data,
        sorted_index,
        starts,
        ends,
        len(data.columns),
        *args,
    )
    return result


# -----------------------------------------------------------------
# apply/agg/transform
81
groupby.py
Python
pandas/core/groupby/groupby.py
696a8e90e2112e263c84bfe87168649c3f5234e6
pandas
1
313,672
6
11
3
37
4
0
6
19
cover_platform_only
Speed up mqtt tests (#73423) Co-authored-by: jbouwh <jan@jbsoft.nl> Co-authored-by: Jan Bouwhuis <jbouwh@users.noreply.github.com>
https://github.com/home-assistant/core.git
def cover_platform_only():
    with patch("homeassistant.components.mqtt.PLATFORMS", [Platform.COVER]):
        yield
18
test_cover.py
Python
tests/components/mqtt/test_cover.py
51b4d15c8cb83bb715222841aa48e83f77ef38ff
core
1
267,890
8
7
3
48
7
0
8
14
get_config_handler_type_map
ansible-test - Use more native type hints. (#78435) * ansible-test - Use more native type hints. Simple search and replace to switch from comments to native type hints for return types of functions with no arguments. * ansible-test - Use more native type hints. Conversion of simple single-line function annotation type comments to native type hints. * ansible-test - Use more native type hints. Conversion of single-line function annotation type comments with default values to native type hints. * ansible-test - Use more native type hints. Manual conversion of type annotation comments for functions which have pylint directives.
https://github.com/ansible/ansible.git
def get_config_handler_type_map() -> t.Dict[t.Type[HostConfig], t.Type[CoverageHandler]]:
    return get_type_map(CoverageHandler, HostConfig)
31
coverage.py
Python
test/lib/ansible_test/_internal/commands/integration/coverage.py
3eb0485dd92c88cc92152d3656d94492db44b183
ansible
1
296,469
8
6
3
25
4
0
8
22
code_format
Replace Alarm Control Panel FORMAT_ constants with CodeFormat enum (#69861)
https://github.com/home-assistant/core.git
def code_format(self) -> CodeFormat | None:
    return self._attr_code_format
14
__init__.py
Python
homeassistant/components/alarm_control_panel/__init__.py
1e4aacaeb12b11090813cb19f1282b6c46705977
core
1
317,566
16
9
23
62
7
0
19
34
test_saving_state_with_sqlalchemy_exception
Use recorder get_instance function to improve typing (#75567)
https://github.com/home-assistant/core.git
def test_saving_state_with_sqlalchemy_exception(hass, hass_recorder, caplog):
    hass = hass_recorder()

    entity_id = "test.recorder"
    state = "restoring_from_db"
    attributes = {"test_attr": 5, "test_attr_10": "nice"}
151
test_init.py
Python
tests/components/recorder/test_init.py
606d5441573f52de961010551b34d23aef3317dc
core
1
286,355
11
9
5
35
4
0
12
35
clean_fraction
[SDK] Allow silencing verbose output in commands that use stocks/load (#3180) * remove verbose on load * Revert implementation of the verbosity setting in stocks controller * Edit docstrings to comply with pydocstyle linting rules * Fix typos in variable names and help text * Add verbosity setting to forex load helper as it uses the stocks helper * Update docstrings to comply with pydocstyle linting rules * Update tests * Fix test relying on local sources settings * Remove old test cassettes * Add new test data * WIP: Fix futures tests * Clean up test file * Fix futures tests having a time component * Fix futures model tests Co-authored-by: James Maslek <jmaslek11@gmail.com> Co-authored-by: Theodore Aptekarev <aptekarev@gmail.com>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def clean_fraction(num, denom):
    try:
        return num / denom
    except TypeError:
        return "N/A"
19
stocks_helper.py
Python
openbb_terminal/stocks/stocks_helper.py
47549cbd9f52a436c06b040fda5b88a7d2bf700a
OpenBBTerminal
2
287,770
9
9
2
31
4
0
9
15
test_get_provider
Clean up Speech-to-text integration and add tests (#79012)
https://github.com/home-assistant/core.git
async def test_get_provider(hass, test_provider):
    assert test_provider == async_get_provider(hass, "test")
17
test_init.py
Python
tests/components/stt/test_init.py
57746642349a3ca62959de4447a4eb5963a84ae1
core
1
35,696
51
11
6
108
11
1
70
145
update_keys_to_ignore
Add Data2Vec (#15507) * Add data2vec model cloned from roberta * Add checkpoint conversion script * Fix copies * Update docs * Add checkpoint conversion script * Remove fairseq data2vec_text script and fix format * Add comment on where to get data2vec_text.py * Remove mock implementation cheat.py and fix style * Fix copies * Remove TF and Flax classes from init * Add back copy from fairseq data2vec_text.py and fix style * Update model name in docs/source/index.mdx to be CamelCase * Revert model name in table to lower-case to get check_table test to pass * Update src/transformers/models/data2vec/__init__.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/convert_data2vec_original_pytorch_checkpoint_to_pytorch.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update docs/source/model_doc/data2vec.mdx Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update docs/source/model_doc/data2vec.mdx Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/auto/configuration_auto.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/configuration_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update tests/test_modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/configuration_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update documentation * Copy-paste Data2VecConfig from BertConfig * Update config checkpoint to point to edugp/data2vec-nlp-base. Fix style and repo-consistency * Update config special tokens to match RoBERTa * Split multiple assertions and add individual error messages * Rename Data2VecModel to Data2VecForTextModel * Add Data2Vec to _toctree.yml * Rename Data2VecEmbeddings to Data2VecForTextEmbeddings * Add initial Data2VecForAudio model (unfinished). Only matching fairseq's implementation up to the feature encoder (before positional encoding). * finish audio model * finish audio file * Update names and fix style, quality and repo consistency * Remove Data2VecAudioForPretraining. Add tests for Data2VecAudio, mimicking the Wav2Vec2 test suite. Fix bias initilization in positional conv layers. Move back configurations for audio and text to separate files. 
* add inputs to logits to data2vec' * correct autio models * correct config auto * correct tok auto * Update utils/tests_fetcher.py * delete unnecessary files * delete unnecessary files * further renaming * make all tests pass * finish * remove useless test file * Update tests/test_modeling_common.py * Update utils/check_repo.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec_text.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Fix copies * Update docs * Remove fairseq data2vec_text script and fix format * Add comment on where to get data2vec_text.py * Remove mock implementation cheat.py and fix style * Fix copies * Remove TF and Flax classes from init * Add back copy from fairseq data2vec_text.py and fix style * Update model name in docs/source/index.mdx to be CamelCase * Revert model name in table to lower-case to get check_table test to pass * Update documentation * Update src/transformers/models/data2vec/__init__.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/convert_data2vec_original_pytorch_checkpoint_to_pytorch.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/auto/configuration_auto.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/configuration_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update tests/test_modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/configuration_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Copy-paste Data2VecConfig from BertConfig * Update config checkpoint to point to edugp/data2vec-nlp-base. Fix style and repo-consistency * Update config special tokens to match RoBERTa * Split multiple assertions and add individual error messages * Rename Data2VecModel to Data2VecForTextModel * Add Data2Vec to _toctree.yml * Rename Data2VecEmbeddings to Data2VecForTextEmbeddings * Add initial Data2VecForAudio model (unfinished). Only matching fairseq's implementation up to the feature encoder (before positional encoding). 
* finish audio model * finish audio file * add inputs to logits to data2vec' * Update names and fix style, quality and repo consistency * Remove Data2VecAudioForPretraining. Add tests for Data2VecAudio, mimicking the Wav2Vec2 test suite. Fix bias initilization in positional conv layers. Move back configurations for audio and text to separate files. * correct autio models * correct config auto * correct tok auto * delete unnecessary files * delete unnecessary files * Update utils/tests_fetcher.py * further renaming * make all tests pass * finish * remove useless test file * Update tests/test_modeling_common.py * Update utils/check_repo.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/data2vec/modeling_data2vec_text.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Move data2vec tests to new structure * Fix test imports for text tests * Remove fairseq files * Change paper link to arxiv * Modify Data2Vec documentation to reflect that the encoder is not shared across the audio and text models in the current implementation. * Update text model checkpoint to be facebook/data2vec-text-base * Add 'Copy from' statements and update paper links and docs * fix copy from statements * improve copied from * correct more copied from statements * finish copied from stuff * make style * add model to README * add to master Co-authored-by: Eduardo Gonzalez Ponferrada <eduardo@ferrumhealth.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
https://github.com/huggingface/transformers.git
def update_keys_to_ignore(self, config, del_keys_to_ignore):
    if not config.tie_word_embeddings:
        # must make a new list, or the class variable gets modified!
        self._keys_to_ignore_on_save = [k for k in self._keys_to_ignore_on_save if k not in del_keys_to_ignore]
        self._keys_to_ignore_on_load_missing = [
            k for k in self._keys_to_ignore_on_load_missing if k not in del_keys_to_ignore
        ]


DATA2VECTEXT_START_DOCSTRING = r

DATA2VECTEXT_INPUTS_DOCSTRING = r


@add_start_docstrings(
    "The bare Data2VecText Model for text transformer outputting raw hidden-states without any specific head on top.",
    DATA2VECTEXT_START_DOCSTRING,
)
@add_start_docstrings( "The bare Data2VecText Model for text transformer outputting raw hidden-states without any specific head on top.", DATA2VECTEXT_START_DOCSTRING, )
52
modeling_data2vec_text.py
Python
src/transformers/models/data2vec/modeling_data2vec_text.py
df5a4094a6e3f98f2cb2058cdb688fcc3f453220
transformers
6
244,125
42
14
15
223
19
0
61
169
_expand_onehot_labels
[Fix] Fix reduction=mean in CELoss. (#7449) * [Fix] Fix ignore in CELoss. * add ut * fix and add comments * add avg_non_ignore option * bce avg * fix lint
https://github.com/open-mmlab/mmdetection.git
def _expand_onehot_labels(labels, label_weights, label_channels, ignore_index):
    bin_labels = labels.new_full((labels.size(0), label_channels), 0)
    valid_mask = (labels >= 0) & (labels != ignore_index)
    inds = torch.nonzero(
        valid_mask & (labels < label_channels), as_tuple=False)

    if inds.numel() > 0:
        bin_labels[inds, labels[inds]] = 1

    valid_mask = valid_mask.view(-1, 1).expand(labels.size(0),
                                               label_channels).float()
    if label_weights is None:
        bin_label_weights = valid_mask
    else:
        bin_label_weights = label_weights.view(-1, 1).repeat(1, label_channels)
        bin_label_weights *= valid_mask

    return bin_labels, bin_label_weights, valid_mask
147
cross_entropy_loss.py
Python
mmdet/models/losses/cross_entropy_loss.py
3b2e9655631a2edd28bb94c640bd6a74c0bfad55
mmdetection
3
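A minimal usage sketch for the `_expand_onehot_labels` helper shown in the record above; the label values, channel count, and ignore index below are invented for illustration, and the function itself is assumed to be in scope.

import torch

# hypothetical batch: 4 samples, 3 classes, label 255 means "ignore"
labels = torch.tensor([0, 2, 255, 1])
bin_labels, bin_label_weights, valid_mask = _expand_onehot_labels(
    labels, label_weights=None, label_channels=3, ignore_index=255)
print(bin_labels)         # one-hot rows; the ignored sample stays all-zero
print(valid_mask)         # (4, 3) float mask, zero for the ignored row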
156,839
66
12
20
180
20
0
89
221
solve
Update `da.linalg.solve` for SciPy 1.9.0 compatibility (#9350)
https://github.com/dask/dask.git
def solve(a, b, sym_pos=None, assume_a="gen"):
    if sym_pos is not None:
        warnings.warn(
            "The sym_pos keyword is deprecated and should be replaced by using ``assume_a = 'pos'``."
            "``sym_pos`` will be removed in a future version.",
            category=FutureWarning,
        )
        if sym_pos:
            assume_a = "pos"

    if assume_a == "pos":
        l, u = _cholesky(a)
    elif assume_a == "gen":
        p, l, u = lu(a)
        b = p.T.dot(b)
    else:
        raise ValueError(
            f"{assume_a = } is not a recognized matrix structure, valid structures in Dask are 'pos' and 'gen'."
        )

    uy = solve_triangular(l, b, lower=True)
    return solve_triangular(u, uy)
105
linalg.py
Python
dask/array/linalg.py
386b4753f1aa5d96230c814391fde50758b8f533
dask
5
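An illustrative call of the `solve` wrapper above via `dask.array.linalg`; the matrix values are made up, and a Dask release that already accepts `assume_a` is assumed.

import numpy as np
import dask.array as da

# small symmetric positive definite system, invented for the example
a = da.from_array(np.array([[4.0, 1.0], [1.0, 3.0]]), chunks=2)
b = da.from_array(np.array([1.0, 2.0]), chunks=2)
x = da.linalg.solve(a, b, assume_a="pos")   # takes the Cholesky branch
print(x.compute())                          # approximately [0.0909, 0.6364]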
100,335
25
13
9
152
10
0
30
108
set_info_text
Update code to support Tensorflow versions up to 2.8 (#1213)

* Update maximum tf version in setup + requirements
* - bump max version of tf version in launcher
  - standardise tf version check
* update keras get_custom_objects for tf>2.6
* bugfix: force black text in GUI file dialogs (linux)
* dssim loss - Move to stock tf.ssim function
* Update optimizer imports for compatibility
* fix logging for tf2.8
* Fix GUI graphing for TF2.8
* update tests
* bump requirements.txt versions
* Remove limit on nvidia-ml-py
* Graphing bugfixes - Prevent live graph from displaying if data not yet available
* bugfix: Live graph. Collect loss labels correctly
* fix: live graph - swallow inconsistent loss errors
* Bugfix: Prevent live graph from clearing during training
* Fix graphing for AMD
https://github.com/deepfakes/faceswap.git
def set_info_text(self):
    if not self.vars["enabled"].get():
        msg = f"{self.tabname.title()} disabled"
    elif self.vars["enabled"].get() and not self.vars["ready"].get():
        msg = f"Waiting for {self.tabname}..."
    else:
        msg = f"Displaying {self.tabname}"
    logger.debug(msg)
    self.set_info(msg)


# DISPLAY OPTIONS BAR
69
display_page.py
Python
lib/gui/display_page.py
c1512fd41d86ef47a5d1ce618d6d755ef7cbacdf
faceswap
4
100,296
20
10
6
87
13
0
22
72
stream
alignments tool - Don't re-analyze video if metadata in alignments
https://github.com/deepfakes/faceswap.git
def stream(self, skip_list=None):
    loader = ImagesLoader(self.folder, queue_size=32, count=self._count)
    if skip_list is not None:
        loader.add_skip_list(skip_list)
    for filename, image in loader.load():
        yield filename, image
55
media.py
Python
tools/alignments/media.py
30872ef265c0fc29465f4c3a0778d0049f8c3897
faceswap
3
109,422
21
11
7
80
10
0
23
84
set_boxstyle
Harmonize docstrings for boxstyle/connectionstyle/arrowstyle.

- Rely on `__init_subclass__` to avoid the need for the out-of-order `interpd.update`/`dedent_interpd`.
- Use consistent wording for all setters, and add ACCEPTS list in all cases.
- Move get_boxstyle right next to set_boxstyle (consistently with the other setters/getters).
- Make the type check in the setters consistent in all cases (check for str, not for forcing inheritance from the private _Base).
- Support `set_connectionstyle()` as equivalent to `set_connectionstyle(None)`, consistently with the other two setters.
https://github.com/matplotlib/matplotlib.git
def set_boxstyle(self, boxstyle=None, **kwargs):
    if boxstyle is None:
        return BoxStyle.pprint_styles()
    self._bbox_transmuter = (
        BoxStyle(boxstyle, **kwargs) if isinstance(boxstyle, str)
        else boxstyle)
    self.stale = True
51
patches.py
Python
lib/matplotlib/patches.py
0dc472b4c7cdcc1e88228988fff17762c90f1cb9
matplotlib
3
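A short, hedged usage sketch of `set_boxstyle` from the record above; the patch geometry and `pad` value are arbitrary, and the no-argument call listing styles follows the behaviour shown in the function body.

from matplotlib.patches import FancyBboxPatch

patch = FancyBboxPatch((0.2, 0.2), 0.6, 0.4)
patch.set_boxstyle("round", pad=0.1)   # style name plus keyword attributes
print(patch.set_boxstyle())            # no argument: returns the table of available box styles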
176,967
55
15
14
138
15
1
92
227
global_efficiency
added examples to efficiency_measures.py (#5643)

* added example on efficiency
* added example on global_efficiency
* added example on local_efficiency
* adjused round up
https://github.com/networkx/networkx.git
def global_efficiency(G):
    n = len(G)
    denom = n * (n - 1)
    if denom != 0:
        lengths = nx.all_pairs_shortest_path_length(G)
        g_eff = 0
        for source, targets in lengths:
            for target, distance in targets.items():
                if distance > 0:
                    g_eff += 1 / distance
        g_eff /= denom
        # g_eff = sum(1 / d for s, tgts in lengths
        #             for t, d in tgts.items() if d > 0) / denom
    else:
        g_eff = 0
    # TODO This can be made more efficient by computing all pairs shortest
    # path lengths in parallel.
    return g_eff


@not_implemented_for("directed")
@not_implemented_for("directed")
76
efficiency_measures.py
Python
networkx/algorithms/efficiency_measures.py
435b4622d106d14a3627e162ee163b113bac9854
networkx
5
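A quick check of `global_efficiency` on a toy graph chosen purely for illustration: the only non-adjacent pair is (2, 3) at distance 2, so the average inverse distance over the 12 ordered pairs is 11/12 ≈ 0.9167.

import networkx as nx

G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
print(round(nx.global_efficiency(G), 4))   # 0.9167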
276,721
35
10
11
121
6
0
54
103
conv_input_length
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def conv_input_length(output_length, filter_size, padding, stride):
    if output_length is None:
        return None
    assert padding in {"same", "valid", "full"}
    if padding == "same":
        pad = filter_size // 2
    elif padding == "valid":
        pad = 0
    elif padding == "full":
        pad = filter_size - 1
    return (output_length - 1) * stride - 2 * pad + filter_size
70
conv_utils.py
Python
keras/utils/conv_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
5
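A worked example of the `conv_input_length` arithmetic above; the numbers are arbitrary, and the direct module import is an assumption about where the helper lives.

from keras.utils.conv_utils import conv_input_length  # assumed import path

# "same" padding with filter_size=3 gives pad=1, so (5 - 1) * 2 - 2 * 1 + 3 = 9
print(conv_input_length(5, 3, "same", 2))    # 9
print(conv_input_length(5, 3, "valid", 2))   # pad=0 -> (5 - 1) * 2 + 3 = 11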
38,141
54
13
20
155
13
0
68
184
get_checkpoint_callback
Black preview (#17217)

* Black preview
* Fixup too!
* Fix check copies
* Use the same version as the CI
* Bump black
https://github.com/huggingface/transformers.git
def get_checkpoint_callback(output_dir, metric, save_top_k=1, lower_is_better=False):
    if metric == "rouge2":
        exp = "{val_avg_rouge2:.4f}-{step_count}"
    elif metric == "bleu":
        exp = "{val_avg_bleu:.4f}-{step_count}"
    elif metric == "loss":
        exp = "{val_avg_loss:.4f}-{step_count}"
    else:
        raise NotImplementedError(
            f"seq2seq callbacks only support rouge2, bleu and loss, got {metric}, You can make your own by adding to"
            " this function."
        )

    checkpoint_callback = ModelCheckpoint(
        dirpath=output_dir,
        filename=exp,
        monitor=f"val_{metric}",
        mode="min" if "loss" in metric else "max",
        save_top_k=save_top_k,
    )
    return checkpoint_callback
83
callbacks.py
Python
examples/research_projects/seq2seq-distillation/callbacks.py
afe5d42d8d1d80af911ed980c2936bfe887078f6
transformers
5
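A hedged sketch of how `get_checkpoint_callback` might be wired into a PyTorch Lightning trainer; the output directory and trainer settings are placeholders rather than part of the original script, and the function is assumed to be in scope.

import pytorch_lightning as pl

# monitors "val_rouge2" in "max" mode and keeps the best three checkpoints
ckpt_cb = get_checkpoint_callback("output/", metric="rouge2", save_top_k=3)
trainer = pl.Trainer(callbacks=[ckpt_cb], max_epochs=1)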
20,529
6
8
2
33
4
0
6
12
remove_quotes
check point progress on only bringing in pip==22.0.4 (#4966)

* vendor in pip==22.0.4
* updating vendor packaging version
* update pipdeptree to fix pipenv graph with new version of pip.
* Vendoring of pip-shims 0.7.0
* Vendoring of requirementslib 1.6.3
* Update pip index safety restrictions patch for pip==22.0.4
* Update patches
* exclude pyptoject.toml from black to see if that helps.
* Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def remove_quotes(s, l, t):
    return t[0][1:-1]
21
actions.py
Python
pipenv/patched/notpip/_vendor/pyparsing/actions.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
1
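A small illustration of `remove_quotes` used as a pyparsing parse action; the grammar is invented for the example and pyparsing 3 method names are assumed.

from pyparsing import Regex

quoted = Regex(r'"[^"]*"').add_parse_action(remove_quotes)
print(quoted.parse_string('"hello"'))   # ['hello']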
292,450
7
8
3
28
3
0
7
21
async_will_remove_from_hass
Add dlna_dms integration to support DLNA Digital Media Servers (#66437)
https://github.com/home-assistant/core.git
async def async_will_remove_from_hass(self) -> None:
    await self.device_disconnect()
14
dms.py
Python
homeassistant/components/dlna_dms/dms.py
b19bf9b147f4321e89d1f7f01e68337f2102f460
core
1
181,757
8
8
3
37
6
0
8
17
test_read_config_file_2
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_read_config_file_2():
    tpot_obj = TPOTRegressor()
    assert_raises(ValueError, tpot_obj._read_config_file, "tests/test_config.py.bad")
20
tpot_tests.py
Python
tests/tpot_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
1
288,105
8
9
3
32
5
0
8
22
available
Add light platform for switchbee integration (#78382)

* Added Light platform for switchbee integration
* added light to .coveragerc
* Applied code review feedback from other PR
* Fixes based on previous reviewes
* fixed small mistake
* added test coverage for light
* aligned code with other PR
* rebased
* fixes
* removed unecessary changes
* Update homeassistant/components/switchbee/light.py
  Co-authored-by: epenet <6771947+epenet@users.noreply.github.com>
* Update homeassistant/components/switchbee/light.py
  Co-authored-by: epenet <6771947+epenet@users.noreply.github.com>
* more fixes
* more fixes
* adjusting switch with the new change
* more fixes
* Update homeassistant/components/switchbee/light.py
  Co-authored-by: epenet <6771947+epenet@users.noreply.github.com>
* Update homeassistant/components/switchbee/light.py
  Co-authored-by: epenet <6771947+epenet@users.noreply.github.com>
* Update homeassistant/components/switchbee/light.py
  Co-authored-by: epenet <6771947+epenet@users.noreply.github.com>

Co-authored-by: epenet <6771947+epenet@users.noreply.github.com>
https://github.com/home-assistant/core.git
def available(self) -> bool:
    return self._is_online and super().available
18
entity.py
Python
homeassistant/components/switchbee/entity.py
de3a1f444ce6f2ab4ae4094a86defeb4290d6f3f
core
2