| column | type | min | max |
|---|---|---|---|
| title | stringlengths | 2 | 169 |
| diff | stringlengths | 235 | 19.5k |
| body | stringlengths | 0 | 30.5k |
| url | stringlengths | 48 | 84 |
| created_at | stringlengths | 20 | 20 |
| closed_at | stringlengths | 20 | 20 |
| merged_at | stringlengths | 20 | 20 |
| updated_at | stringlengths | 20 | 20 |
| diff_len | float64 | 101 | 3.99k |
| repo_name | stringclasses | 83 values | |
| __index_level_0__ | int64 | 15 | 52.7k |
zh-TW: Improve translations
diff --git a/README-zh-TW.md b/README-zh-TW.md index b608ec3277..8d286a28f1 100644 --- a/README-zh-TW.md +++ b/README-zh-TW.md @@ -623,7 +623,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢 其餘額外的好處有: -* **SSL 終結** - 將傳入的請求解密,並且加密伺服器的回應,如此一來後端伺服器就不需要進行這些高度消耗資源的願算 +* **SSL 終結** - 將傳入的請求解密,並且加密伺服器的回應,如此一來後端伺服器就不需要進行這些高度消耗資源的運算 * 不需要在每一台機器上安裝 [X.509 憑證](https://en.wikipedia.org/wiki/X.509)。 * **Session 保存** - 發行 cookie,並將特定使用者的請求路由到同樣的後端伺服器上。 @@ -918,7 +918,7 @@ SQL 優化是一個涵蓋範圍很廣的主題,有許多相關的 [參考書 * 當你使用 (`SELECT`, `GROUP BY`, `ORDER BY`, `JOIN`) 這些操作的對應欄位如果有使用索引就會查詢更快。 * 索引通常是使用平衡 [B 樹](https://en.wikipedia.org/wiki/B-tree) 表示,這樣可以保證資料是有序的,並允許在對數時間內進行搜尋、循序訪問以及插入、刪除等操作。 * 設定索引時,會將資料放置於記憶體中,會佔用更多記憶體空間。 -* 寫入操作會變慢,因為所隱諱需要更新。 +* 寫入操作會變慢,因為索引會需要更新。 * 當讀取大量資料時,禁用索引再讀取,之後再重新建立索引,這樣也許會更快。 ##### 避免高成本的 Join 操作 @@ -1065,7 +1065,7 @@ Google 發表了第一個列儲存型資料庫 [Bigtable](http://www.read.seas.h * 非關連式資料 * 不需要複雜的 joins * 儲存 TB (或 PB) 等級的資料 -* 高資料密集亮的工作負載 +* 高資料密集量的工作負載 * IOPS 的高吞吐量 適合使用 NoSQL 的範例: @@ -1121,7 +1121,7 @@ Redis 還有以下額外的功能: 你可以快取的級別有好幾種,大致上分為兩類:**資料庫查詢** 和 **物件**: * 記錄級別 -* 查詢及別 +* 查詢級別 * 完整的可序列化物件 * 完整的 HTML
https://api.github.com/repos/donnemartin/system-design-primer/pulls/176
2018-07-16T08:08:25Z
2018-07-20T04:46:36Z
2018-07-20T04:46:36Z
2018-07-20T04:46:46Z
732
donnemartin/system-design-primer
36,739
ref(js): Use function component for ShortId
diff --git a/static/app/components/shortId.tsx b/static/app/components/shortId.tsx index fcff9c1613b75..08b155655b682 100644 --- a/static/app/components/shortId.tsx +++ b/static/app/components/shortId.tsx @@ -10,22 +10,18 @@ type Props = { onClick?: (e: React.MouseEvent<HTMLDivElement>) => void; }; -export default class ShortId extends React.Component<Props> { - render() { - const {shortId, avatar} = this.props; - - if (!shortId) { - return null; - } - - return ( - <StyledShortId {...this.props}> - {avatar} - <StyledAutoSelectText avatar={!!avatar}>{shortId}</StyledAutoSelectText> - </StyledShortId> - ); +const ShortId = ({shortId, avatar, ...props}: Props) => { + if (!shortId) { + return null; } -} + + return ( + <StyledShortId {...props}> + {avatar} + <StyledAutoSelectText avatar={!!avatar}>{shortId}</StyledAutoSelectText> + </StyledShortId> + ); +}; const StyledShortId = styled('div')` font-family: ${p => p.theme.text.familyMono}; @@ -40,3 +36,5 @@ const StyledAutoSelectText = styled(AutoSelectText, {shouldForwardProp: isPropVa margin-left: ${p => p.avatar && '0.5em'}; min-width: 0; `; + +export default ShortId;
https://api.github.com/repos/getsentry/sentry/pulls/28215
2021-08-26T21:06:34Z
2021-08-26T22:20:29Z
2021-08-26T22:20:29Z
2021-09-11T00:01:20Z
363
getsentry/sentry
44,137
Resolves the parts of #1096 in requests proper.
diff --git a/requests/models.py b/requests/models.py index 5202e6f4ba..b7d52cd74a 100644 --- a/requests/models.py +++ b/requests/models.py @@ -375,13 +375,7 @@ def prepare_body(self, data, files): else: content_type = 'application/x-www-form-urlencoded' - self.headers['Content-Length'] = '0' - if hasattr(body, 'seek') and hasattr(body, 'tell'): - body.seek(0, 2) - self.headers['Content-Length'] = str(body.tell()) - body.seek(0, 0) - elif body is not None: - self.headers['Content-Length'] = str(len(body)) + self.prepare_content_length(body) # Add content-type if it wasn't explicitly provided. if (content_type) and (not 'content-type' in self.headers): @@ -389,6 +383,15 @@ def prepare_body(self, data, files): self.body = body + def prepare_content_length(self, body): + self.headers['Content-Length'] = '0' + if hasattr(body, 'seek') and hasattr(body, 'tell'): + body.seek(0, 2) + self.headers['Content-Length'] = str(body.tell()) + body.seek(0, 0) + elif body is not None: + self.headers['Content-Length'] = str(len(body)) + def prepare_auth(self, auth): """Prepares the given HTTP auth data.""" if auth: @@ -402,6 +405,9 @@ def prepare_auth(self, auth): # Update self to reflect the auth changes. self.__dict__.update(r.__dict__) + # Recompute Content-Length + self.prepare_content_length(self.body) + def prepare_cookies(self, cookies): """Prepares the given HTTP cookie data."""
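For illustration, here is a standalone sketch of the logic this diff factors out into `prepare_content_length` so it can be re-run after auth handlers mutate the body. A plain dict stands in for the prepared request's header mapping, so this is a sketch of the idea rather than requests' actual API:

```python
import io


def prepare_content_length(headers, body):
    # Mirrors the factored-out method in the diff: default to 0, measure
    # file-like bodies by seeking to the end, otherwise use len(body).
    headers['Content-Length'] = '0'
    if hasattr(body, 'seek') and hasattr(body, 'tell'):
        body.seek(0, 2)                                # seek to the end to measure
        headers['Content-Length'] = str(body.tell())
        body.seek(0, 0)                                # rewind for the actual send
    elif body is not None:
        headers['Content-Length'] = str(len(body))


headers = {}
prepare_content_length(headers, io.BytesIO(b'hello'))  # file-like body
assert headers['Content-Length'] == '5'
prepare_content_length(headers, b'hi there')           # bytes body
assert headers['Content-Length'] == '8'
```

Factoring this into its own method is what lets `prepare_auth` recompute `Content-Length` after an auth hook has modified the body.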
https://api.github.com/repos/psf/requests/pulls/1097
2013-01-11T20:06:37Z
2013-01-22T13:11:14Z
2013-01-22T13:11:14Z
2021-09-08T23:07:32Z
423
psf/requests
32,355
PERF: performance improvements in multi-key groupby
diff --git a/doc/source/whatsnew/v0.16.0.txt b/doc/source/whatsnew/v0.16.0.txt index e8b398aec4b74..0234a0dab8e28 100644 --- a/doc/source/whatsnew/v0.16.0.txt +++ b/doc/source/whatsnew/v0.16.0.txt @@ -174,6 +174,7 @@ Performance - Performance improvement of up to 10x in ``DataFrame.count`` and ``DataFrame.dropna`` by taking advantage of homogeneous/heterogeneous dtypes appropriately (:issue:`9136`) - Performance improvement of up to 20x in ``DataFrame.count`` when using a ``MultiIndex`` and the ``level`` keyword argument (:issue:`9163`) - Performance and memory usage improvements in ``merge`` when key space exceeds ``int64`` bounds (:issue:`9151`) +- Performance improvements in multi-key ``groupby`` (:issue:`9429`) Bug Fixes ~~~~~~~~~ diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py index 28a1656832d56..0a12484f9ab3a 100644 --- a/pandas/core/groupby.py +++ b/pandas/core/groupby.py @@ -1217,11 +1217,9 @@ class BaseGrouper(object): """ def __init__(self, axis, groupings, sort=True, group_keys=True): - self.axis = axis - self.groupings = groupings - self.sort = sort - self.group_keys = group_keys - self.compressed = True + self._filter_empty_groups = self.compressed = len(groupings) != 1 + self.axis, self.groupings, self.sort, self.group_keys = \ + axis, groupings, sort, group_keys @property def shape(self): @@ -1373,31 +1371,34 @@ def _get_compressed_labels(self): return _compress_group_index(group_index) ping = self.groupings[0] - self.compressed = False - self._filter_empty_groups = False - return ping.labels, np.arange(len(ping.group_index)) @cache_readonly def ngroups(self): return len(self.result_index) + @property + def recons_labels(self): + comp_ids, obs_ids, _ = self.group_info + labels = (ping.labels for ping in self.groupings) + return decons_obs_group_ids(comp_ids, obs_ids, self.shape, labels) + @cache_readonly def result_index(self): - recons = self.get_group_levels() - return MultiIndex.from_arrays(recons, names=self.names) + if not self.compressed and len(self.groupings) == 1: + return self.groupings[0].group_index.rename(self.names[0]) - def get_group_levels(self): - comp_ids, obs_ids, _ = self.group_info + return MultiIndex(levels=[ping.group_index for ping in self.groupings], + labels=self.recons_labels, + verify_integrity=False, + names=self.names) + def get_group_levels(self): if not self.compressed and len(self.groupings) == 1: return [self.groupings[0].group_index] - recons_labels = decons_obs_group_ids(comp_ids, obs_ids, - self.shape, (ping.labels for ping in self.groupings)) - name_list = [] - for ping, labels in zip(self.groupings, recons_labels): + for ping, labels in zip(self.groupings, self.recons_labels): labels = com._ensure_platform_int(labels) levels = ping.group_index.take(labels) @@ -1432,8 +1433,6 @@ def get_group_levels(self): _name_functions = {} - _filter_empty_groups = True - def _get_aggregate_function(self, how, values): dtype_str = values.dtype.name @@ -1797,8 +1796,6 @@ def size(self): 'ohlc': lambda *args: ['open', 'high', 'low', 'close'] } - _filter_empty_groups = True - def _aggregate(self, result, counts, values, how, is_numeric=True): agg_func, dtype = self._get_aggregate_function(how, values) diff --git a/vb_suite/groupby.py b/vb_suite/groupby.py index fd18c81a7d00d..eb690df4870e8 100644 --- a/vb_suite/groupby.py +++ b/vb_suite/groupby.py @@ -501,6 +501,25 @@ def f(g): groupby_int64_overflow = Benchmark("df.groupby(list('abcde')).max()", setup, name='groupby_int64_overflow') + +setup = common_setup + ''' +from itertools import 
product +from string import ascii_letters, digits + +n = 5 * 7 * 11 * (1 << 9) +alpha = list(map(''.join, product(ascii_letters + digits, repeat=4))) +f = lambda k: np.repeat(np.random.choice(alpha, n // k), k) + +df = DataFrame({'a': f(11), 'b': f(7), 'c': f(5), 'd': f(1)}) +df['joe'] = (np.random.randn(len(df)) * 10).round(3) + +i = np.random.permutation(len(df)) +df = df.iloc[i].reset_index(drop=True).copy() +''' + +groupby_multi_index = Benchmark("df.groupby(list('abcd')).max()", setup, + name='groupby_multi_index') + #---------------------------------------------------------------------- # groupby with a variable value for ngroups
``` ------------------------------------------------------------------------------- Test name | head[ms] | base[ms] | ratio | ------------------------------------------------------------------------------- groupby_multi_index | 1008.6970 | 1861.1110 | 0.5420 | ------------------------------------------------------------------------------- Test name | head[ms] | base[ms] | ratio | ------------------------------------------------------------------------------- Ratio < 1.0 means the target commit is faster then the baseline. Seed used: 1234 Target [907e1b7] : performance improvements in multi-key groupby Base [5dc8009] : Merge pull request #9418 from sinhrks/rn_20150205 DOC: Fix release note format ```
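For context, a trimmed, self-contained version of the `groupby_multi_index` benchmark workload added to `vb_suite/groupby.py` in this diff; the key alphabet is reduced from `repeat=4` to `repeat=3` here to keep the sketch light, everything else follows the diff:

```python
from itertools import product
from string import ascii_letters, digits

import numpy as np
import pandas as pd

n = 5 * 7 * 11 * (1 << 9)
# Pool of random string keys (repeat=4 in the original benchmark).
alpha = list(map(''.join, product(ascii_letters + digits, repeat=3)))
f = lambda k: np.repeat(np.random.choice(alpha, n // k), k)

df = pd.DataFrame({'a': f(11), 'b': f(7), 'c': f(5), 'd': f(1)})
df['joe'] = (np.random.randn(len(df)) * 10).round(3)

# Shuffle so the keys are not pre-sorted, then run the multi-key groupby
# that the posted numbers show running roughly 1.8x faster on this branch.
df = df.iloc[np.random.permutation(len(df))].reset_index(drop=True)
result = df.groupby(list('abcd')).max()
```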
https://api.github.com/repos/pandas-dev/pandas/pulls/9429
2015-02-06T00:38:58Z
2015-02-07T12:44:31Z
2015-02-07T12:44:31Z
2015-02-07T13:05:53Z
1,278
pandas-dev/pandas
45,392
core: Convert SimSIMD back to NumPy
diff --git a/libs/community/langchain_community/utils/math.py b/libs/community/langchain_community/utils/math.py index 99d473681973c4..2522c1255c6948 100644 --- a/libs/community/langchain_community/utils/math.py +++ b/libs/community/langchain_community/utils/math.py @@ -29,7 +29,7 @@ def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray: Z = 1 - simd.cdist(X, Y, metric="cosine") if isinstance(Z, float): return np.array([Z]) - return Z + return np.array(Z) except ImportError: logger.info( "Unable to import simsimd, defaulting to NumPy implementation. If you want " diff --git a/libs/partners/elasticsearch/langchain_elasticsearch/_utilities.py b/libs/partners/elasticsearch/langchain_elasticsearch/_utilities.py index 0280708736f512..33b302241eaab5 100644 --- a/libs/partners/elasticsearch/langchain_elasticsearch/_utilities.py +++ b/libs/partners/elasticsearch/langchain_elasticsearch/_utilities.py @@ -79,7 +79,7 @@ def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray: Z = 1 - simd.cdist(X, Y, metric="cosine") if isinstance(Z, float): return np.array([Z]) - return Z + return np.array(Z) except ImportError: X_norm = np.linalg.norm(X, axis=1) Y_norm = np.linalg.norm(Y, axis=1) diff --git a/libs/partners/mongodb/langchain_mongodb/utils.py b/libs/partners/mongodb/langchain_mongodb/utils.py index feb34ad1c23d61..854b2bc939a998 100644 --- a/libs/partners/mongodb/langchain_mongodb/utils.py +++ b/libs/partners/mongodb/langchain_mongodb/utils.py @@ -38,7 +38,7 @@ def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray: Z = 1 - simd.cdist(X, Y, metric="cosine") if isinstance(Z, float): return np.array([Z]) - return Z + return np.array(Z) except ImportError: logger.info( "Unable to import simsimd, defaulting to NumPy implementation. If you want " diff --git a/libs/partners/pinecone/langchain_pinecone/_utilities.py b/libs/partners/pinecone/langchain_pinecone/_utilities.py index 37d61e9fb111c4..5ad9e407fcd511 100644 --- a/libs/partners/pinecone/langchain_pinecone/_utilities.py +++ b/libs/partners/pinecone/langchain_pinecone/_utilities.py @@ -69,7 +69,7 @@ def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray: Z = 1 - simd.cdist(X, Y, metric="cosine") if isinstance(Z, float): return np.array([Z]) - return Z + return np.array(Z) except ImportError: X_norm = np.linalg.norm(X, axis=1) Y_norm = np.linalg.norm(Y, axis=1)
This patch fixes issue #18022 by converting SimSIMD's internal zero-copy outputs to NumPy. I've also noticed that a `dtype=np.float32` conversion is often applied before passing data to SimSIMD. Which numeric types do LangChain users generally care about? We support `float64`, `float32`, `float16`, and `int8` for cosine distances, and `float16` seems reasonable for practically any kind of embeddings and any modern piece of hardware, so we can change that part as well 🤗
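For reference, a condensed sketch of the helper these four files share, combining the SimSIMD path with the plain NumPy fallback shown in the diff (logging and the zero-norm handling of the real modules are omitted):

```python
import numpy as np


def cosine_similarity(X, Y) -> np.ndarray:
    X, Y = np.asarray(X), np.asarray(Y)
    try:
        import simsimd as simd

        Z = 1 - simd.cdist(X, Y, metric="cosine")
        if isinstance(Z, float):
            return np.array([Z])
        # The fix: wrap SimSIMD's zero-copy output so callers always get an ndarray.
        return np.array(Z)
    except ImportError:
        # Plain NumPy fallback, as in the pinecone/elasticsearch utilities.
        X_norm = np.linalg.norm(X, axis=1)
        Y_norm = np.linalg.norm(Y, axis=1)
        return (X @ Y.T) / np.outer(X_norm, Y_norm)
```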
https://api.github.com/repos/langchain-ai/langchain/pulls/19473
2024-03-23T23:43:16Z
2024-03-25T23:36:26Z
2024-03-25T23:36:26Z
2024-03-25T23:36:27Z
707
langchain-ai/langchain
43,356
Add support for 3.10
diff --git a/.github/workflows/run-tests.yml b/.github/workflows/run-tests.yml index cf5f0b4b91..4fb5eef968 100644 --- a/.github/workflows/run-tests.yml +++ b/.github/workflows/run-tests.yml @@ -9,7 +9,7 @@ jobs: strategy: fail-fast: false matrix: - python-version: [2.7, 3.6, 3.7, 3.8, 3.9] + python-version: [2.7, 3.6, 3.7, 3.8, 3.9, 3.10-dev] os: [ubuntu-18.04, macOS-latest, windows-latest] include: # pypy3 on Mac OS currently fails trying to compile diff --git a/requirements-dev.txt b/requirements-dev.txt index effb0c81f5..bc4c3e4b14 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -1,4 +1,4 @@ -pytest>=2.8.0,<=3.10.1 +pytest>=2.8.0,<=6.2.5 pytest-cov pytest-httpbin==1.0.0 pytest-mock==2.0.0 diff --git a/setup.py b/setup.py index ce5e5c8009..008565a6d3 100755 --- a/setup.py +++ b/setup.py @@ -95,6 +95,7 @@ def run_tests(self): 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', + 'Programming Language :: Python :: 3.10', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: PyPy' ],
This will make sure we don't introduce any regressions into the code base prior to the official 3.10 release. At that point, we'll promote this to the normal test runner on all platforms.
https://api.github.com/repos/psf/requests/pulls/5928
2021-09-03T16:41:45Z
2021-09-04T02:03:16Z
2021-09-04T02:03:16Z
2021-12-03T02:11:50Z
431
psf/requests
32,310
[extractor/rtbf] Fix jwt extraction
diff --git a/yt_dlp/extractor/redbee.py b/yt_dlp/extractor/redbee.py index 89a10448e13..ee510eb40f6 100644 --- a/yt_dlp/extractor/redbee.py +++ b/yt_dlp/extractor/redbee.py @@ -11,6 +11,7 @@ int_or_none, strip_or_none, traverse_obj, + try_call, unified_timestamp, ) @@ -255,7 +256,7 @@ def _get_formats_and_subtitles(self, url, media_id): if not login_token: self.raise_login_required() - session_jwt = self._download_json( + session_jwt = try_call(lambda: self._get_cookies(url)['rtbf_jwt'].value) or self._download_json( 'https://login.rtbf.be/accounts.getJWT', media_id, query={ 'login_token': login_token.value, 'APIKey': self._GIGYA_API_KEY,
### Description of your *pull request* and other information

Sometimes RTBF returns our jwt session token as a cookie and doesn't allow us to make more requests to `/accounts.getJWT`. This fixes an issue where we could only use username and password, but not cookies (see the sketch below).

Fixes #4683

<details open><summary>Template</summary>

### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)

### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)

### What is the purpose of your *pull request*?
- [x] Fix or improvement to an extractor (Make sure to add/update tests)
- [ ] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy))
- [ ] Core bug fix/improvement
- [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))

</details>
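Illustrative sketch of the fallback order introduced by the diff; the helper and argument names here are hypothetical stand-ins, while the real change lives in `yt_dlp/extractor/redbee.py` and uses `try_call`, `_get_cookies` and `_download_json`:

```python
# Hypothetical stand-in showing the cookie-first fallback from the diff:
# use the jwt RTBF already set as a cookie, otherwise call /accounts.getJWT.
def get_session_jwt(cookies, fetch_jwt_from_api):
    cookie = cookies.get('rtbf_jwt')     # sometimes RTBF returns the jwt as a cookie
    if cookie is not None:
        return cookie
    return fetch_jwt_from_api()          # otherwise request accounts.getJWT as before


# Usage with a stubbed API call:
assert get_session_jwt({'rtbf_jwt': 'abc123'}, lambda: 'from-api') == 'abc123'
assert get_session_jwt({}, lambda: 'from-api') == 'from-api'
```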
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/4738
2022-08-22T18:13:59Z
2022-08-22T18:45:46Z
2022-08-22T18:45:46Z
2022-08-22T18:45:46Z
220
yt-dlp/yt-dlp
7,944
bpo-35907: Fix typo in the NEWS entry
diff --git a/Misc/NEWS.d/next/Security/2019-05-21-23-20-18.bpo-35907.NC_zNK.rst b/Misc/NEWS.d/next/Security/2019-05-21-23-20-18.bpo-35907.NC_zNK.rst index 9628c8797572eb..37b567a5b6f93b 100644 --- a/Misc/NEWS.d/next/Security/2019-05-21-23-20-18.bpo-35907.NC_zNK.rst +++ b/Misc/NEWS.d/next/Security/2019-05-21-23-20-18.bpo-35907.NC_zNK.rst @@ -1,3 +1,3 @@ CVE-2019-9948: Avoid file reading by disallowing ``local-file://`` and -``local_file://`` URL schemes in ``URLopener().open()`` +``local_file://`` URL schemes in ``URLopener().open()`` and ``URLopener().retrieve()`` of :mod:`urllib.request`.
https://bugs.python.org/issue35907
https://api.github.com/repos/python/cpython/pulls/13559
2019-05-24T20:31:35Z
2019-05-24T21:06:26Z
2019-05-24T21:06:26Z
2019-05-24T21:06:36Z
253
python/cpython
4,434
add u14m results of cppd
diff --git a/doc/doc_ch/algorithm_rec_cppd.md b/doc/doc_ch/algorithm_rec_cppd.md index 1d48ed3059..4fde62e549 100644 --- a/doc/doc_ch/algorithm_rec_cppd.md +++ b/doc/doc_ch/algorithm_rec_cppd.md @@ -45,7 +45,17 @@ CPPD在场景文本识别公开数据集上的精度(%)和模型文件如下: | CPPD Base | 65.5 | 18.6 | 56.0 | 61.9 | 71.0 | 57.5 | 65.8 | 56.63 | 同上表 | | CPPD Base 48*160 | 71.9 | 22.1 | 60.5 | 67.9 | 78.3 | 63.9 | 67.1 | 61.69 | 同上表 | -* Union14M-L 训练集训练,英文测试结果。 +* Union14M-L 训练集From scratch训练,英文测试结果。 + +| 模型 |IC13<br/>857 | SVT |IIIT5k<br/>3000 |IC15<br/>1811| SVTP |CUTE80 | Avg | 下载链接 | +|:----------:|:------:|:-----:|:---------:|:------:|:-----:|:-----:|:-----:|:-------:| +| CPPD Base 32*128 | 98.5 | 97.7 | 99.2 | 90.3 | 94.6 | 98.3 | 96.42 | Coming soon | + +| 模型 |Curve | Multi-<br/>Oriented |Artistic |Contextless| Salient | Multi-<br/>word | General | Avg | 下载链接 | +|:----------:|:------:|:-----:|:---------:|:------:|:-----:|:-----:|:-----:|:-------:|:-------:| +| CPPD Base 32*128 | 83.0 | 71.2 | 75.1 | 80.9 | 79.4 | 82.6 | 83.7 | 79.41 | Coming soon | + +* 加载合成数据集预训练模型,Union14M-L 训练集微调训练,英文测试结果。 | 模型 |IC13<br/>857 | SVT |IIIT5k<br/>3000 |IC15<br/>1811| SVTP |CUTE80 | Avg | 下载链接 | |:----------:|:------:|:-----:|:---------:|:------:|:-----:|:-----:|:-----:|:-------:|
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/11943
2024-04-16T08:27:39Z
2024-04-17T02:44:58Z
2024-04-17T02:44:58Z
2024-04-17T02:44:58Z
627
PaddlePaddle/PaddleOCR
42,027
Fixed #32195 -- Added system check for invalid view in path() and improved error messages.
diff --git a/django/urls/conf.py b/django/urls/conf.py index 119e95df411dd..b3937d55122ea 100644 --- a/django/urls/conf.py +++ b/django/urls/conf.py @@ -55,6 +55,8 @@ def include(arg, namespace=None): def _path(route, view, kwargs=None, name=None, Pattern=None): + from django.views import View + if isinstance(view, (list, tuple)): # For include(...) processing. pattern = Pattern(route, is_endpoint=False) @@ -69,6 +71,12 @@ def _path(route, view, kwargs=None, name=None, Pattern=None): elif callable(view): pattern = Pattern(route, name=name, is_endpoint=True) return URLPattern(pattern, view, kwargs, name) + elif isinstance(view, View): + view_cls_name = view.__class__.__name__ + raise TypeError( + f'view must be a callable, pass {view_cls_name}.as_view(), not ' + f'{view_cls_name}().' + ) else: raise TypeError('view must be a callable or a list/tuple in the case of include().') diff --git a/django/urls/resolvers.py b/django/urls/resolvers.py index fac77fc4bc402..674fd0c58ed9e 100644 --- a/django/urls/resolvers.py +++ b/django/urls/resolvers.py @@ -345,6 +345,7 @@ def __repr__(self): def check(self): warnings = self._check_pattern_name() warnings.extend(self.pattern.check()) + warnings.extend(self._check_callback()) return warnings def _check_pattern_name(self): @@ -361,6 +362,22 @@ def _check_pattern_name(self): else: return [] + def _check_callback(self): + from django.views import View + + view = self.callback + if inspect.isclass(view) and issubclass(view, View): + return [Error( + 'Your URL pattern %s has an invalid view, pass %s.as_view() ' + 'instead of %s.' % ( + self.pattern.describe(), + view.__name__, + view.__name__, + ), + id='urls.E009', + )] + return [] + def resolve(self, path): match = self.pattern.match(path) if match: diff --git a/docs/ref/checks.txt b/docs/ref/checks.txt index f304da7e1168c..3147114f8dfab 100644 --- a/docs/ref/checks.txt +++ b/docs/ref/checks.txt @@ -584,6 +584,8 @@ The following checks are performed on your URL configuration: take the correct number of arguments (…). * **urls.E008**: The custom ``handlerXXX`` view ``'path.to.view'`` could not be imported. +* **urls.E009**: Your URL pattern ``<pattern>`` has an invalid view, pass + ``<view>.as_view()`` instead of ``<view>``. 
``contrib`` app checks ====================== diff --git a/tests/check_framework/test_urls.py b/tests/check_framework/test_urls.py index 2ef3d5b07fced..663c7a299ca48 100644 --- a/tests/check_framework/test_urls.py +++ b/tests/check_framework/test_urls.py @@ -134,6 +134,16 @@ def test_check_unique_namespaces(self): result = check_url_namespaces_unique(None) self.assertEqual(result, []) + @override_settings(ROOT_URLCONF='check_framework.urls.cbv_as_view') + def test_check_view_not_class(self): + self.assertEqual(check_url_config(None), [ + Error( + "Your URL pattern 'missing_as_view' has an invalid view, pass " + "EmptyCBV.as_view() instead of EmptyCBV.", + id='urls.E009', + ), + ]) + class UpdatedToPathTests(SimpleTestCase): diff --git a/tests/check_framework/urls/cbv_as_view.py b/tests/check_framework/urls/cbv_as_view.py new file mode 100644 index 0000000000000..932a2bfcc97d6 --- /dev/null +++ b/tests/check_framework/urls/cbv_as_view.py @@ -0,0 +1,19 @@ +from django.http import HttpResponse +from django.urls import path +from django.views import View + + +class EmptyCBV(View): + pass + + +class EmptyCallableView: + def __call__(self, request, *args, **kwargs): + return HttpResponse() + + +urlpatterns = [ + path('missing_as_view', EmptyCBV), + path('has_as_view', EmptyCBV.as_view()), + path('callable_class', EmptyCallableView()), +] diff --git a/tests/urlpatterns/tests.py b/tests/urlpatterns/tests.py index 1cd3523ca5055..dca9f630866ab 100644 --- a/tests/urlpatterns/tests.py +++ b/tests/urlpatterns/tests.py @@ -5,6 +5,7 @@ from django.test import SimpleTestCase from django.test.utils import override_settings from django.urls import NoReverseMatch, Resolver404, path, resolve, reverse +from django.views import View from .converters import DynamicConverter from .views import empty_view @@ -141,6 +142,19 @@ def test_invalid_converter(self): with self.assertRaisesMessage(ImproperlyConfigured, msg): path('foo/<nonexistent:var>/', empty_view) + def test_invalid_view(self): + msg = 'view must be a callable or a list/tuple in the case of include().' + with self.assertRaisesMessage(TypeError, msg): + path('articles/', 'invalid_view') + + def test_invalid_view_instance(self): + class EmptyCBV(View): + pass + + msg = 'view must be a callable, pass EmptyCBV.as_view(), not EmptyCBV().' + with self.assertRaisesMessage(TypeError, msg): + path('foo', EmptyCBV()) + def test_whitespace_in_route(self): msg = ( "URL route 'space/<int:num>/extra/<str:%stest>' cannot contain "
Ticket https://code.djangoproject.com/ticket/32195#ticket
https://api.github.com/repos/django/django/pulls/13682
2020-11-14T17:37:01Z
2021-06-09T09:42:35Z
2021-06-09T09:42:35Z
2021-06-09T09:42:35Z
1,390
django/django
51,583
Update installation steps and specify opencv version
diff --git a/PPOCRLabel/Makefile b/PPOCRLabel/Makefile new file mode 100644 index 0000000000..7d72a890cb --- /dev/null +++ b/PPOCRLabel/Makefile @@ -0,0 +1,35 @@ +# ex: set ts=8 noet: + +all: qt5 test + +test: testpy3 + +testpy2: + python -m unittest discover tests + +testpy3: + python3 -m unittest discover tests + +qt4: qt4py2 + +qt5: qt5py3 + +qt4py2: + pyrcc4 -py2 -o libs/resources.py resources.qrc + +qt4py3: + pyrcc4 -py3 -o libs/resources.py resources.qrc + +qt5py3: + pyrcc5 -o libs/resources.py resources.qrc + +clean: + rm -rf ~/.labelImgSettings.pkl *.pyc dist labelImg.egg-info __pycache__ build + +pip_upload: + python3 setup.py upload + +long_description: + restview --long-description + +.PHONY: all diff --git a/PPOCRLabel/README.md b/PPOCRLabel/README.md index 93fd64ffe1..1b46e41276 100644 --- a/PPOCRLabel/README.md +++ b/PPOCRLabel/README.md @@ -24,11 +24,9 @@ python PPOCRLabel.py #### Ubuntu Linux ``` -sudo apt-get install pyqt5-dev-tools -sudo apt-get install trash-cli +pip3 install pyqt5 +pip3 install trash-cli cd ./PPOCRLabel # 将目录切换到PPOCRLabel文件夹下 -sudo pip3 install -r requirements/requirements-linux-python3.txt -make qt5py3 python3 PPOCRLabel.py ``` @@ -38,7 +36,6 @@ pip3 install pyqt5 pip3 uninstall opencv-python # 由于mac版本的opencv与pyqt有冲突,需先手动卸载opencv pip3 install opencv-contrib-python-headless # 安装headless版本的open-cv cd ./PPOCRLabel # 将目录切换到PPOCRLabel文件夹下 -make qt5py3 python3 PPOCRLabel.py ``` @@ -75,6 +72,20 @@ python3 PPOCRLabel.py | rec_gt.txt | 识别标签。可直接用于PPOCR识别模型训练。需用户手动点击菜单栏“PaddleOCR” - "保存识别结果"后产生。 | | crop_img | 识别数据。按照检测框切割后的图片。与rec_gt.txt同时产生。 | +## 说明 +### 内置模型 + - 默认模型:PPOCRLabel默认使用PaddleOCR中的中英文超轻量OCR模型,支持中英文与数字识别,多种语言检测。 + - 模型语言切换:用户可通过菜单栏中 "PaddleOCR" - "选择模型" 切换内置模型语言,目前支持的语言包括法文、德文、韩文、日文。具体模型下载链接可参考[PaddleOCR模型列表](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/models_list.md). + - 自定义模型:用户可根据[自定义模型代码使用](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/whl.md#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%A8%A1%E5%9E%8B),通过修改PPOCRLabel.py中针对[PaddleOCR类的实例化](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/PPOCRLabel/PPOCRLabel.py#L110)替换成自己训练的模型 + +### 错误提示 +- 如果同时使用whl包安装了paddleocr,其优先级大于通过paddleocr.py调用PaddleOCR类,whl包未更新时会导致程序异常。 +- PPOCRLabel**不支持对中文文件名**的图片进行自动标注。 +- 如果您在打开软件过程中出现**objc[XXXXX]**开头的错误,证明您的opencv版本太高,建议安装4.2版本: +``` +pip install opencv-python==4.2.0.32 +``` + ### 参考资料 1.[Tzutalin. LabelImg. 
Git code (2015)](https://github.com/tzutalin/labelImg) diff --git a/PPOCRLabel/README_en.md b/PPOCRLabel/README_en.md index 7ebbb97c1f..d503dd3d39 100644 --- a/PPOCRLabel/README_en.md +++ b/PPOCRLabel/README_en.md @@ -26,11 +26,9 @@ python PPOCRLabel.py --lang en #### Ubuntu Linux ``` -sudo apt-get install pyqt5-dev-tools -sudo apt-get install trash-cli +pip3 install pyqt5 +pip3 install trash-cli cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder -sudo pip3 install -r requirements/requirements-linux-python3.txt -make qt5py3 python3 PPOCRLabel.py --lang en ``` @@ -40,7 +38,6 @@ pip3 install pyqt5 pip3 uninstall opencv-python # Uninstall opencv manually as it conflicts with pyqt pip3 install opencv-contrib-python-headless # Install the headless version of opencv cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder -make qt5py3 python3 PPOCRLabel.py --lang en ``` @@ -92,6 +89,14 @@ Therefore, if the recognition result has been manually changed before, it may ch | rec_gt.txt | The recognition label file, which can be directly used for PPOCR identification model training, is generated after the user clicks on the menu bar "PaddleOCR"-"Save recognition result". | | crop_img | The recognition data, generated at the same time with *rec_gt.txt* | + +### Built-in Model +- Default model: PPOCRLabel uses the Chinese and English ultra-lightweight OCR model in PaddleOCR by default, supports Chinese, English and number recognition, and multiple language detection. +- Model language switching: Changing the built-in model language is supportable by clicking "PaddleOCR"-"Choose OCR Model" in the menu bar. Currently supported languages​include French, German, Korean, and Japanese. +For specific model download links, please refer to [PaddleOCR Model List](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/models_list_en.md#multilingual-recognition-modelupdating) +- Custom model: The model trained by users can be replaced by modifying PPOCRLabel.py in [PaddleOCR class instantiation](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/PPOCRLabel/PPOCRLabel.py#L110) referring [Custom Model Code](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/whl_en.md#use-custom-model) + + ## Related 1.[Tzutalin. LabelImg. Git code (2015)](https://github.com/tzutalin/labelImg) diff --git a/requirements.txt b/requirements.txt index aa1b6db4c2..4d9e8650f5 100644 --- a/requirements.txt +++ b/requirements.txt @@ -4,4 +4,4 @@ pyclipper lmdb tqdm numpy -opencv-python +opencv-python==4.2.0.32
As the new version of OpenCV (4.4.x) conflicts with PyQt, the OpenCV version needs to be pinned (`opencv-python==4.2.0.32` in requirements.txt).
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/1309
2020-12-02T13:22:37Z
2020-12-03T03:05:37Z
2020-12-03T03:05:37Z
2020-12-03T03:05:37Z
1,761
PaddlePaddle/PaddleOCR
42,685
Added YCML
diff --git a/README.md b/README.md index 14283b3d..2ed9ceef 100644 --- a/README.md +++ b/README.md @@ -575,6 +575,7 @@ on MNIST digits[DEEP LEARNING] <a name="objectivec-general-purpose"> ### General-Purpose Machine Learning +* [YCML](https://github.com/yconst/YCML) - A Machine Learning framework for Objective-C and Swift (OS X / iOS). * [MLPNeuralNet](https://github.com/nikolaypavlov/MLPNeuralNet) - Fast multilayer perceptron neural network library for iOS and Mac OS X. MLPNeuralNet predicts new examples by trained neural network. It is built on top of the Apple's Accelerate Framework, using vectorized operations and hardware acceleration if available. * [MAChineLearning](https://github.com/gianlucabertani/MAChineLearning) - An Objective-C multilayer perceptron library, with full support for training through backpropagation. Implemented using vDSP and vecLib, it's 20 times faster than its Java equivalent. Includes sample code for use from Swift. * [BPN-NeuralNetwork](https://github.com/Kalvar/ios-BPN-NeuralNetwork) - It implemented 3 layers neural network ( Input Layer, Hidden Layer and Output Layer ) and it named Back Propagation Neural Network (BPN). This network can be used in products recommendation, user behavior analysis, data mining and data analysis.
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/181
2015-08-24T18:05:46Z
2015-08-25T01:52:21Z
2015-08-25T01:52:21Z
2015-08-25T01:52:26Z
334
josephmisiti/awesome-machine-learning
52,519
Save hypernetwork hash and fix hypernetwork parameter restoring
diff --git a/modules/generation_parameters_copypaste.py b/modules/generation_parameters_copypaste.py index 565e342df09..fbd91300550 100644 --- a/modules/generation_parameters_copypaste.py +++ b/modules/generation_parameters_copypaste.py @@ -14,6 +14,7 @@ re_param = re.compile(re_param_code) re_params = re.compile(r"^(?:" + re_param_code + "){3,}$") re_imagesize = re.compile(r"^(\d+)x(\d+)$") +re_hypernet_hash = re.compile("\(([0-9a-f]+)\)$") type_of_gr_update = type(gr.update()) paste_fields = {} bind_list = [] @@ -139,6 +140,30 @@ def run_bind(): ) +def find_hypernetwork_key(hypernet_name, hypernet_hash=None): + """Determines the config parameter name to use for the hypernet based on the parameters in the infotext. + + Example: an infotext provides "Hypernet: ke-ta" and "Hypernet hash: 1234abcd". For the "Hypernet" config + parameter this means there should be an entry that looks like "ke-ta-10000(1234abcd)" to set it to. + + If the infotext has no hash, then a hypernet with the same name will be selected instead. + """ + hypernet_name = hypernet_name.lower() + if hypernet_hash is not None: + # Try to match the hash in the name + for hypernet_key in shared.hypernetworks.keys(): + result = re_hypernet_hash.search(hypernet_key) + if result is not None and result[1] == hypernet_hash: + return hypernet_key + else: + # Fall back to a hypernet with the same name + for hypernet_key in shared.hypernetworks.keys(): + if hypernet_key.lower().startswith(hypernet_name): + return hypernet_key + + return None + + def parse_generation_parameters(x: str): """parses generation parameters string, the one you see in text field under the picture in UI: ``` @@ -188,6 +213,14 @@ def parse_generation_parameters(x: str): if "Clip skip" not in res: res["Clip skip"] = "1" + if "Hypernet strength" not in res: + res["Hypernet strength"] = "1" + + if "Hypernet" in res: + hypernet_name = res["Hypernet"] + hypernet_hash = res.get("Hypernet hash", None) + res["Hypernet"] = find_hypernetwork_key(hypernet_name, hypernet_hash) + return res diff --git a/modules/processing.py b/modules/processing.py index 24c537d14ec..6dd7491b846 100644 --- a/modules/processing.py +++ b/modules/processing.py @@ -314,7 +314,7 @@ def js(self): return json.dumps(obj) - def infotext(self, p: StableDiffusionProcessing, index): + def infotext(self, p: StableDiffusionProcessing, index): return create_infotext(p, self.all_prompts, self.all_seeds, self.all_subseeds, comments=[], position_in_batch=index % self.batch_size, iteration=index // self.batch_size) @@ -429,6 +429,7 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments, iteration "Model hash": getattr(p, 'sd_model_hash', None if not opts.add_model_hash_to_info or not shared.sd_model.sd_model_hash else shared.sd_model.sd_model_hash), "Model": (None if not opts.add_model_name_to_info or not shared.sd_model.sd_checkpoint_info.model_name else shared.sd_model.sd_checkpoint_info.model_name.replace(',', '').replace(':', '')), "Hypernet": (None if shared.loaded_hypernetwork is None else shared.loaded_hypernetwork.name), + "Hypernet hash": (None if shared.loaded_hypernetwork is None else sd_models.model_hash(shared.loaded_hypernetwork.filename)), "Hypernet strength": (None if shared.loaded_hypernetwork is None or shared.opts.sd_hypernetwork_strength >= 1 else shared.opts.sd_hypernetwork_strength), "Batch size": (None if p.batch_size < 2 else p.batch_size), "Batch pos": (None if p.batch_size < 2 else position_in_batch), @@ -446,7 +447,7 @@ def create_infotext(p, all_prompts, 
all_seeds, all_subseeds, comments, iteration generation_params_text = ", ".join([k if k == v else f'{k}: {generation_parameters_copypaste.quote(v)}' for k, v in generation_params.items() if v is not None]) - negative_prompt_text = "\nNegative prompt: " + p.all_negative_prompts[index] if p.all_negative_prompts[index] else "" + negative_prompt_text = "\nNegative prompt: " + p.all_negative_prompts[index] if p.all_negative_prompts[index] else "" return f"{all_prompts[index]}{negative_prompt_text}\n{generation_params_text}".strip()
1. Saves the hash of the current hypernetwork into the infotext
2. Fixes the restoring of the hypernetwork config parameter from the hypernetwork name/hash (see the lookup sketch below)
3. Fixes the restoring of the hypernetwork strength parameter

Closes #4824
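A self-contained sketch of the lookup added in `modules/generation_parameters_copypaste.py`; the dict argument stands in for `shared.hypernetworks`, otherwise this follows the diff:

```python
import re

# Hypernetwork entries look like "ke-ta-10000(1234abcd)"; the hash sits in the suffix.
re_hypernet_hash = re.compile(r"\(([0-9a-f]+)\)$")


def find_hypernetwork_key(hypernetworks, hypernet_name, hypernet_hash=None):
    hypernet_name = hypernet_name.lower()
    if hypernet_hash is not None:
        # Try to match the hash embedded in the entry name.
        for key in hypernetworks:
            m = re_hypernet_hash.search(key)
            if m is not None and m[1] == hypernet_hash:
                return key
    else:
        # Fall back to an entry whose name starts with the given name.
        for key in hypernetworks:
            if key.lower().startswith(hypernet_name):
                return key
    return None


entries = {"ke-ta-10000(1234abcd)": object()}
assert find_hypernetwork_key(entries, "ke-ta", "1234abcd") == "ke-ta-10000(1234abcd)"
assert find_hypernetwork_key(entries, "ke-ta") == "ke-ta-10000(1234abcd)"
```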
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/5718
2022-12-13T22:33:31Z
2022-12-24T08:14:19Z
2022-12-24T08:14:19Z
2022-12-24T08:14:19Z
1,167
AUTOMATIC1111/stable-diffusion-webui
40,564
[web] don't crash when server has disappeared, fix #5677
diff --git a/web/src/js/components/FlowView/Messages.tsx b/web/src/js/components/FlowView/Messages.tsx index 4e74e05f77..89ae95cdaa 100644 --- a/web/src/js/components/FlowView/Messages.tsx +++ b/web/src/js/components/FlowView/Messages.tsx @@ -24,7 +24,16 @@ export default function Messages({flow, messages_meta}: MessagesPropTypes) { MessageUtils.getContentURL(flow, "messages", contentView, maxLines + 1), flow.id + messages_meta.count ); - const messages = useMemo<ContentViewData[] | undefined>(() => content && JSON.parse(content), [content]) || []; + const messages = useMemo<ContentViewData[] | undefined>(() => { + if (content) { + try { + return JSON.parse(content) + } catch (e) { + const err: ContentViewData = {"description": "Network Error", lines: [[["error", `${content}`]]]}; + return err; + } + } + }, [content]) || []; return ( <div className="contentview">
Coincidentally managed to reproduce this one during testing. :)
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/5683
2022-10-26T16:12:53Z
2022-10-26T16:47:09Z
2022-10-26T16:47:09Z
2022-10-26T16:47:48Z
252
mitmproxy/mitmproxy
27,628
Improve options UX
diff --git a/mitmproxy/tools/main.py b/mitmproxy/tools/main.py index d8d16ea433..3735cbf445 100644 --- a/mitmproxy/tools/main.py +++ b/mitmproxy/tools/main.py @@ -20,7 +20,7 @@ from mitmproxy import optmanager # noqa from mitmproxy import proxy # noqa from mitmproxy import log # noqa -from mitmproxy.utils import debug # noqa +from mitmproxy.utils import debug, arg_check # noqa def assert_utf8_env(): @@ -72,7 +72,17 @@ def run( master = master_cls(opts) parser = make_parser(opts) - args = parser.parse_args(arguments) + + # To make migration from 2.x to 3.0 bearable. + if "-R" in sys.argv and sys.argv[sys.argv.index("-R") + 1].startswith("http"): + print("-R is used for specifying replacements.\n" + "To use mitmproxy in reverse mode please use --mode reverse:SPEC instead") + + try: + args = parser.parse_args(arguments) + except SystemExit: + arg_check.check() + sys.exit(1) try: unknown = optmanager.load_paths(opts, args.conf) pconf = process_options(parser, opts, args) diff --git a/mitmproxy/utils/arg_check.py b/mitmproxy/utils/arg_check.py new file mode 100644 index 0000000000..73f7047cca --- /dev/null +++ b/mitmproxy/utils/arg_check.py @@ -0,0 +1,148 @@ +import sys + +DEPRECATED = """ +--cadir +-Z +--body-size-limit +--stream +--palette +--palette-transparent +--follow +--order +--no-mouse +--reverse +--socks +--http2-priority +--no-http2-priority +--no-websocket +--websocket +--spoof-source-address +--upstream-bind-address +--ciphers-client +--ciphers-server +--client-certs +--no-upstream-cert +--add-upstream-certs-to-client-chain +--upstream-trusted-cadir +--upstream-trusted-ca +--ssl-version-client +--ssl-version-server +--no-onboarding +--onboarding-host +--onboarding-port +--server-replay-use-header +--no-pop +--replay-ignore-content +--replay-ignore-payload-param +--replay-ignore-param +--replay-ignore-host +--replace-from-file +""" + +REPLACED = """ +-t +-u +--wfile +-a +--afile +-z +-b +--bind-address +--port +-I +--ignore +--tcp +--cert +--insecure +-c +--replace +-i +-f +--filter +""" + +REPLACEMENTS = { + "--stream": "stream_large_bodies", + "--palette": "console_palette", + "--palette-transparent": "console_palette_transparent:", + "--follow": "console_focus_follow", + "--order": "console_order", + "--no-mouse": "console_mouse", + "--reverse": "console_order_reversed", + "--no-http2-priority": "http2_priority", + "--no-websocket": "websocket", + "--no-upstream-cert": "upstream_cert", + "--upstream-trusted-cadir": "ssl_verify_upstream_trusted_cadir", + "--upstream-trusted-ca": "ssl_verify_upstream_trusted_ca", + "--no-onboarding": "onboarding", + "--no-pop": "server_replay_nopop", + "--replay-ignore-content": "server_replay_ignore_content", + "--replay-ignore-payload-param": "server_replay_ignore_payload_params", + "--replay-ignore-param": "server_replay_ignore_params", + "--replay-ignore-host": "server_replay_ignore_host", + "--replace-from-file": "replacements (use @ to specify path)", + "-t": "--stickycookie", + "-u": "--stickyauth", + "--wfile": "--save-stream-file", + "-a": "-w Prefix path with + to append.", + "--afile": "-w Prefix path with + to append.", + "-z": "--anticomp", + "-b": "--listen-host", + "--bind-address": "--listen-host", + "--port": "--listen-port", + "-I": "--ignore-hosts", + "--ignore": "--ignore-hosts", + "--tcp": "--tcp-hosts", + "--cert": "--certs", + "--insecure": "--ssl-insecure", + "-c": "-C", + "--replace": "--replacements", + "-i": "--intercept", + "-f": "--view-filter", + "--filter": "--view-filter" +} + + +def check(): + args = sys.argv[1:] + 
print() + if "-U" in args: + print("-U is deprecated, please use --mode upstream:SPEC instead") + + if "-T" in args: + print("-T is deprecated, please use --mode transparent instead") + + for option in ("-e", "--eventlog", "--norefresh"): + if option in args: + print("{} has been removed.".format(option)) + + for option in ("--nonanonymous", "--singleuser", "--htpasswd"): + if option in args: + print( + '{} is deprecated.\n' + 'Please use `--proxyauth SPEC` instead.\n' + 'SPEC Format: "username:pass", "any" to accept any user/pass combination,\n' + '"@path" to use an Apache htpasswd file, or\n' + '"ldap[s]:url_server_ldap:dn_auth:password:dn_subtree" ' + 'for LDAP authentication.'.format(option)) + + for option in REPLACED.splitlines(): + if option in args: + print( + "{} is deprecated.\n" + "Please use `{}` instead.".format( + option, + REPLACEMENTS.get(option) + ) + ) + + for option in DEPRECATED.splitlines(): + if option in args: + print( + "{} is deprecated.\n" + "Please use `--set {}=value` instead.\n" + "To show all options and their default values use --options".format( + option, + REPLACEMENTS.get(option, None) or option.lstrip("-").replace("-", "_") + ) + ) diff --git a/test/mitmproxy/utils/test_arg_check.py b/test/mitmproxy/utils/test_arg_check.py new file mode 100644 index 0000000000..72913955a6 --- /dev/null +++ b/test/mitmproxy/utils/test_arg_check.py @@ -0,0 +1,36 @@ +import io +import contextlib +from unittest import mock + +import pytest + +from mitmproxy.utils import arg_check + + +@pytest.mark.parametrize('arg, output', [ + (["-T"], "-T is deprecated, please use --mode transparent instead"), + (["-U"], "-U is deprecated, please use --mode upstream:SPEC instead"), + (["--cadir"], "--cadir is deprecated.\n" + "Please use `--set cadir=value` instead.\n" + "To show all options and their default values use --options"), + (["--palette"], "--palette is deprecated.\n" + "Please use `--set console_palette=value` instead.\n" + "To show all options and their default values use --options"), + (["--wfile"], "--wfile is deprecated.\n" + "Please use `--save-stream-file` instead."), + (["--eventlog"], "--eventlog has been removed."), + (["--nonanonymous"], '--nonanonymous is deprecated.\n' + 'Please use `--proxyauth SPEC` instead.\n' + 'SPEC Format: "username:pass", "any" to accept any user/pass combination,\n' + '"@path" to use an Apache htpasswd file, or\n' + '"ldap[s]:url_server_ldap:dn_auth:password:dn_subtree" ' + 'for LDAP authentication.') + +]) +def test_check_args(arg, output): + f = io.StringIO() + with contextlib.redirect_stdout(f): + with mock.patch('sys.argv') as m: + m.__getitem__.return_value = arg + arg_check.check() + assert f.getvalue().strip() == output
More checks can be added to this. Almost all of the options that will be removed can be set using `--set option_name=value`. Maybe add a check for that? ref: #2498
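A trimmed sketch of the check being suggested, following the shape of the new `mitmproxy/utils/arg_check.py` (only a couple of entries from `DEPRECATED`/`REPLACEMENTS` are reproduced here):

```python
# Deprecated flags whose values now live in options; a missing entry in
# REPLACEMENTS means the option name is derived from the flag itself.
DEPRECATED = {"--cadir", "--palette", "--body-size-limit"}
REPLACEMENTS = {"--palette": "console_palette"}


def deprecation_hint(arg):
    if arg not in DEPRECATED:
        return None
    option = REPLACEMENTS.get(arg) or arg.lstrip("-").replace("-", "_")
    return ("{} is deprecated.\n"
            "Please use `--set {}=value` instead.\n"
            "To show all options and their default values use --options"
            .format(arg, option))


print(deprecation_hint("--cadir"))   # suggests `--set cadir=value`
```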
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/2503
2017-08-05T17:09:18Z
2017-08-07T14:22:17Z
2017-08-07T14:22:17Z
2017-08-07T14:22:17Z
1,986
mitmproxy/mitmproxy
28,043
[updated] improve softmax implementation
diff --git a/keras/activations.py b/keras/activations.py index a0ae77385fe..cf0f2134866 100644 --- a/keras/activations.py +++ b/keras/activations.py @@ -25,7 +25,9 @@ def softmax(x, axis=-1): ValueError: In case `dim(x) == 1`. """ ndim = K.ndim(x) - if ndim == 2: + if ndim == 1: + raise ValueError('Cannot apply softmax to a tensor that is 1D') + elif ndim == 2: return K.softmax(x) elif ndim > 2: e = K.exp(x - K.max(x, axis=axis, keepdims=True)) diff --git a/keras/backend/theano_backend.py b/keras/backend/theano_backend.py index 17ce97fb13f..9a47923c058 100644 --- a/keras/backend/theano_backend.py +++ b/keras/backend/theano_backend.py @@ -1716,10 +1716,11 @@ def relu(x, alpha=0., max_value=None): def softmax(x, axis=-1): - if axis == -1 or axis == x.ndim - 1: + if (axis == -1 or axis == x.ndim - 1) and x.ndim == 2: return T.nnet.softmax(x) - return T.exp(x - x.max()) / T.exp( - x - x.max()).sum(axis=axis, keepdims=True) + xm = x.max(axis=axis, keepdims=True) + return T.exp(x - xm) / T.exp( + x - xm).sum(axis=axis, keepdims=True) def softplus(x): diff --git a/tests/keras/activations_test.py b/tests/keras/activations_test.py index 5be59def8c8..fe93075fa01 100644 --- a/tests/keras/activations_test.py +++ b/tests/keras/activations_test.py @@ -79,6 +79,23 @@ def test_softmax_invalid(): f = K.function([x], [activations.softmax(x)]) +def test_softmax_3d(): + """Test using a reference implementation of softmax. + """ + def softmax(values, axis): + m = np.max(values, axis=axis, keepdims=True) + e = np.exp(values - m) + return e / np.sum(e, axis=axis, keepdims=True) + + x = K.placeholder(ndim=3) + f = K.function([x], [activations.softmax(x, axis=1)]) + test_values = get_standard_values()[:, :, np.newaxis].copy() + + result = f([test_values])[0] + expected = softmax(test_values, axis=1) + assert_allclose(result, expected, rtol=1e-05) + + def test_time_distributed_softmax(): x = K.placeholder(shape=(1, 1, 5)) f = K.function([x], [activations.softmax(x)]) diff --git a/tests/keras/backend/backend_test.py b/tests/keras/backend/backend_test.py index f1d44c1a1de..eb614eda27c 100644 --- a/tests/keras/backend/backend_test.py +++ b/tests/keras/backend/backend_test.py @@ -975,6 +975,7 @@ def test_nn_operations(self): check_single_tensor_operation('tanh', (4, 2), WITH_NP) check_single_tensor_operation('softmax', (4, 10), WITH_NP) + check_single_tensor_operation('softmax', (4, 5, 3), WITH_NP, axis=1) check_single_tensor_operation('softmax', (4, 5, 3, 10), WITH_NP, axis=2) check_two_tensor_operation('binary_crossentropy', (4, 2), (4, 2), WITH_NP, from_logits=True)
### Summary

This PR is an update of #10027, which I kinda broke. Sorry about it.

### Related Issues

### PR Overview

I moved the correctness test into the backend tests; I think it belongs there rather than in the activations tests. (The reference softmax it checks against is sketched below.)

- [ ] This PR requires new unit tests [y/n] (make sure tests are included)
- [ ] This PR requires updating the documentation [y/n] (make sure the docs are up-to-date)
- [x] This PR is backwards compatible [y/n]
- [ ] This PR changes the current API [y/n] (all API changes need to be approved by fchollet)
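The reference implementation the new correctness test compares against is just a numerically stable NumPy softmax, as in `tests/keras/activations_test.py` from the diff:

```python
import numpy as np


def softmax(values, axis):
    # Subtract the max for numerical stability, exponentiate, normalise along axis.
    m = np.max(values, axis=axis, keepdims=True)
    e = np.exp(values - m)
    return e / np.sum(e, axis=axis, keepdims=True)


x = np.random.randn(4, 5, 3)
out = softmax(x, axis=1)
assert np.allclose(out.sum(axis=1), 1.0)   # probabilities sum to 1 along axis=1
```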
https://api.github.com/repos/keras-team/keras/pulls/11189
2018-09-21T10:52:25Z
2018-09-28T19:09:49Z
2018-09-28T19:09:48Z
2018-11-07T17:05:29Z
897
keras-team/keras
47,014
[embedding] add more detailed profiling
diff --git a/colossalai/nn/parallel/layers/cache_embedding/cache_mgr.py b/colossalai/nn/parallel/layers/cache_embedding/cache_mgr.py index 893188b71e85..d892901453c8 100644 --- a/colossalai/nn/parallel/layers/cache_embedding/cache_mgr.py +++ b/colossalai/nn/parallel/layers/cache_embedding/cache_mgr.py @@ -256,13 +256,13 @@ def flush(self): assert torch.all(self.cached_idx_map == -1).item() def print_comm_stats(self): - if self._cuda_to_cpu_numel > 0: + if self._cuda_to_cpu_numel > 0 and "3_2_2_evict_out_gpu_to_cpu_copy" in self._elapsed_dict: elapsed = self._elapsed_dict["3_2_2_evict_out_gpu_to_cpu_copy"] print( f"CUDA->CPU BWD {self._cuda_to_cpu_numel * self.elem_size_in_byte / 1e6 / elapsed} MB/s {self._cuda_to_cpu_numel / 1e6} M elem" ) print(f'cuda_to_cpu_elapse {elapsed} sec') - if self._cpu_to_cuda_numel > 0: + if self._cpu_to_cuda_numel > 0 and "3_4_2_evict_in_gpu_to_cpu_copy" in self._elapsed_dict: elapsed = self._elapsed_dict["3_4_2_evict_in_gpu_to_cpu_copy"] print( f"CPU->CUDA BWD {self._cpu_to_cuda_numel * self.elem_size_in_byte / 1e6 / elapsed} MB/s {self._cpu_to_cuda_numel / 1e6} M elem" @@ -382,8 +382,9 @@ def _prepare_rows_on_cuda(self, cpu_row_idxs: torch.Tensor) -> None: self.cached_idx_map.index_copy_(0, invalid_idxs, backup_idxs) elif self._evict_strategy == EvictionStrategy.LFU: - backup_freqs = self.freq_cnter[invalid_idxs].clone() - self.freq_cnter.index_fill_(0, invalid_idxs, sys.maxsize) + with self.timer("3_1_0_backup_freqs") as timer: + backup_freqs = self.freq_cnter[invalid_idxs].clone() + self.freq_cnter.index_fill_(0, invalid_idxs, sys.maxsize) with self.timer("3_1_1_find_evict_gpu_idxs_elapsed") as timer: evict_gpu_row_idxs = self._find_evict_gpu_idxs(evict_num) @@ -393,7 +394,8 @@ def _prepare_rows_on_cuda(self, cpu_row_idxs: torch.Tensor) -> None: -1).index_select(0, evict_gpu_row_idxs) evict_out_rows_cpu = torch.empty_like(evict_out_rows_gpu, device='cpu', pin_memory=True) evict_out_rows_cpu.copy_(evict_out_rows_gpu, non_blocking=True) - self.freq_cnter.index_copy_(0, invalid_idxs, backup_freqs) + with self.timer("3_1_2_find_evict_index_copy") as timer: + self.freq_cnter.index_copy_(0, invalid_idxs, backup_freqs) evict_info = self.cached_idx_map[evict_gpu_row_idxs] @@ -416,7 +418,8 @@ def _prepare_rows_on_cuda(self, cpu_row_idxs: torch.Tensor) -> None: with self.timer("3_2_2_evict_out_gpu_to_cpu_copy") as timer: evict_out_rows_cpu = evict_out_rows_cpu.cpu() - self.weight.view(self.num_embeddings, -1).index_copy_(0, evict_info.cpu(), evict_out_rows_cpu) + with self.timer("3_2_2_evict_out_index_select") as timer: + self.weight.view(self.num_embeddings, -1).index_copy_(0, evict_info.cpu(), evict_out_rows_cpu) self.cached_idx_map.index_fill_(0, evict_gpu_row_idxs, -1) self.inverted_cached_idx.index_fill_(0, evict_info, -1) @@ -447,13 +450,12 @@ def _prepare_rows_on_cuda(self, cpu_row_idxs: torch.Tensor) -> None: evict_in_rows_gpu.copy_(rows_cpu, non_blocking=True) pass else: - # TODO hotspot: index select copy cpu -> gpu, cpu index? 
- with self.timer("3_4_1_evict_in_index_select") as timer: # narrow index select to a subset of self.weight # tmp = torch.narrow(self.weight.view(self.num_embeddings, -1), 0, min(cpu_row_idxs).cpu(), max(cpu_row_idxs) - min(cpu_row_idxs) + 1) # evict_in_rows_gpu = tmp.index_select(0, cpu_row_idxs_copy - min(cpu_row_idxs).cpu()) - evict_in_rows_gpu = self.weight.view(self.num_embeddings, -1).index_select(0, cpu_row_idxs_copy) + evict_in_rows_gpu = self.weight.view(self.num_embeddings, + -1).index_select(0, cpu_row_idxs_copy).pin_memory() with self.timer("3_4_2_evict_in_gpu_to_cpu_copy") as timer: evict_in_rows_gpu = evict_in_rows_gpu.cuda() @@ -461,10 +463,9 @@ def _prepare_rows_on_cuda(self, cpu_row_idxs: torch.Tensor) -> None: with self.timer("3_4_3_evict_in_index_copy") as timer: self.cuda_cached_weight.view(self.cuda_row_num, -1).index_copy_(0, slots, evict_in_rows_gpu) - with self.timer("3_4_evict_in_elapse") as timer: - slot_offsets = slots + with self.timer("3_5_evict_in_elapse_final") as timer: self.cached_idx_map[slots] = cpu_row_idxs - self.inverted_cached_idx.index_copy_(0, cpu_row_idxs, slot_offsets) + self.inverted_cached_idx.index_copy_(0, cpu_row_idxs, slots) if self._evict_strategy == EvictionStrategy.LFU: self.freq_cnter.index_fill_(0, slots, 0) self._cuda_available_row_num -= cpu_row_idxs.numel()
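For readers unfamiliar with the profiling hooks used above: `self.timer(...)` is used as a context manager that accumulates elapsed time into `self._elapsed_dict`. A hypothetical, minimal stand-in (the real implementation in ColossalAI's cache manager may differ):

```python
import time
from contextlib import contextmanager

_elapsed = {}   # stand-in for self._elapsed_dict


@contextmanager
def timer(name):
    start = time.time()
    try:
        yield
    finally:
        _elapsed[name] = _elapsed.get(name, 0.0) + (time.time() - start)


with timer("3_1_1_find_evict_gpu_idxs_elapsed"):
    sum(range(10_000))
print(_elapsed)
```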
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/1656
2022-09-27T05:43:35Z
2022-09-27T05:44:00Z
2022-09-27T05:44:00Z
2022-09-27T05:44:00Z
1,386
hpcaitech/ColossalAI
11,344
Take `no_embed_class` into account when calling `resize_pos_embed`
diff --git a/timm/models/vision_transformer.py b/timm/models/vision_transformer.py index 9066a9de88..9602355bb1 100644 --- a/timm/models/vision_transformer.py +++ b/timm/models/vision_transformer.py @@ -644,7 +644,7 @@ def checkpoint_filter_fn(state_dict, model, adapt_layer_scale=False): v = resize_pos_embed( v, model.pos_embed, - getattr(model, 'num_prefix_tokens', 1), + 0 if getattr(model, 'no_embed_class') else getattr(model, 'num_prefix_tokens', 1), model.patch_embed.grid_size ) elif adapt_layer_scale and 'gamma_' in k:
`num_prefix_tokens` gets overridden by `no_embed_class` [here](https://github.com/rwightman/pytorch-image-models/blob/d7b55a9429f3d56a991e604cbc2e9fdf1901612f/timm/models/vision_transformer.py#L372) when initializing `VisionTransformer`. However, `no_embed_class` is not considered when calling `resize_pos_embed`, which causes a mismatch between the shape of the resized position embeddings and the shape of the embedding in the initialized model. This fix is required to load `deit3` models at different image sizes.
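A self-contained sketch of the decision this one-line fix makes; the class below is a stand-in for timm's `VisionTransformer`, not the real model:

```python
class FakeModel:
    no_embed_class = True     # deit3-style models: prefix tokens carry no pos. embedding
    num_prefix_tokens = 1


def prefix_tokens_for_resize(model):
    # When no_embed_class is set, pos_embed covers patch tokens only, so
    # resize_pos_embed must be told there are 0 embedded prefix tokens.
    if getattr(model, 'no_embed_class', False):
        return 0
    return getattr(model, 'num_prefix_tokens', 1)


assert prefix_tokens_for_resize(FakeModel()) == 0
```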
https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1365
2022-07-24T11:28:38Z
2022-07-25T04:03:02Z
2022-07-25T04:03:02Z
2022-07-25T04:03:02Z
161
huggingface/pytorch-image-models
16,263
reduce unnecessary re-indexing extra networks directory
diff --git a/modules/ui_extra_networks.py b/modules/ui_extra_networks.py index beea1316083..e1c679eca35 100644 --- a/modules/ui_extra_networks.py +++ b/modules/ui_extra_networks.py @@ -417,21 +417,21 @@ def create_ui(interface: gr.Blocks, unrelated_tabs, tabname): dropdown_sort.change(fn=lambda: None, _js="function(){ applyExtraNetworkSort('" + tabname + "'); }") + def create_html(): + ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages] + def pages_html(): if not ui.pages_contents: - return refresh() - + create_html() return ui.pages_contents def refresh(): for pg in ui.stored_extra_pages: pg.refresh() - - ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages] - + create_html() return ui.pages_contents - interface.load(fn=pages_html, inputs=[], outputs=[*ui.pages]) + interface.load(fn=pages_html, inputs=[], outputs=ui.pages) button_refresh.click(fn=refresh, inputs=[], outputs=ui.pages) return ui
## Description

webui calls `extra networks refresh()` when the web page loads. The refresh function includes clearing out the existing extra networks list and crawling through the directories again. This is unnecessary in many cases, particularly on initial launch of the server followed by loading the web page in the browser, when the list was just created seconds before.

This possibly helps reduce the issue in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14507

> [Issue]: Long wait times before generation and display of Extra Networks after every launch, due to large amounts of Loras and Checkpoints

assuming that the issue is caused by crawling through the large directory.

---

More improvements can be made if we cache the extra networks list and HTML and only update them when the directories change. It might be possible to improve performance by making indexing and creation of the HTML multi-threaded (for each network type); moving indexing onto a separate thread might also improve performance. https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/9c6ea5386b568f72fc8f539c7f3c90053fd64e4a/extensions-builtin/Lora/networks.py#L643

---

## Changes

Split the `creating of the HTML` from the `refreshing the list of extra networks`: the refresh button still refreshes the list, re-indexing the directory, but the creation of the HTML triggered by loading the web page only creates the HTML (see the sketch below) and does not reload the list.

## Checklist:

- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
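A simplified sketch of the split described above, mirroring the change to `modules/ui_extra_networks.py`; `ui` and its pages are stand-ins for the real Gradio-backed objects:

```python
def create_html(ui):
    # Build the HTML for every stored page from the already-indexed network list.
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]


def pages_html(ui):
    # Called on page load: build the HTML only if it does not exist yet; no re-indexing.
    if not ui.pages_contents:
        create_html(ui)
    return ui.pages_contents


def refresh(ui):
    # Called by the refresh button: re-crawl the directories, then rebuild the HTML.
    for pg in ui.stored_extra_pages:
        pg.refresh()
    create_html(ui)
    return ui.pages_contents
```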
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/14512
2024-01-03T10:12:57Z
2024-01-03T16:15:46Z
2024-01-03T16:15:46Z
2024-01-03T16:15:48Z
276
AUTOMATIC1111/stable-diffusion-webui
40,139
TST: Add test for pandas on sys.getsizeof GH#21758
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index d31f617b9be15..4860ee235c03d 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -1,6 +1,8 @@ import collections from functools import partial import string +import subprocess +import sys import numpy as np import pytest @@ -229,3 +231,17 @@ def test_temp_setattr(with_exception): raise ValueError("Inside exception raised") raise ValueError("Outside exception raised") assert ser.name == "first" + + +def test_str_size(): + # GH#21758 + a = "a" + expected = sys.getsizeof(a) + pyexe = sys.executable.replace("\\", "/") + call = [ + pyexe, + "-c", + "a='a';import sys;sys.getsizeof(a);import pandas;print(sys.getsizeof(a));", + ] + result = subprocess.check_output(call).decode()[-4:-1].strip("\n") + assert int(result) == int(expected)
- [ ] closes #21758 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/53385
2023-05-25T07:24:35Z
2023-05-30T17:04:10Z
2023-05-30T17:04:10Z
2023-05-31T12:22:47Z
255
pandas-dev/pandas
45,214
Update semantic_segmentation.md
diff --git a/docs/source/en/tasks/semantic_segmentation.md b/docs/source/en/tasks/semantic_segmentation.md index 6bb25e3e2e18f..c3ad3e00f61aa 100644 --- a/docs/source/en/tasks/semantic_segmentation.md +++ b/docs/source/en/tasks/semantic_segmentation.md @@ -206,7 +206,7 @@ The transform is applied on the fly which is faster and consumes less disk space ## Evaluate -Including a metric during training is often helpful for evaluating your model's performance. You can quickly load a evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/accuracy) (IoU) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): +Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/accuracy) (IoU) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): ```py >>> import evaluate
fixed typo
https://api.github.com/repos/huggingface/transformers/pulls/26419
2023-09-26T14:07:57Z
2023-09-27T09:51:44Z
2023-09-27T09:51:44Z
2023-09-27T09:51:44Z
338
huggingface/transformers
12,324
jenkins: lock device resource first before making container
diff --git a/Jenkinsfile b/Jenkinsfile index dffffae7f7bdcf..ef1996d701b109 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -78,8 +78,8 @@ def deviceStage(String stageName, String deviceType, List extra_env, def steps) def extra = extra_env.collect { "export ${it}" }.join('\n'); def branch = env.BRANCH_NAME ?: 'master'; - docker.image('ghcr.io/commaai/alpine-ssh').inside('--user=root') { - lock(resource: "", label: deviceType, inversePrecedence: true, variable: 'device_ip', quantity: 1, resourceSelectStrategy: 'random') { + lock(resource: "", label: deviceType, inversePrecedence: true, variable: 'device_ip', quantity: 1, resourceSelectStrategy: 'random') { + docker.image('ghcr.io/commaai/alpine-ssh').inside('--user=root') { timeout(time: 20, unit: 'MINUTES') { retry (3) { device(device_ip, "git checkout", extra + "\n" + readFile("selfdrive/test/setup_device_ci.sh"))
so we don't create a bunch of docker containers while waiting for the resource to open up
https://api.github.com/repos/commaai/openpilot/pulls/31330
2024-02-06T21:32:16Z
2024-02-06T21:35:09Z
2024-02-06T21:35:09Z
2024-02-06T21:36:48Z
270
commaai/openpilot
8,951
[3.6] bpo-29706: Test that IDLE colors async/await as keywords. (GH-6846)
diff --git a/Lib/idlelib/colorizer.py b/Lib/idlelib/colorizer.py index 5cb85f24dfd723..1f31ce22d7e510 100644 --- a/Lib/idlelib/colorizer.py +++ b/Lib/idlelib/colorizer.py @@ -17,8 +17,6 @@ def make_pat(): builtinlist = [str(name) for name in dir(builtins) if not name.startswith('_') and \ name not in keyword.kwlist] - # self.file = open("file") : - # 1st 'file' colorized normal, 2nd as builtin, 3rd as string builtin = r"([^.'\"\\#]\b|^)" + any("BUILTIN", builtinlist) + r"\b" comment = any("COMMENT", [r"#[^\n]*"]) stringprefix = r"(?i:r|u|f|fr|rf|b|br|rb)?" @@ -268,13 +266,14 @@ def _color_delegator(parent): # htest # "else: float(None)\n" "if iF + If + IF: 'keyword matching must respect case'\n" "if'': x or'' # valid string-keyword no-space combinations\n" + "async def f(): await g()\n" "# All valid prefixes for unicode and byte strings should be colored.\n" "'x', '''x''', \"x\", \"\"\"x\"\"\"\n" "r'x', u'x', R'x', U'x', f'x', F'x'\n" "fr'x', Fr'x', fR'x', FR'x', rf'x', rF'x', Rf'x', RF'x'\n" "b'x',B'x', br'x',Br'x',bR'x',BR'x', rb'x'.rB'x',Rb'x',RB'x'\n" "# Invalid combinations of legal characters should be half colored.\n" - "ur'x', ru'x', uf'x', fu'x', UR'x', ufr'x', rfu'x', xf'x', fx'x'" + "ur'x', ru'x', uf'x', fu'x', UR'x', ufr'x', rfu'x', xf'x', fx'x'\n" ) text = Text(top, background="white") text.pack(expand=1, fill="both")
Added to the eye-verified htest, not to the unittests. Also removed some stray leftover comments. (cherry picked from commit 389a48ede92bf7965889d554d2cd17b50d6e3d86) Co-authored-by: Terry Jan Reedy <tjreedy@udel.edu> <!-- issue-number: bpo-29706 --> https://bugs.python.org/issue29706 <!-- /issue-number -->
https://api.github.com/repos/python/cpython/pulls/6868
2018-05-15T18:21:47Z
2018-05-15T20:48:14Z
2018-05-15T20:48:14Z
2018-05-15T20:48:18Z
574
python/cpython
4,074
[ci] fix shardformer tests.
diff --git a/colossalai/booster/plugin/hybrid_parallel_plugin.py b/colossalai/booster/plugin/hybrid_parallel_plugin.py index 205660f946e9..8ee1e97c6ce3 100644 --- a/colossalai/booster/plugin/hybrid_parallel_plugin.py +++ b/colossalai/booster/plugin/hybrid_parallel_plugin.py @@ -919,6 +919,7 @@ class HybridParallelPlugin(PipelinePluginBase): custom_policy (Policy, optional): Custom policy for Shardformer. Defaults to None. pp_style (str, optional): The style for pipeline parallelism. Defaults to '1f1b'. num_model_chunks (int, optional): The number of model chunks for interleaved pipeline parallelism. Defaults to 1. + enable_metadata_cache (bool, optional): Whether to enable metadata cache for pipeline parallelism. Defaults to True. """ def __init__( @@ -956,6 +957,7 @@ def __init__( custom_policy: Policy = None, pp_style: str = "1f1b", num_model_chunks: int = 1, + enable_metadata_cache: bool = True, ) -> None: super().__init__() assert ( @@ -1002,10 +1004,14 @@ def __init__( num_model_chunks=num_model_chunks, num_microbatch=num_microbatches, microbatch_size=microbatch_size, + enable_metadata_cache=enable_metadata_cache, ) elif pp_style == "1f1b": self.schedule = OneForwardOneBackwardSchedule( - self.stage_manager, num_microbatches=num_microbatches, microbatch_size=microbatch_size + stage_manager=self.stage_manager, + num_microbatches=num_microbatches, + microbatch_size=microbatch_size, + enable_metadata_cache=enable_metadata_cache, ) else: raise NotImplementedError() diff --git a/tests/test_shardformer/test_model/test_shard_gpt2.py b/tests/test_shardformer/test_model/test_shard_gpt2.py index 66b30641acc8..3155420f1cf2 100644 --- a/tests/test_shardformer/test_model/test_shard_gpt2.py +++ b/tests/test_shardformer/test_model/test_shard_gpt2.py @@ -165,7 +165,7 @@ def check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, ) @clear_cache_before_run() def run_gpt2_test(test_config): - sub_model_zoo = model_zoo.get_sub_registry("transformers_gpt") + sub_model_zoo = model_zoo.get_sub_registry("transformers_gpt", exclude="transformers_gptj") for name, (model_fn, data_gen_fn, output_transform_fn, loss_fn, _) in sub_model_zoo.items(): check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, test_config) @@ -200,7 +200,7 @@ def run_gpt2_test(test_config): ) @clear_cache_before_run() def run_gpt2_3d_test(test_config): - sub_model_zoo = model_zoo.get_sub_registry("transformers_gpt") + sub_model_zoo = model_zoo.get_sub_registry("transformers_gpt", exclude="transformers_gptj") for name, (model_fn, data_gen_fn, output_transform_fn, loss_fn, _) in sub_model_zoo.items(): check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, test_config) diff --git a/tests/test_shardformer/test_model/test_shard_t5.py b/tests/test_shardformer/test_model/test_shard_t5.py index 73f203d1f023..22c201458ad4 100644 --- a/tests/test_shardformer/test_model/test_shard_t5.py +++ b/tests/test_shardformer/test_model/test_shard_t5.py @@ -86,6 +86,7 @@ def check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, "tp_size": 2, "pp_size": 2, "num_microbatches": 2, + "enable_metadata_cache": False, "enable_all_optimization": True, "use_lazy_init": True, "precision": "fp16", @@ -95,6 +96,7 @@ def check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, "tp_size": 1, "pp_size": 2, "num_microbatches": 4, + "enable_metadata_cache": False, "use_lazy_init": False, "precision": "fp16", "initial_scale": 1, @@ -110,6 +112,7 @@ def 
check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, "tp_size": 1, "pp_size": 4, "num_microbatches": 4, + "enable_metadata_cache": False, "enable_all_optimization": False, "use_lazy_init": False, "precision": "fp32", @@ -128,6 +131,7 @@ def check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, "tp_size": 1, "pp_size": 2, "num_microbatches": 2, + "enable_metadata_cache": False, "enable_all_optimization": True, "use_lazy_init": True, "zero_stage": 1, @@ -159,6 +163,7 @@ def run_t5_test(test_config): "tp_size": 2, "pp_size": 2, "num_microbatches": 4, + "enable_metadata_cache": False, "enable_all_optimization": False, "use_lazy_init": False, "precision": "fp32", @@ -168,6 +173,7 @@ def run_t5_test(test_config): "tp_size": 2, "pp_size": 2, "num_microbatches": 4, + "enable_metadata_cache": False, "enable_all_optimization": False, "use_lazy_init": False, "precision": "fp16", diff --git a/tests/test_shardformer/test_model/test_shard_whisper.py b/tests/test_shardformer/test_model/test_shard_whisper.py index f839bd84ab69..6efb8a922f85 100644 --- a/tests/test_shardformer/test_model/test_shard_whisper.py +++ b/tests/test_shardformer/test_model/test_shard_whisper.py @@ -114,6 +114,7 @@ def check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, "tp_size": 2, "pp_size": 2, "num_microbatches": 2, + "enable_metadata_cache": False, "enable_all_optimization": True, "use_lazy_init": True, "precision": "fp32", @@ -123,6 +124,7 @@ def check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, "tp_size": 1, "pp_size": 2, "num_microbatches": 4, + "enable_metadata_cache": False, "use_lazy_init": False, "precision": "fp32", "initial_scale": 1, @@ -138,6 +140,7 @@ def check_forward_backward(model_fn, data_gen_fn, output_transform_fn, loss_fn, "tp_size": 1, "pp_size": 4, "num_microbatches": 4, + "enable_metadata_cache": False, "use_lazy_init": False, "precision": "fp32", }, @@ -163,6 +166,7 @@ def run_whisper_test(test_config): "tp_size": 2, "pp_size": 2, "num_microbatches": 4, + "enable_metadata_cache": False, "enable_all_optimization": False, "use_lazy_init": False, "precision": "fp32", @@ -172,6 +176,7 @@ def run_whisper_test(test_config): "tp_size": 2, "pp_size": 2, "num_microbatches": 2, + "enable_metadata_cache": False, "enable_all_optimization": False, "use_lazy_init": False, "precision": "fp32",
## 📌 Checklist before creating the PR - [ ] I have created an issue for this PR for traceability - [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description` - [ ] I have added relevant tags if possible for us to better distinguish different PRs ## 🚨 Issue number > Link this PR to your issue with words like fixed to automatically close the linked issue upon merge > > e.g. `fixed #1234`, `closed #1234`, `resolved #1234` ## 📝 What does this PR do? > Summarize your work here. > if you have any plots/diagrams/screenshots/tables, please attach them here. ## 💥 Checklist before requesting a review - [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)) - [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible - [ ] I have performed a self-review of my code - [ ] I have added thorough tests. - [ ] I have added docstrings for all the functions/methods I implemented ## ⭐️ Do you enjoy contributing to Colossal-AI? - [ ] 🌝 Yes, I do. - [ ] 🌚 No, I don't. Tell us more if you don't enjoy contributing to Colossal-AI.
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/5255
2024-01-11T09:21:40Z
2024-01-11T11:07:45Z
2024-01-11T11:07:45Z
2024-01-11T11:07:46Z
1,866
hpcaitech/ColossalAI
11,640
[ffmpeg] use subprocess.check_call
diff --git a/src/you_get/processor/ffmpeg.py b/src/you_get/processor/ffmpeg.py index 94378daac4..ab262e555f 100644 --- a/src/you_get/processor/ffmpeg.py +++ b/src/you_get/processor/ffmpeg.py @@ -109,11 +109,9 @@ def ffmpeg_concat_flv_to_mp4(files, output='output.mp4'): params.append(output + '.txt') params += ['-c', 'copy', output] - if subprocess.call(params) == 0: - os.remove(output + '.txt') - return True - else: - raise + subprocess.check_call(params) + os.remove(output + '.txt') + return True for file in files: if os.path.isfile(file):
This fixes `RuntimeError: No active exception to reraise`. Raising a `CalledProcessError` is more useful for debugging and manual fixing. <!-- Reviewable:start --> [<img src="https://reviewable.io/review_button.png" height=40 alt="Review on Reviewable"/>](https://reviewable.io/reviews/soimort/you-get/678) <!-- Reviewable:end -->
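For reference, a generic illustration of the behavioural difference this change relies on (the ffmpeg arguments below are placeholders, not the extractor's actual invocation):

```python
import subprocess

# subprocess.call() only returns the exit status; the caller has to check it.
exit_code = subprocess.call(["ffmpeg", "-version"])
print("ffmpeg exited with", exit_code)

# subprocess.check_call() raises CalledProcessError on a non-zero exit status.
# The exception carries the failing command and return code, which is far more
# useful for debugging than a bare `raise`.
try:
    subprocess.check_call(["ffmpeg", "-i", "input.flv", "-c", "copy", "output.mp4"])
except subprocess.CalledProcessError as err:
    print(f"command {err.cmd} failed with exit code {err.returncode}")
```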
https://api.github.com/repos/soimort/you-get/pulls/678
2015-09-27T08:46:22Z
2015-09-27T18:23:30Z
2015-09-27T18:23:30Z
2015-09-27T18:23:30Z
179
soimort/you-get
21,289
chore(hybridcloud) Reduce logging level in pagerduty alert rule action
diff --git a/src/sentry/integrations/pagerduty/actions/notification.py b/src/sentry/integrations/pagerduty/actions/notification.py index 35184e617ebae..a0d84782e880d 100644 --- a/src/sentry/integrations/pagerduty/actions/notification.py +++ b/src/sentry/integrations/pagerduty/actions/notification.py @@ -41,14 +41,19 @@ def _get_service(self): def after(self, event, state, notification_uuid: Optional[str] = None): integration = self.get_integration() + log_context = { + "organization_id": self.project.organization_id, + "integration_id": self.get_option("account"), + "service": self.get_option("service"), + } if not integration: # integration removed but rule still exists - logger.error("Integration removed, however, the rule still refers to it.") + logger.info("pagerduty.org_integration_missing", extra=log_context) return service = self._get_service() if not service: - logger.error("The PagerDuty service does not exist anymore while integration does.") + logger.info("pagerduty.service_missing", extra=log_context) return def send_notification(event, futures):
Previously alert rules that were missing an integration would log at ERROR. However, there isn't anything we can do about these most of the time because the source of the problem is customers removing integrations. Reducing the logging level to info will give us enough context to figure out why an alert isn't triggering should a customer ask. Fixes SENTRY-19CK
https://api.github.com/repos/getsentry/sentry/pulls/61628
2023-12-12T20:36:13Z
2023-12-13T15:15:39Z
2023-12-13T15:15:39Z
2023-12-29T00:19:42Z
278
getsentry/sentry
44,693
Fixed False Positives on Polarsteps
diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json index 2fc99b562..6b881d6f5 100644 --- a/sherlock/resources/data.json +++ b/sherlock/resources/data.json @@ -15,9 +15,10 @@ }, "polarsteps": { "errorType": "status_code", - "url": "https://www.polarsteps.com/{}", + "url": "https://polarsteps.com/{}", "urlMain": "https://polarsteps.com/", - "username_claimed": "vidheeshnacode", + "urlProbe": "https://api.polarsteps.com/users/byusername/{}", + "username_claimed": "james", "username_unclaimed": "noonewouldeverusethis7" }, "2Dimensions": {
Using API https://api.polarsteps.com/users/byusername/{}
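A minimal sketch of the kind of status-code probe this entry enables; it assumes the endpoint answers with a successful status for existing users and an error status otherwise, which is what the `status_code` error type relies on:

```python
import requests

PROBE_URL = "https://api.polarsteps.com/users/byusername/{}"


def polarsteps_username_taken(username: str) -> bool:
    # Assumption: claimed usernames return a 2xx status, unclaimed ones an error status.
    response = requests.get(PROBE_URL.format(username))
    return response.ok
```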
https://api.github.com/repos/sherlock-project/sherlock/pulls/703
2020-08-07T10:41:50Z
2020-08-07T10:54:58Z
2020-08-07T10:54:58Z
2020-08-07T11:02:21Z
199
sherlock-project/sherlock
36,232
fix: remove useless token
diff --git a/.github/workflows/build_documentation.yml b/.github/workflows/build_documentation.yml index b4ef7415c7..6cbf1da6fc 100644 --- a/.github/workflows/build_documentation.yml +++ b/.github/workflows/build_documentation.yml @@ -17,5 +17,4 @@ jobs: path_to_docs: pytorch-image-models/hfdocs/source version_tag_suffix: "" secrets: - token: ${{ secrets.HUGGINGFACE_PUSH }} - hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }} \ No newline at end of file + hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
This token is not used by your action. Secret is removed from the repository.
https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1995
2023-10-19T11:28:34Z
2023-10-19T12:58:52Z
2023-10-19T12:58:52Z
2023-10-19T12:59:30Z
150
huggingface/pytorch-image-models
16,449
Fixed typo
diff --git a/gym/envs/toy_text/taxi.py b/gym/envs/toy_text/taxi.py index faf7d433db9..270b33f9921 100644 --- a/gym/envs/toy_text/taxi.py +++ b/gym/envs/toy_text/taxi.py @@ -24,7 +24,7 @@ class TaxiEnv(discrete.DiscreteEnv): There are four designated locations in the grid world indicated by R(ed), B(lue), G(reen), and Y(ellow). When the episode starts, the taxi starts off at a random square and the passenger is at a random location. The taxi drive to the passenger's location, pick up the passenger, drive to the passenger's destination (another one of the four specified locations), and then drop off the passenger. Once the passenger is dropped off, the episode ends. Observations: - There are 500 discrete actions since there are 25 taxi positions, 5 possible locations of the passenger (including the case when the passenger is the taxi), and 4 destination locations. + There are 500 discrete states since there are 25 taxi positions, 5 possible locations of the passenger (including the case when the passenger is the taxi), and 4 destination locations. Actions: There are 6 discrete deterministic actions:
Fixed the typo 'actions' to 'states', which is what is implied by context.
https://api.github.com/repos/openai/gym/pulls/1199
2018-10-14T12:25:16Z
2018-10-18T20:56:08Z
2018-10-18T20:56:08Z
2018-10-18T20:56:08Z
294
openai/gym
5,425
Add Heroku Platform API
diff --git a/README.md b/README.md index 0362497d2e..22d654a267 100644 --- a/README.md +++ b/README.md @@ -484,6 +484,7 @@ API | Description | Auth | HTTPS | CORS | | [Google Slides](https://developers.google.com/slides/api/reference/rest) | API to read, write, and format Google Slides presentations | `OAuth` | Yes | Unknown | | [Gorest](https://gorest.co.in/) | Online REST API for Testing and Prototyping | `OAuth` | Yes | Unknown | | [Hasura](https://hasura.io/opensource/) | GraphQL and REST API Engine with built in Authorization | `apiKey` | Yes | Yes | +| [Heroku](https://devcenter.heroku.com/articles/platform-api-reference/) | REST API to programmatically create apps, provision add-ons and perform other task on Heroku | `OAuth` | Yes | Yes | | [host-t.com](https://host-t.com) | Basic DNS query via HTTP GET request | No | Yes | No | | [Host.io](https://host.io) | Domains Data API for Developers | `apiKey` | Yes | Yes | | [HTTP2.Pro](https://http2.pro/doc/api) | Test endpoints for client and server HTTP/2 protocol support | No | Yes | Unknown |
<!-- Thank you for taking the time to work on a Pull Request for this project! --> <!-- To ensure your PR is dealt with swiftly please check the following: --> - [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md) - [x] My addition is ordered alphabetically - [x] My submission has a useful description - [x] The description does not have more than 100 characters - [x] The description does not end with punctuation - [x] Each table column is padded with one space on either side - [x] I have searched the repository for any relevant issues or pull requests - [x] Any category I am creating has the minimum requirement of 3 items - [x] All changes have been [squashed][squash-link] into a single commit [squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
https://api.github.com/repos/public-apis/public-apis/pulls/2788
2021-10-29T14:56:51Z
2021-10-30T01:07:59Z
2021-10-30T01:07:59Z
2021-10-30T06:15:51Z
295
public-apis/public-apis
35,212
DOC Ensures that sklearn.datasets._base.load_breast_cancer passes numpydoc validation
diff --git a/sklearn/datasets/_base.py b/sklearn/datasets/_base.py index 332d0e74b6818..9ac2e75a9b62a 100644 --- a/sklearn/datasets/_base.py +++ b/sklearn/datasets/_base.py @@ -660,6 +660,10 @@ def load_breast_cancer(*, return_X_y=False, as_frame=False): Features real, positive ================= ============== + The copy of UCI ML Breast Cancer Wisconsin (Diagnostic) dataset is + downloaded from: + https://goo.gl/U2Uwz2 + Read more in the :ref:`User Guide <breast_cancer_dataset>`. Parameters @@ -687,33 +691,34 @@ def load_breast_cancer(*, return_X_y=False, as_frame=False): data : {ndarray, dataframe} of shape (569, 30) The data matrix. If `as_frame=True`, `data` will be a pandas DataFrame. - target: {ndarray, Series} of shape (569,) + target : {ndarray, Series} of shape (569,) The classification target. If `as_frame=True`, `target` will be a pandas Series. - feature_names: list + feature_names : list The names of the dataset columns. - target_names: list + target_names : list The names of target classes. - frame: DataFrame of shape (569, 31) + frame : DataFrame of shape (569, 31) Only present when `as_frame=True`. DataFrame with `data` and `target`. .. versionadded:: 0.23 - DESCR: str + DESCR : str The full description of the dataset. - filename: str + filename : str The path to the location of the data. .. versionadded:: 0.20 (data, target) : tuple if ``return_X_y`` is True + A tuple of two ndarrays by default. The first contains a 2D ndarray of + shape (569, 30) with each row representing one sample and each column + representing the features. The second ndarray of shape (569,) contains + the target samples. If `as_frame=True`, both arrays are pandas objects, + i.e. `X` a dataframe and `y` a series. .. versionadded:: 0.18 - The copy of UCI ML Breast Cancer Wisconsin (Diagnostic) dataset is - downloaded from: - https://goo.gl/U2Uwz2 - Examples -------- Let's say you are interested in the samples 10, 50, and 85, and want to @@ -989,6 +994,7 @@ def load_diabetes(*, return_X_y=False, as_frame=False, scaled=True): Returns a tuple of two ndarray of shape (n_samples, n_features) A 2D array with each row representing one sample and each column representing the features and/or target of a given sample. + .. versionadded:: 0.18 """ data_filename = "diabetes_data_raw.csv.gz" diff --git a/sklearn/tests/test_docstrings.py b/sklearn/tests/test_docstrings.py index 50e1ad7376089..2ab8a2f3543fa 100644 --- a/sklearn/tests/test_docstrings.py +++ b/sklearn/tests/test_docstrings.py @@ -14,7 +14,6 @@ FUNCTION_DOCSTRING_IGNORE_LIST = [ "sklearn.covariance._shrunk_covariance.ledoit_wolf", "sklearn.covariance._shrunk_covariance.ledoit_wolf_shrinkage", - "sklearn.datasets._base.load_breast_cancer", "sklearn.datasets._base.load_digits", "sklearn.datasets._base.load_linnerud", "sklearn.datasets._base.load_sample_image",
#### Reference Issues/PRs Addresses #21350 #### What does this implement/fix? Explain your changes. 1. Removed sklearn.datasets._base.load_breast_cancer from FUNCTION_DOCSTRING_IGNORE_LIST 2. Added spaces before and after colon of return values in sklearn.datasets._base.load_breast_cancer. 3. Redefined the description of return value of (data, target) 4. Removed trailing whitespace from description of return value of (data, target) 5. Moved the link of UCI ML Breast Cancer Wisconsin (Diagnostic) dataset to below the breast cancer dataset table. #### Any other comments?
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/22346
2022-01-31T06:23:28Z
2022-02-01T09:39:41Z
2022-02-01T09:39:41Z
2022-02-08T12:33:20Z
896
scikit-learn/scikit-learn
46,876
[Tudou] Fix titles in playlists
diff --git a/src/you_get/extractors/tudou.py b/src/you_get/extractors/tudou.py index 6bbbc12bb5..8c434437c7 100644 --- a/src/you_get/extractors/tudou.py +++ b/src/you_get/extractors/tudou.py @@ -32,11 +32,11 @@ def tudou_download_by_id(id, title, output_dir = '.', merge = True, info_only = def tudou_download(url, output_dir = '.', merge = True, info_only = False, **kwargs): if 'acfun.tudou.com' in url: #wrong way! url = url.replace('acfun.tudou.com', 'www.acfun.tv') - you_get.extractors.acfun.acfun_download(url, output_dir, - merge, + you_get.extractors.acfun.acfun_download(url, output_dir, + merge, info_only) return #throw you back - + # Embedded player id = r1(r'http://www.tudou.com/v/([^/]+)/', url) if id: @@ -44,7 +44,7 @@ def tudou_download(url, output_dir = '.', merge = True, info_only = False, **kwa html = get_decoded_html(url) - title = r1(r'kw\s*[:=]\s*[\'\"]([^\n]+?)\'\s*\n', html).replace("\\'", "\'") + title = r1(r'\Wkw\s*[:=]\s*[\'\"]([^\n]+?)\'\s*\n', html).replace("\\'", "\'") assert title title = unescape_html(title)
![tumblr_inline_nufdyktwnf1tta2k2_500](https://cloud.githubusercontent.com/assets/342945/20640275/8fa686f0-b3d9-11e6-8ce7-d20655ec8c7a.gif) <!-- Reviewable:start --> --- This change is [<img src="https://reviewable.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/soimort/you-get/1528) <!-- Reviewable:end -->
https://api.github.com/repos/soimort/you-get/pulls/1528
2016-11-26T12:13:20Z
2016-11-26T15:29:40Z
2016-11-26T15:29:40Z
2016-11-26T15:30:00Z
382
soimort/you-get
21,452
GH-103092: isolate `pyexpat`
diff --git a/Modules/pyexpat.c b/Modules/pyexpat.c index c0fbd4d39f0096..92f594ab63ea2a 100644 --- a/Modules/pyexpat.c +++ b/Modules/pyexpat.c @@ -1076,13 +1076,31 @@ static struct PyMethodDef xmlparse_methods[] = { Make it as simple as possible. */ +static const unsigned char template_buffer[256] = + {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, + 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, + 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, + 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, + 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, + 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, + 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, + 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, + 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, + 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, + 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, + 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, + 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, + 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, + 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, + 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255}; + + static int PyUnknownEncodingHandler(void *encodingHandlerData, const XML_Char *name, XML_Encoding *info) { - static unsigned char template_buffer[256] = {0}; - PyObject* u; + PyObject *u; int i; const void *data; int kind; @@ -1090,12 +1108,7 @@ PyUnknownEncodingHandler(void *encodingHandlerData, if (PyErr_Occurred()) return XML_STATUS_ERROR; - if (template_buffer[1] == 0) { - for (i = 0; i < 256; i++) - template_buffer[i] = i; - } - - u = PyUnicode_Decode((char*) template_buffer, 256, name, "replace"); + u = PyUnicode_Decode((const char*) template_buffer, 256, name, "replace"); if (u == NULL || PyUnicode_READY(u)) { Py_XDECREF(u); return XML_STATUS_ERROR; diff --git a/Tools/c-analyzer/cpython/globals-to-fix.tsv b/Tools/c-analyzer/cpython/globals-to-fix.tsv index 9863acdade308b..e2b93a3a2ec274 100644 --- a/Tools/c-analyzer/cpython/globals-to-fix.tsv +++ b/Tools/c-analyzer/cpython/globals-to-fix.tsv @@ -458,7 +458,6 @@ Modules/_tkinter.c - trbInCmd - ## pre-allocated buffer Modules/nismodule.c nisproc_maplist_2 res - -Modules/pyexpat.c PyUnknownEncodingHandler template_buffer - ## other Include/datetime.h - PyDateTimeAPI -
<!-- Thanks for your contribution! Please read this comment in its entirety. It's quite important. # Pull Request title It should be in the following format: ``` gh-NNNNN: Summary of the changes made ``` Where: gh-NNNNN refers to the GitHub issue number. Most PRs will require an issue number. Trivial changes, like fixing a typo, do not need an issue. # Backport Pull Request title If this is a backport PR (PR made against branches other than `main`), please ensure that the PR title is in the following format: ``` [X.Y] <title from the original PR> (GH-NNNN) ``` Where: [X.Y] is the branch name, e.g. [3.6]. GH-NNNN refers to the PR number from `main`. --> <!-- gh-issue-number: gh-103092 --> * Issue: gh-103092 <!-- /gh-issue-number -->
https://api.github.com/repos/python/cpython/pulls/104506
2023-05-15T12:26:47Z
2023-05-16T20:03:01Z
2023-05-16T20:03:01Z
2023-05-16T21:31:11Z
1,282
python/cpython
4,120
Add answer to Spine & Leaf network question
diff --git a/README.md b/README.md index 5066e26b9..ca98f46da 100644 --- a/README.md +++ b/README.md @@ -622,6 +622,14 @@ Throughput. To have good throughput, the upload stream should be routed to an un <details> <summary>Explain Spine & Leaf</summary><br><b> +"Spine & Leaf" is a networking topology commonly used in data center environments to connect multiple switches and manage network traffic efficiently. It is also known as "spine-leaf" architecture or "leaf-spine" topology. This design provides high bandwidth, low latency, and scalability, making it ideal for modern data centers handling large volumes of data and traffic. + +Within a Spine & Leaf network there are two main tipology of switches: + +* Spine Switches: Spine switches are high-performance switches arranged in a spine layer. These switches act as the core of the network and are typically interconnected with each leaf switch. Each spine switch is connected to all the leaf switches in the data center. +* Leaf Switches: Leaf switches are connected to end devices like servers, storage arrays, and other networking equipment. Each leaf switch is connected to every spine switch in the data center. This creates a non-blocking, full-mesh connectivity between leaf and spine switches, ensuring any leaf switch can communicate with any other leaf switch with maximum throughput. + +The Spine & Leaf architecture has become increasingly popular in data centers due to its ability to handle the demands of modern cloud computing, virtualization, and big data applications, providing a scalable, high-performance, and reliable network infrastructure </b></details> <details>
Add an answer to the question `Explain Spine & Leaf` in the README.md file.
https://api.github.com/repos/bregman-arie/devops-exercises/pulls/410
2023-08-01T21:23:13Z
2024-02-02T13:23:36Z
2024-02-02T13:23:36Z
2024-02-02T13:23:36Z
361
bregman-arie/devops-exercises
17,671
Remove trailing \n from texttobase64.sh
diff --git a/scripts/texttobase64.sh b/scripts/texttobase64.sh index 791f21c..04c1feb 100755 --- a/scripts/texttobase64.sh +++ b/scripts/texttobase64.sh @@ -2,7 +2,7 @@ commentChar="#" while read p; do firstChar=${p:0:1} if [[ "$firstChar" != "$commentChar" && "$firstChar" != "" ]] ; then - echo $p | base64; + echo -n $p | base64; else echo $p; fi
Without this, every base64 string gets a trailing \n included in its encoded form (ref #86). Does _not_ update the `blns.base64.txt` file.
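The effect of the stray newline is easy to reproduce in Python (a generic illustration; `payload` is just an example string, not one from the list):

```python
import base64

# `echo` appends a newline, so the 0x0a byte ends up inside the encoded value.
print(base64.b64encode(b"payload\n"))  # b'cGF5bG9hZAo='

# `echo -n` emits only the string itself, which is what the encoded list needs.
print(base64.b64encode(b"payload"))    # b'cGF5bG9hZA=='
```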
https://api.github.com/repos/minimaxir/big-list-of-naughty-strings/pulls/94
2016-02-11T09:43:08Z
2016-02-11T15:04:01Z
2016-02-11T15:04:01Z
2016-05-01T15:57:15Z
141
minimaxir/big-list-of-naughty-strings
4,836
[docs; tiny] Clarify that Response.ok will *only* return True/False
diff --git a/requests/models.py b/requests/models.py index 4041cac3f0..ce4e284e64 100644 --- a/requests/models.py +++ b/requests/models.py @@ -686,11 +686,11 @@ def __iter__(self): @property def ok(self): - """Returns True if :attr:`status_code` is less than 400. + """Returns True if :attr:`status_code` is less than 400, False if not. This attribute checks if the status code of the response is between 400 and 600 to see if there was a client error or a server error. If - the status code, is between 200 and 400, this will return True. This + the status code is between 200 and 400, this will return True. This is **not** a check to see if the response code is ``200 OK``. """ try:
This is a tiny documentation fix, just to be super-duper explicit about the return value of `response.ok`. (I wanted to check I could use its value in a function that’s meant to return a boolean, but the docs weren’t clear about what a non-Truthy return value would be.)
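A minimal usage sketch of the clarified behaviour (the URL and helper name are placeholders):

```python
import requests


def resource_is_reachable(url: str) -> bool:
    # Response.ok is strictly True or False (status code < 400 vs. >= 400),
    # so it can be returned directly from a boolean-returning helper.
    response = requests.get(url)
    return response.ok
```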
https://api.github.com/repos/psf/requests/pulls/4390
2017-11-20T09:19:38Z
2017-11-20T13:18:01Z
2017-11-20T13:18:01Z
2021-09-04T00:06:40Z
215
psf/requests
32,429
Merge Augeas lens fix for quotes in directive arguments
diff --git a/letsencrypt-apache/letsencrypt_apache/augeas_lens/httpd.aug b/letsencrypt-apache/letsencrypt_apache/augeas_lens/httpd.aug index d665ea7a73f..0f2cb7b4551 100644 --- a/letsencrypt-apache/letsencrypt_apache/augeas_lens/httpd.aug +++ b/letsencrypt-apache/letsencrypt_apache/augeas_lens/httpd.aug @@ -59,7 +59,7 @@ let empty = Util.empty_dos let indent = Util.indent (* borrowed from shellvars.aug *) -let char_arg_dir = /([^\\ '"{\t\r\n]|[^ '"{\t\r\n]+[^\\ '"\t\r\n])|\\\\"|\\\\'/ +let char_arg_dir = /([^\\ '"{\t\r\n]|[^ '"{\t\r\n]+[^\\ \t\r\n])|\\\\"|\\\\'/ let char_arg_sec = /[^ '"\t\r\n>]|\\\\"|\\\\'/ let char_arg_wl = /([^\\ '"},\t\r\n]|[^ '"},\t\r\n]+[^\\ '"},\t\r\n])/ diff --git a/tests/apache-conf-files/passing/graphite-quote-1934.conf b/tests/apache-conf-files/passing/graphite-quote-1934.conf new file mode 100644 index 00000000000..2a8734b43f3 --- /dev/null +++ b/tests/apache-conf-files/passing/graphite-quote-1934.conf @@ -0,0 +1,21 @@ +<VirtualHost *:80> + + WSGIDaemonProcess _graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120 user=_graphite group=_graphite + WSGIProcessGroup _graphite + WSGIImportScript /usr/share/graphite-web/graphite.wsgi process-group=_graphite application-group=%{GLOBAL} + WSGIScriptAlias / /usr/share/graphite-web/graphite.wsgi + + Alias /content/ /usr/share/graphite-web/static/ + <Location "/content/"> + SetHandler None + </Location> + + ErrorLog ${APACHE_LOG_DIR}/graphite-web_error.log + + # Possible values include: debug, info, notice, warn, error, crit, + # alert, emerg. + LogLevel warn + + CustomLog ${APACHE_LOG_DIR}/graphite-web_access.log combined + +</VirtualHost>
Augeas fails to parse a directive argument with a quote inside (expecting either fully quoted or unquoted values). From hercules-team/augeas@d4d7ea9 Closes: #1934
https://api.github.com/repos/certbot/certbot/pulls/1945
2015-12-18T08:11:58Z
2015-12-18T09:04:14Z
2015-12-18T09:04:14Z
2016-05-06T19:21:58Z
569
certbot/certbot
1,183
Fix Python syntax error in ftp_send_receive.py
diff --git a/ftp_send_receive.py b/ftp_send_receive.py index caafc0c4d7..4dc761504c 100644 --- a/ftp_send_receive.py +++ b/ftp_send_receive.py @@ -9,7 +9,7 @@ """ from ftplib import FTP -ftp = FTP('xxx.xxx.x.x') """ Enter the ip address or the domain name here """ +ftp = FTP('xxx.xxx.x.x') # Enter the ip address or the domain name here ftp.login(user='username', passwd='password') ftp.cwd('/Enter the directory here/') @@ -33,4 +33,4 @@ def ReceiveFile(): def SendFile(): FileName = 'example.txt' """ Enter the name of the file """ ftp.storbinary('STOR ' + FileName, open(FileName, 'rb')) - ftp.quit() \ No newline at end of file + ftp.quit()
https://api.github.com/repos/geekcomputers/Python/pulls/454
2019-01-07T10:22:10Z
2019-01-08T06:10:55Z
2019-01-08T06:10:55Z
2019-01-08T07:18:32Z
206
geekcomputers/Python
31,743
Fix optional import crash and error
diff --git a/interpreter/core/computer/display/display.py b/interpreter/core/computer/display/display.py index 1f7a8b361..f21a1b5c4 100644 --- a/interpreter/core/computer/display/display.py +++ b/interpreter/core/computer/display/display.py @@ -12,7 +12,6 @@ import requests from ...utils.lazy_import import lazy_import from ..utils.recipient_utils import format_to_recipient -import cv2 from screeninfo import get_monitors # for getting info about connected monitors @@ -20,6 +19,7 @@ # from utils.get_active_window import get_active_window # Lazy import of optional packages +cv2 = lazy_import("cv2") pyautogui = lazy_import("pyautogui") np = lazy_import("numpy") plt = lazy_import("matplotlib.pyplot") diff --git a/interpreter/terminal_interface/profiles/defaults/01.py b/interpreter/terminal_interface/profiles/defaults/01.py index 30fda0aef..ec1c34351 100644 --- a/interpreter/terminal_interface/profiles/defaults/01.py +++ b/interpreter/terminal_interface/profiles/defaults/01.py @@ -216,7 +216,7 @@ def get_function_info(file_path): print("Attempting to start OS control anyway...\n\n") for pip_name in ["pip", "pip3"]: - command = f"{pip_name} install 'open-interpreter[os]'" + command = f"{pip_name} install open-interpreter[os]" interpreter.computer.run("shell", command, display=True) diff --git a/interpreter/terminal_interface/profiles/defaults/os.py b/interpreter/terminal_interface/profiles/defaults/os.py index 5d5998e22..f998cae1d 100644 --- a/interpreter/terminal_interface/profiles/defaults/os.py +++ b/interpreter/terminal_interface/profiles/defaults/os.py @@ -170,7 +170,7 @@ print("Attempting to start OS control anyway...\n\n") for pip_name in ["pip", "pip3"]: - command = f"{pip_name} install 'open-interpreter[os]'" + command = f"{pip_name} install open-interpreter[os]" interpreter.computer.run("shell", command, display=True)
### Describe the changes you have made: - Fixed a crash by converting a missed import cv2 in display.py to lazy_import. - Removed the single quotes around pip install open-interpreter[os]. ### Reference any relevant issues (e.g. "Fixes #000"): - ModuleNotFoundError: No module named 'cv2' - ERROR: Invalid requirement: "'open-interpreter[os]'" ### Pre-Submission Checklist (optional but appreciated): - [x] I have included relevant documentation updates (stored in /docs) - [x] I have read `docs/CONTRIBUTING.md` - [x] I have read `docs/ROADMAP.md` ### OS Tests (optional but appreciated): - [x] Tested on Windows - [ ] Tested on MacOS - [ ] Tested on Linux
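For readers unfamiliar with the pattern, here is a generic sketch of what a `lazy_import` helper does; this is an illustration only, not Open Interpreter's actual implementation:

```python
import importlib


class _LazyModule:
    """Stand-in object that defers the real import until first attribute access."""

    def __init__(self, name: str):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)


def lazy_import(name: str):
    return _LazyModule(name)


# Importing the file no longer fails on machines without OpenCV; an error is
# only raised if cv2 functionality is actually used.
cv2 = lazy_import("cv2")
```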
https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/1194
2024-04-10T21:03:59Z
2024-04-17T01:07:04Z
2024-04-17T01:07:04Z
2024-04-17T01:07:38Z
514
OpenInterpreter/open-interpreter
40,662
[extractor/viu] Add ViuOTTNew extractor
diff --git a/yt_dlp/extractor/_extractors.py b/yt_dlp/extractor/_extractors.py index 12ef50cc6bc..56949c8af60 100644 --- a/yt_dlp/extractor/_extractors.py +++ b/yt_dlp/extractor/_extractors.py @@ -2178,6 +2178,7 @@ ViuIE, ViuPlaylistIE, ViuOTTIE, + ViuOTTIndonesiaIE, ) from .vk import ( VKIE, diff --git a/yt_dlp/extractor/viu.py b/yt_dlp/extractor/viu.py index dd4cad7ba80..6f9af9f643b 100644 --- a/yt_dlp/extractor/viu.py +++ b/yt_dlp/extractor/viu.py @@ -9,9 +9,12 @@ from ..utils import ( ExtractorError, int_or_none, + remove_end, strip_or_none, + traverse_obj, try_get, smuggle_url, + unified_timestamp, unsmuggle_url, url_or_none, ) @@ -394,3 +397,146 @@ def download_playback(): 'formats': formats, 'subtitles': subtitles, } + + +class ViuOTTIndonesiaBaseIE(InfoExtractor): + _BASE_QUERY = { + 'ver': 1.0, + 'fmt': 'json', + 'aver': 5.0, + 'appver': 2.0, + 'appid': 'viu_desktop', + 'platform': 'desktop', + } + + _DEVICE_ID = str(uuid.uuid4()) + _SESSION_ID = str(uuid.uuid4()) + _TOKEN = None + + _HEADERS = { + 'x-session-id': _SESSION_ID, + 'x-client': 'browser' + } + + _AGE_RATINGS_MAPPER = { + 'ADULTS': 18, + 'teens': 13 + } + + def _real_initialize(self): + ViuOTTIndonesiaBaseIE._TOKEN = self._download_json( + 'https://um.viuapi.io/user/identity', None, + headers={'Content-type': 'application/json', **self._HEADERS}, + query={**self._BASE_QUERY, 'iid': self._DEVICE_ID}, + data=json.dumps({'deviceId': self._DEVICE_ID}).encode(), + note='Downloading token information')['token'] + + +class ViuOTTIndonesiaIE(ViuOTTIndonesiaBaseIE): + _VALID_URL = r'https?://www\.viu\.com/ott/\w+/\w+/all/video-[\w-]+-(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.viu.com/ott/id/id/all/video-japanese-drama-tv_shows-detective_conan_episode_793-1165863142?containerId=playlist-26271226', + 'info_dict': { + 'id': '1165863142', + 'ext': 'mp4', + 'episode_number': 793, + 'episode': 'Episode 793', + 'title': 'Detective Conan - Episode 793', + 'duration': 1476, + 'description': 'md5:b79d55345bc1e0217ece22616267c9a5', + 'thumbnail': 'https://vuclipi-a.akamaihd.net/p/cloudinary/h_171,w_304,dpr_1.5,f_auto,c_thumb,q_auto:low/1165863189/d-1', + 'upload_date': '20210101', + 'timestamp': 1609459200, + } + }, { + 'url': 'https://www.viu.com/ott/id/id/all/video-korean-reality-tv_shows-entertainment_weekly_episode_1622-1118617054', + 'info_dict': { + 'id': '1118617054', + 'ext': 'mp4', + 'episode_number': 1622, + 'episode': 'Episode 1622', + 'description': 'md5:6d68ca450004020113e9bf27ad99f0f8', + 'title': 'Entertainment Weekly - Episode 1622', + 'duration': 4729, + 'thumbnail': 'https://vuclipi-a.akamaihd.net/p/cloudinary/h_171,w_304,dpr_1.5,f_auto,c_thumb,q_auto:low/1120187848/d-1', + 'timestamp': 1420070400, + 'upload_date': '20150101', + 'cast': ['Shin Hyun-joon', 'Lee Da-Hee'] + } + }, { + # age-limit test + 'url': 'https://www.viu.com/ott/id/id/all/video-japanese-trailer-tv_shows-trailer_jujutsu_kaisen_ver_01-1166044219?containerId=playlist-26273140', + 'info_dict': { + 'id': '1166044219', + 'ext': 'mp4', + 'upload_date': '20200101', + 'timestamp': 1577836800, + 'title': 'Trailer \'Jujutsu Kaisen\' Ver.01', + 'duration': 92, + 'thumbnail': 'https://vuclipi-a.akamaihd.net/p/cloudinary/h_171,w_304,dpr_1.5,f_auto,c_thumb,q_auto:low/1166044240/d-1', + 'description': 'Trailer \'Jujutsu Kaisen\' Ver.01', + 'cast': ['Junya Enoki', ' Yûichi Nakamura', ' Yuma Uchida', 'Asami Seto'], + 'age_limit': 13, + } + }, { + # json ld metadata type equal to Movie instead of TVEpisodes + 'url': 
'https://www.viu.com/ott/id/id/all/video-japanese-animation-movies-demon_slayer_kimetsu_no_yaiba_the_movie_mugen_train-1165892707?containerId=1675060691786', + 'info_dict': { + 'id': '1165892707', + 'ext': 'mp4', + 'timestamp': 1577836800, + 'upload_date': '20200101', + 'title': 'Demon Slayer - Kimetsu no Yaiba - The Movie: Mugen Train', + 'age_limit': 13, + 'cast': 'count:9', + 'thumbnail': 'https://vuclipi-a.akamaihd.net/p/cloudinary/h_171,w_304,dpr_1.5,f_auto,c_thumb,q_auto:low/1165895279/d-1', + 'description': 'md5:1ce9c35a3aeab384085533f746c87469', + 'duration': 7021, + } + }] + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + + video_data = self._download_json( + f'https://um.viuapi.io/drm/v1/content/{display_id}', display_id, data=b'', + headers={'Authorization': ViuOTTIndonesiaBaseIE._TOKEN, **self._HEADERS, 'ccode': 'ID'}) + formats, subtitles = self._extract_m3u8_formats_and_subtitles(video_data['playUrl'], display_id) + + initial_state = self._search_json( + r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', + display_id)['content']['clipDetails'] + for key, url in initial_state.items(): + lang, ext = self._search_regex( + r'^subtitle_(?P<lang>[\w-]+)_(?P<ext>\w+)$', key, 'subtitle metadata', + default=(None, None), group=('lang', 'ext')) + if lang and ext: + subtitles.setdefault(lang, []).append({ + 'ext': ext, + 'url': url, + }) + + if ext == 'vtt': + subtitles[lang].append({ + 'ext': 'srt', + 'url': f'{remove_end(initial_state[key], "vtt")}srt', + }) + + episode = traverse_obj(list(filter( + lambda x: x.get('@type') in ('TVEpisode', 'Movie'), self._yield_json_ld(webpage, display_id))), 0) or {} + return { + 'id': display_id, + 'title': (traverse_obj(initial_state, 'title', 'display_title') + or episode.get('name')), + 'description': initial_state.get('description') or episode.get('description'), + 'duration': initial_state.get('duration'), + 'thumbnail': traverse_obj(episode, ('image', 'url')), + 'timestamp': unified_timestamp(episode.get('dateCreated')), + 'formats': formats, + 'subtitles': subtitles, + 'episode_number': (traverse_obj(initial_state, 'episode_no', 'episodeno', expected_type=int_or_none) + or int_or_none(episode.get('episodeNumber'))), + 'cast': traverse_obj(episode, ('actor', ..., 'name'), default=None), + 'age_limit': self._AGE_RATINGS_MAPPER.get(initial_state.get('internal_age_rating')) + }
**IMPORTANT**: PRs without the template will be CLOSED ### Description of your *pull request* and other information <!-- Explanation of your *pull request* in arbitrary form goes here. Please **make sure the description explains the purpose and effect** of your *pull request* and is worded well enough to be understood. Provide as much **context and examples** as possible --> This PR add site support for another type of ViuOTT that have been reported in #1757 . This extractor works in Indonesia, but i didn't test this extractor outside Indonesia. This PR only support extraction for single video without support for login. The checkbox in code license section is both checked on purpose as some part of this code is copied from @MinePlayersPE code (with minimal adjustment) (ref: https://github.com/yt-dlp/yt-dlp/issues/1757#issuecomment-1014949397) Fixes #1757 <details open><summary>Template</summary> <!-- OPEN is intentional --> <!-- # PLEASE FOLLOW THE GUIDE BELOW - You will be asked some questions, please read them **carefully** and answer honestly - Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x]) - Use *Preview* tab to see how your *pull request* will actually look like --> ### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [x] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) ### What is the purpose of your *pull request*? - [ ] Fix or improvement to an extractor (Make sure to add/update tests) - [x] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy)) - [ ] Core bug fix/improvement - [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes)) </details>
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/6099
2023-01-27T02:33:47Z
2023-02-17T02:57:52Z
2023-02-17T02:57:52Z
2023-02-17T02:57:53Z
2,178
yt-dlp/yt-dlp
8,283
✏ Fix typos and add rewording in docs
diff --git a/docs/en/docs/tutorial/body.md b/docs/en/docs/tutorial/body.md index 2afc979989058..24b2285968eb0 100644 --- a/docs/en/docs/tutorial/body.md +++ b/docs/en/docs/tutorial/body.md @@ -131,7 +131,7 @@ Inside of the function, you can access all the attributes of the model object di ## Request body + path parameters -You can declare path parameters and body requests at the same time. +You can declare path parameters and request body at the same time. **FastAPI** will recognize that the function parameters that match path parameters should be **taken from the path**, and that function parameters that are declared to be Pydantic models should be **taken from the request body**. diff --git a/docs/en/docs/tutorial/path-params-numeric-validations.md b/docs/en/docs/tutorial/path-params-numeric-validations.md index 1fcd9dbdbc685..5da69a21b2ef3 100644 --- a/docs/en/docs/tutorial/path-params-numeric-validations.md +++ b/docs/en/docs/tutorial/path-params-numeric-validations.md @@ -106,7 +106,7 @@ And you can also declare numeric validations: * `le`: `l`ess than or `e`qual !!! info - `Query`, `Path` and others you will see later subclasses of a common `Param` class (that you don't need to use). + `Query`, `Path`, and others you will see later are subclasses of a common `Param` class (that you don't need to use). And all of them share the same all these same parameters of additional validation and metadata you have seen. diff --git a/docs/en/docs/tutorial/query-params-str-validations.md b/docs/en/docs/tutorial/query-params-str-validations.md index 4edb4f597256a..8deccda0bd615 100644 --- a/docs/en/docs/tutorial/query-params-str-validations.md +++ b/docs/en/docs/tutorial/query-params-str-validations.md @@ -17,7 +17,7 @@ The query parameter `q` is of type `Optional[str]`, that means that it's of type ## Additional validation -We are going to enforce that even though `q` is optional, whenever it is provided, it **doesn't exceed a length of 50 characters**. +We are going to enforce that even though `q` is optional, whenever it is provided, **its length doesn't exceed 50 characters**. ### Import `Query`
Fix typos in docs. ## target - "Tutorial - User Guide" - "Request Body" - "Query Parameters and String Validations" - "Path Parameters and Numeric Validations"
https://api.github.com/repos/tiangolo/fastapi/pulls/2159
2020-10-10T22:50:55Z
2020-11-05T22:33:07Z
2020-11-05T22:33:07Z
2020-11-06T10:41:08Z
554
tiangolo/fastapi
23,033
DOC modified the graph for better readability
diff --git a/examples/classification/plot_lda.py b/examples/classification/plot_lda.py index 4213fc614a31a..322cc8bb4007c 100644 --- a/examples/classification/plot_lda.py +++ b/examples/classification/plot_lda.py @@ -47,8 +47,8 @@ def generate_data(n_samples, n_features): for _ in range(n_averages): X, y = generate_data(n_train, n_features) - clf1 = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y) - clf2 = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=None).fit(X, y) + clf1 = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=None).fit(X, y) + clf2 = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y) oa = OAS(store_precision=False, assume_centered=False) clf3 = LinearDiscriminantAnalysis(solver="lsqr", covariance_estimator=oa).fit( X, y @@ -69,23 +69,23 @@ def generate_data(n_samples, n_features): features_samples_ratio, acc_clf1, linewidth=2, - label="Linear Discriminant Analysis with Ledoit Wolf", - color="navy", - linestyle="dashed", + label="LDA", + color="gold", + linestyle="solid", ) plt.plot( features_samples_ratio, acc_clf2, linewidth=2, - label="Linear Discriminant Analysis", - color="gold", - linestyle="solid", + label="LDA with Ledoit Wolf", + color="navy", + linestyle="dashed", ) plt.plot( features_samples_ratio, acc_clf3, linewidth=2, - label="Linear Discriminant Analysis with OAS", + label="LDA with OAS", color="red", linestyle="dotted", ) @@ -93,12 +93,13 @@ def generate_data(n_samples, n_features): plt.xlabel("n_features / n_samples") plt.ylabel("Classification accuracy") -plt.legend(loc=3, prop={"size": 12}) +plt.legend(loc="lower left") +plt.ylim((0.65, 1.0)) plt.suptitle( - "Linear Discriminant Analysis vs. " + "LDA (Linear Discriminant Analysis) vs. " + "\n" - + "Shrinkage Linear Discriminant Analysis vs. " + + "LDA with Ledoit Wolf vs. " + "\n" - + "OAS Linear Discriminant Analysis (1 discriminative feature)" + + "LDA with OAS (1 discriminative feature)" ) plt.show()
<!-- Thanks for contributing a pull request! Please ensure you have taken a look at the contribution guidelines: https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md --> #### Reference Issues/PRs <!-- Example: Fixes #1234. See also #3456. Please use keywords (e.g., Fixes) to create link to the issues or pull requests you resolved, so that they will automatically be closed when your pull request is merged. See https://github.com/blog/1506-closing-issues-via-pull-requests --> #### What does this implement/fix? Explain your changes. 1. Changed the line orders to Linear Discriminant Analysis, with Ledoit Wolf, with OAS. 2. Moved the legend to the center right, so it does not block the lines. 3. Shortened the labels in the legend for better readability. #### Any other comments? <!-- Please be aware that we are a loose team of volunteers so patience is necessary; assistance handling other issues is very welcome. We value all user contributions, no matter how minor they are. If we are slow to review, either the pull request needs some benchmarking, tinkering, convincing, etc. or more likely the reviewers are simply busy. In either case, we ask for your understanding during the review process. For more information, see our FAQ on this topic: http://scikit-learn.org/dev/faq.html#why-is-my-pull-request-not-getting-any-attention. Thanks for contributing! -->
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/25644
2023-02-19T01:59:38Z
2023-02-20T20:23:21Z
2023-02-20T20:23:21Z
2023-02-20T20:23:22Z
624
scikit-learn/scikit-learn
46,788
Fix stray ellipsis
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index 2ad06fdef..0c1052d1e 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -3152,7 +3152,7 @@ Usually you forward the entire parameter (or parameter pack, using `...`) exactl Sometimes you may forward a composite parameter piecewise, each subobject once on every static control flow path: template<class PairLike> - inline auto test(PairLike&&... pairlike) + inline auto test(PairLike&& pairlike) { // ... f1(some, args, and, forward<PairLike>(pairlike).first); // forward .first
This PR removes a stray ellipsis where the author clearly didn't intend to use a parameter pack. The code is ill-formed without this fix.
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/2091
2023-06-23T16:24:44Z
2023-06-23T17:08:51Z
2023-06-23T17:08:51Z
2023-06-23T17:08:52Z
169
isocpp/CppCoreGuidelines
15,808
`process_mask_native()` cleanup
diff --git a/segment/predict.py b/segment/predict.py index 4d8458fd879..4ba9e46ddab 100644 --- a/segment/predict.py +++ b/segment/predict.py @@ -44,7 +44,7 @@ from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2, increment_path, non_max_suppression, print_args, scale_boxes, scale_segments, - strip_optimizer, xyxy2xywh) + strip_optimizer) from utils.plots import Annotator, colors, save_one_box from utils.segment.general import masks2segments, process_mask, process_mask_native from utils.torch_utils import select_device, smart_inference_mode @@ -161,10 +161,9 @@ def run( # Segments if save_txt: - segments = reversed(masks2segments(masks)) segments = [ scale_segments(im0.shape if retina_masks else im.shape[2:], x, im0.shape, normalize=True) - for x in segments] + for x in reversed(masks2segments(masks))] # Print results for c in det[:, 5].unique(): @@ -172,15 +171,17 @@ def run( s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string # Mask plotting - plot_img = torch.as_tensor(im0, dtype=torch.float16).to(device).permute(2, 0, 1).flip(0).contiguous() / 255. \ - if retina_masks else im[i] - annotator.masks(masks, colors=[colors(x, True) for x in det[:, 5]], im_gpu=plot_img) + annotator.masks( + masks, + colors=[colors(x, True) for x in det[:, 5]], + im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(device).permute(2, 0, 1).flip(0).contiguous() / + 255 if retina_masks else im[i]) # Write results for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])): if save_txt: # Write to file - segj = segments[j].reshape(-1) # (n,2) to (n*2) - line = (cls, *segj, conf) if save_conf else (cls, *segj) # label format + seg = segments[j].reshape(-1) # (n,2) to (n*2) + line = (cls, *seg, conf) if save_conf else (cls, *seg) # label format with open(f'{txt_path}.txt', 'a') as f: f.write(('%g ' * len(line)).rstrip() % line + '\n') diff --git a/segment/val.py b/segment/val.py index 48bf28d4bf4..368a058f9ce 100644 --- a/segment/val.py +++ b/segment/val.py @@ -48,7 +48,7 @@ from utils.metrics import ConfusionMatrix, box_iou from utils.plots import output_to_target, plot_val_study from utils.segment.dataloaders import create_dataloader -from utils.segment.general import mask_iou, process_mask, process_mask_upsample, scale_image +from utils.segment.general import mask_iou, process_mask, process_mask_native, scale_image from utils.segment.metrics import Metrics, ap_per_class_box_and_mask from utils.segment.plots import plot_images_and_masks from utils.torch_utils import de_parallel, select_device, smart_inference_mode @@ -160,7 +160,7 @@ def run( ): if save_json: check_requirements(['pycocotools']) - process = process_mask_upsample # more accurate + process = process_mask_native # more accurate else: process = process_mask # faster @@ -312,7 +312,7 @@ def run( pred_masks = torch.as_tensor(pred_masks, dtype=torch.uint8) if plots and batch_i < 3: - plot_masks.append(pred_masks[:15].cpu()) # filter top 15 to plot + plot_masks.append(pred_masks[:15]) # filter top 15 to plot # Save/log if save_txt: @@ -367,8 +367,8 @@ def run( # Save JSON if save_json and len(jdict): w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights - anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json') # annotations json - pred_json = str(save_dir / f"{w}_predictions.json") # predictions 
json + anno_json = str(Path('../datasets/coco/annotations/instances_val2017.json')) # annotations + pred_json = str(save_dir / f"{w}_predictions.json") # predictions LOGGER.info(f'\nEvaluating pycocotools mAP... saving {pred_json}...') with open(pred_json, 'w') as f: json.dump(jdict, f) diff --git a/utils/segment/general.py b/utils/segment/general.py index 6ebfd27bd9d..9da89453866 100644 --- a/utils/segment/general.py +++ b/utils/segment/general.py @@ -25,10 +25,10 @@ def crop_mask(masks, boxes): def process_mask_upsample(protos, masks_in, bboxes, shape): """ Crop after upsample. - proto_out: [mask_dim, mask_h, mask_w] - out_masks: [n, mask_dim], n is number of masks after nms + protos: [mask_dim, mask_h, mask_w] + masks_in: [n, mask_dim], n is number of masks after nms bboxes: [n, 4], n is number of masks after nms - shape:input_image_size, (h, w) + shape: input_image_size, (h, w) return: h, w, n """ @@ -67,25 +67,25 @@ def process_mask(protos, masks_in, bboxes, shape, upsample=False): return masks.gt_(0.5) -def process_mask_native(protos, masks_in, bboxes, dst_shape): +def process_mask_native(protos, masks_in, bboxes, shape): """ Crop after upsample. - proto_out: [mask_dim, mask_h, mask_w] - out_masks: [n, mask_dim], n is number of masks after nms + protos: [mask_dim, mask_h, mask_w] + masks_in: [n, mask_dim], n is number of masks after nms bboxes: [n, 4], n is number of masks after nms - shape:input_image_size, (h, w) + shape: input_image_size, (h, w) return: h, w, n """ c, mh, mw = protos.shape # CHW masks = (masks_in @ protos.float().view(c, -1)).sigmoid().view(-1, mh, mw) - gain = min(mh / dst_shape[0], mw / dst_shape[1]) # gain = old / new - pad = (mw - dst_shape[1] * gain) / 2, (mh - dst_shape[0] * gain) / 2 # wh padding + gain = min(mh / shape[0], mw / shape[1]) # gain = old / new + pad = (mw - shape[1] * gain) / 2, (mh - shape[0] * gain) / 2 # wh padding top, left = int(pad[1]), int(pad[0]) # y, x bottom, right = int(mh - pad[1]), int(mw - pad[0]) masks = masks[:, top:bottom, left:right] - masks = F.interpolate(masks[None], dst_shape, mode='bilinear', align_corners=False)[0] # CHW + masks = F.interpolate(masks[None], shape, mode='bilinear', align_corners=False)[0] # CHW masks = crop_mask(masks, bboxes) # CHW return masks.gt_(0.5) diff --git a/val.py b/val.py index 7c610e83a85..e84249ed383 100644 --- a/val.py +++ b/val.py @@ -302,8 +302,8 @@ def run( # Save JSON if save_json and len(jdict): w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights - anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json') # annotations json - pred_json = str(save_dir / f"{w}_predictions.json") # predictions json + anno_json = str(Path('../datasets/coco/annotations/instances_val2017.json')) # annotations + pred_json = str(save_dir / f"{w}_predictions.json") # predictions LOGGER.info(f'\nEvaluating pycocotools mAP... saving {pred_json}...') with open(pred_json, 'w') as f: json.dump(jdict, f)
<!-- Thank you for submitting a YOLOv5 🚀 Pull Request! We want to make contributing to YOLOv5 as easy and transparent as possible. A few tips to get you started: - Search existing YOLOv5 [PRs](https://github.com/ultralytics/yolov5/pull) to see if a similar PR already exists. - Link this PR to a YOLOv5 [issue](https://github.com/ultralytics/yolov5/issues) to help us understand what bug fix or feature is being implemented. - Provide before and after profiling/inference/training results to help us quantify the improvement your PR provides (if applicable). Please see our ✅ [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) for more details. --> ## 🛠️ PR Summary <sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub> ### 🌟 Summary Refactoring and simplification of segmentation mask processing in the YOLOv5 repository. ### 📊 Key Changes - Removal of `xyxy2xywh` import as it's no longer needed. - Reorganization of segment scaling code in `predict.py` for clarity. - Introduction of `process_mask_native` function in `val.py` and `general.py` as a replacement for `process_mask_upsample` for more accurate mask processing. - Use of native tensor operations for mask plotting in `predict.py` replacing explicit type conversion. - Changed the paths in `segment/val.py` and `val.py` for loading annotations JSON for cleaner code. ### 🎯 Purpose & Impact - To streamline the codebase, making it simpler and more maintainable. - To boost the accuracy of mask processing by leveraging native functions. - To reduce complexity and potential for type-related bugs in mask plotting. - These changes improve the reliability and efficiency of the code, which could enhance the performance for end users working with semantic segmentation tasks in YOLOv5. 🚀
https://api.github.com/repos/ultralytics/yolov5/pulls/10366
2022-12-01T20:30:41Z
2022-12-03T22:41:09Z
2022-12-03T22:41:09Z
2024-01-19T04:02:54Z
2,189
ultralytics/yolov5
24,954
Move misplaced line
diff --git a/src/transformers/models/mistral/modeling_mistral.py b/src/transformers/models/mistral/modeling_mistral.py index f4251b98304c4..fbba155f19d57 100644 --- a/src/transformers/models/mistral/modeling_mistral.py +++ b/src/transformers/models/mistral/modeling_mistral.py @@ -1176,11 +1176,11 @@ def forward( shift_logits = logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() # Flatten the tokens - loss_fct = CrossEntropyLoss() shift_logits = shift_logits.view(-1, self.config.vocab_size) shift_labels = shift_labels.view(-1) - # Enable model parallelism + # Ensure tensors are on the same device shift_labels = shift_labels.to(shift_logits.device) + loss_fct = CrossEntropyLoss() loss = loss_fct(shift_logits, shift_labels) if not return_dict:
Move misplaced line, improve code comment. No functional change, the loss_fct is not used earlier and did not match the code comment either. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @ArthurZucker and @younesbelkada
https://api.github.com/repos/huggingface/transformers/pulls/29117
2024-02-19T18:46:50Z
2024-02-20T01:24:48Z
2024-02-20T01:24:48Z
2024-02-20T01:41:32Z
231
huggingface/transformers
11,963
Avoid blocking event loop when using DataFusionAsyncHook by replacing sleep by asyncio.sleep
diff --git a/airflow/providers/google/cloud/hooks/datafusion.py b/airflow/providers/google/cloud/hooks/datafusion.py index 38b06602120d5..b0e44081c7b97 100644 --- a/airflow/providers/google/cloud/hooks/datafusion.py +++ b/airflow/providers/google/cloud/hooks/datafusion.py @@ -17,6 +17,7 @@ """This module contains Google DataFusion hook.""" from __future__ import annotations +import asyncio import json import os from time import monotonic, sleep @@ -528,7 +529,7 @@ async def _get_link(self, url: str, session): break except Exception as exc: if "404" in str(exc): - sleep(time_to_wait) + await asyncio.sleep(time_to_wait) else: raise if pipeline:
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Thank you for contributing! Please make sure that your code changes are covered with tests. And in case of new features or big changes remember to adjust the documentation. Feel free to ping committers for the review! In case of an existing issue, reference it using one of the following: closes: #ISSUE related: #ISSUE How to write a good git commit message: http://chris.beams.io/posts/git-commit/ --> <!-- Please keep an empty line above the dashes. --> --- **^ Add meaningful description above** Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#pull-request-guidelines)** for more information. In case of fundamental code changes, an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvement+Proposals)) is needed. In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x). In case of backwards incompatible changes please leave a note in a newsfragment file, named `{pr_number}.significant.rst` or `{issue_number}.significant.rst`, in [newsfragments](https://github.com/apache/airflow/tree/main/newsfragments).
https://api.github.com/repos/apache/airflow/pulls/33756
2023-08-26T01:20:07Z
2023-08-26T07:15:51Z
2023-08-26T07:15:50Z
2023-08-26T07:15:51Z
188
apache/airflow
14,418
DeepSpeed Ulysses tutorial
diff --git a/docs/_tutorials/ds-sequence.md b/docs/_tutorials/ds-sequence.md new file mode 100755 index 000000000000..815b99d6de35 --- /dev/null +++ b/docs/_tutorials/ds-sequence.md @@ -0,0 +1,117 @@ +--- +title: "Getting Started with DeepSpeed-Ulysses for Training Transformer Models with Extreme Long Sequences" +tags: training +--- + +In this tutorial we describe how to enable DeepSpeed-Ulysses. DeepSpeed-Ulysses is a simple but highly communication and memory efficient mechanism sequence parallelism approach for training of large transformer models with massive sequence lengths. It partitions input tensors along the sequence dimension and uses a communication-efficient all-2-all collective for distributed attention computations. Additionally, DeepSpeed-Ulysses incorporates advanced modeling and system optimizations, such as Flash attention, sparse attention, and ZeRO optimizer, to optimize both computational efficiency and memory usage. Training with DeepSpeed sequence parallelism allows both model size and sequence length to scale near indefinitely unbounded by single GPU memory limitation and at a high fraction of peak compute performance. Currently, DeepSpeed-Ulysses can handle sequences up to 1 million in length (10 times the size of a complete Harry Potter book!) on 64 A100 GPUs. Please read our [DeepSpeed-Ulysses blog](https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-ulysses) to learn more! + +## 1. Installation + +You will need to install DeepSpeed v0.10.2 or higher to use the DeepSpeed Sequence feature. Installing DeepSpeed is as simple as `pip install deepspeed`, [see more details](/tutorials/getting-started/). + + +## 2. How to use DeepSpeed-Ulysses in your application? + +Integrating DS-Seq into your training code is easy, and in this section we describe how to integrate DeepSpeed-Ulysses through our [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) code repo. + + +* **Replace attention module**: First, you need to update your attention module with DeepSpeed-Ulysses DistributedAttention. Here, we use the attention from [Megatron-DeepSpeed ](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/megatron/model/transformer.py) which is the causal attention used in GPT-3 like model training. Rewrite the attention block: + +```python +def __init__(): + ... + self.local_attn = CoreAttention(self.layer_number, config, self.attn_mask_type) + self.core_attention = local_attn + ... + +def forward(): + ... + context_layer = self.core_attention( + query_layer, key_layer, value_layer, attention_mask) + ... +``` + +with: + +```python +from deepspeed.sequence.layer import DistributedAttention + +def __init__(): + ... + self.local_attn = CoreAttention(self.layer_number, config, self.attn_mask_type) + self.dist_attn = DistributedAttention(self.local_attn, parallel_state.get_sequence_parallel_group()) + ... + +def forward(): + ... + context_layer = self.dist_attn(query_layer, key_layer, value_layer, attention_mask) + ... + +``` + +* **Add sequence parallel communication group**: Note that DistributedAttention takes `local_attn` and `sequence_parallel_group` as the parameters, where local_attn can be your original attention block. You also need to build the sequence parallel communication group and pass that to DistributedAttention. One way to do this is to build the sequence parallel group at the model initialization stage. + + +```python +def initialize_model_parallel( + ... + sequence_parallel_size, + ... +): + ...
+ num_sequence_parallel_groups: int = world_size // sequence_parallel_size + num_sequence_data_parallel_groups: int = world_size // sequence_parallel_size // data_parallel_size + ... + global _SEQUENCE_PARALLEL_GROUP + for i in range(num_sequence_parallel_groups): + ranks = range(i * sequence_parallel_size, + (i + 1) * sequence_parallel_size) + group = torch.distributed.new_group(ranks) + if rank in ranks: + _SEQUENCE_PARALLEL_GROUP = group + +def get_sequence_parallel_group(): + """Get the sequence parallel group the caller rank belongs to.""" + return _SEQUENCE_PARALLEL_GROUP + +``` + +In the Megatron-DeepSpeed example, to enable sequence parallelism, set the degree of parallelism using the --ds-sequence-parallel-size argument. You also need to ensure that the number of attention heads is divisible by this value. +We have prepared scripts for you to quickly get some examples for training GPT-3 like models with very long sequences: + +```shell +Megatron-DeepSpeed/examples_deepspeed/sequence_parallel$ bash ds_pretrain_gpt_1.3B_seq_parallel_32k.sh +Megatron-DeepSpeed/examples_deepspeed/sequence_parallel$ bash ds_pretrain_gpt_30B_seq_parallel_32k.sh +``` + +Please note that our sequence parallelism feature is currently incompatible with Megatron-LM's tensor or pipeline parallelism. + +## 3. Enabling DeepSpeed-Ulysses with FlashAttention? + +DeepSpeed's sequence parallelism can be combined with different types of attention implementations to further improve the memory and compute efficiency of long sequence training: + +`Classic attention`: attention mechanism implemented via PyTorch. + +`FlashAttention`: the implementation from [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness](https://arxiv.org/abs/2205.14135). Enabled by `--use-flash-attn`. + +`FlashAttention + Triton`: an implementation of FlashAttention in Triton (tested with triton==2.0.0.dev20221202). Enabled by `--use-flash-attn-triton`. + +For the best performance, we recommend using FlashAttention + Triton. Below are the installation steps. Note that FlashAttention is compatible only with NVIDIA Turing, Ampere, Ada, or Hopper GPUs. + +```bash +# install triton +git clone -b legacy-backend https://github.com/openai/triton +cd triton/python/ +pip install cmake +pip install . +``` + +```bash +# install +cd ${WORK_DIR} +git clone -b v1.0.4 https://github.com/HazyResearch/flash-attention +cd flash-attention +python setup.py install +``` + +You may also want to ensure your model configuration is compliant with FlashAttention's requirements. For instance, to achieve optimal performance, the head size should be divisible by 8. Refer to the document of FlashAttention for more details.
https://api.github.com/repos/microsoft/DeepSpeed/pulls/4200
2023-08-23T17:53:52Z
2023-08-23T18:10:34Z
2023-08-23T18:10:34Z
2023-08-23T18:10:34Z
1,512
microsoft/DeepSpeed
10,536
Dockerfile fonts
diff --git a/docker/dev/Dockerfile b/docker/dev/Dockerfile index f5c9b408b..964eda15f 100644 --- a/docker/dev/Dockerfile +++ b/docker/dev/Dockerfile @@ -8,6 +8,11 @@ RUN apk update && apk add --no-cache \ # install go package. RUN go get github.com/mingrammer/round +# install fonts +RUN apk --no-cache add msttcorefonts-installer fontconfig && \ + update-ms-fonts && \ + fc-cache -f + # add go bin to path. ENV PATH "$PATH:/root/go/bin" @@ -18,4 +23,4 @@ WORKDIR /usr/src/diagrams COPY . . # install python requirements. -RUN pip install black +RUN pip install black graphviz jinja2
The Dockerfile didn't work for me; it needed the graphviz and jinja2 Python packages and the fonts added. Fixes #230
https://api.github.com/repos/mingrammer/diagrams/pulls/231
2020-07-13T04:39:22Z
2020-07-13T04:50:40Z
2020-07-13T04:50:40Z
2020-07-13T04:50:41Z
187
mingrammer/diagrams
52,554
Support Python 3.8
diff --git a/g4f/Provider/Bing.py b/g4f/Provider/Bing.py index dccfc5b1d0..e03b413f02 100644 --- a/g4f/Provider/Bing.py +++ b/g4f/Provider/Bing.py @@ -408,14 +408,14 @@ def create_message(conversation: Conversation, prompt: str, tone: str, context: 'traceId': os.urandom(16).hex(), 'isStartOfSession': True, 'requestId': request_id, - 'message': Defaults.location | { + 'message': {**Defaults.location, **{ 'author': 'user', 'inputMethod': 'Keyboard', 'text': prompt, 'messageType': 'Chat', 'requestId': request_id, 'messageId': request_id, - }, + }}, "scenario": "SERP", 'tone': tone, 'spokenTextMode': 'None',
https://api.github.com/repos/xtekky/gpt4free/pulls/1390
2023-12-25T00:41:40Z
2023-12-27T15:54:42Z
2023-12-27T15:54:42Z
2023-12-27T15:54:45Z
217
xtekky/gpt4free
37,957
GitHub Workflows security hardening
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 2a1b9a4aa09..12e5426b14f 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -1,8 +1,12 @@ name: Build on: workflow_dispatch +permissions: + contents: read jobs: prepare: + permissions: + contents: write # for push_release runs-on: ubuntu-latest outputs: version_suffix: ${{ steps.version_suffix.outputs.version_suffix }} @@ -69,9 +73,6 @@ jobs: python pyinst.py --onedir (cd ./dist/yt-dlp_linux && zip -r ../yt-dlp_linux.zip .) python pyinst.py - - name: Get SHA2-SUMS - id: get_sha - run: | - name: Upload artifacts uses: actions/upload-artifact@v3 @@ -248,6 +249,8 @@ jobs: publish_release: + permissions: + contents: write # for action-gh-release runs-on: ubuntu-latest needs: [prepare, build_unix, build_windows, build_windows32, build_macos, build_macos_legacy] diff --git a/.github/workflows/core.yml b/.github/workflows/core.yml index d0e890b30ef..e1291862651 100644 --- a/.github/workflows/core.yml +++ b/.github/workflows/core.yml @@ -1,5 +1,8 @@ name: Core Tests on: [push, pull_request] +permissions: + contents: read + jobs: tests: name: Core Tests diff --git a/.github/workflows/download.yml b/.github/workflows/download.yml index cc2da62fae6..2b2387d4f13 100644 --- a/.github/workflows/download.yml +++ b/.github/workflows/download.yml @@ -1,5 +1,8 @@ name: Download Tests on: [push, pull_request] +permissions: + contents: read + jobs: quick: name: Quick Download Tests diff --git a/.github/workflows/quick-test.yml b/.github/workflows/quick-test.yml index 53b74e2c754..8a0ac98bb87 100644 --- a/.github/workflows/quick-test.yml +++ b/.github/workflows/quick-test.yml @@ -1,5 +1,8 @@ name: Quick Test on: [push, pull_request] +permissions: + contents: read + jobs: tests: name: Core Test
**IMPORTANT**: PRs without the template will be CLOSED ### Description of your *pull request* and other information </details> <!-- Explanation of your *pull request* in arbitrary form goes here. Please **make sure the description explains the purpose and effect** of your *pull request* and is worded well enough to be understood. Provide as much **context and examples** as possible --> This PR adds explicit [permissions section](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions) to workflows. This is a security best practice because by default workflows run with [extended set of permissions](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token) (except from `on: pull_request` [from external forks](https://securitylab.github.com/research/github-actions-preventing-pwn-requests/)). By specifying any permission explicitly all others are set to none. By using the principle of least privilege the damage a compromised workflow can do (because of an [injection](https://securitylab.github.com/research/github-actions-untrusted-input/) or compromised third party tool or action) is restricted. It is recommended to have [most strict permissions on the top level](https://github.com/ossf/scorecard/blob/main/docs/checks.md#token-permissions) and grant write permissions on [job level](https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs) case by case. Fixes # <details open><summary>Template</summary> <!-- OPEN is intentional --> <!-- # PLEASE FOLLOW THE GUIDE BELOW - You will be asked some questions, please read them **carefully** and answer honestly - Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x]) - Use *Preview* tab to see how your *pull request* will actually look like --> ### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) ### What is the purpose of your *pull request*? - [ ] Fix or improvement to an extractor (Make sure to add/update tests) - [ ] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy)) - [x] Core bug fix/improvement - [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/5410
2022-10-31T10:39:30Z
2022-11-10T01:41:07Z
2022-11-10T01:41:07Z
2022-11-10T01:41:07Z
601
yt-dlp/yt-dlp
7,420
don't dropout in eval mode
diff --git a/model.py b/model.py index 0858f80a3..287aadd34 100644 --- a/model.py +++ b/model.py @@ -69,7 +69,7 @@ def forward(self, x): # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T) if self.flash: # efficient attention using Flash Attention CUDA kernels - y = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=self.dropout, is_causal=True) + y = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=self.dropout if self.training else 0, is_causal=True) else: # manual implementation of attention att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
To give the model more chances to figure out if it's being trained or deployed :) (c.f. https://twitter.com/karpathy/status/1635049541534879745)
https://api.github.com/repos/karpathy/nanoGPT/pulls/240
2023-04-11T06:05:14Z
2023-04-13T05:43:59Z
2023-04-13T05:43:59Z
2023-04-13T18:01:40Z
216
karpathy/nanoGPT
40,973
release version 2.1.1
diff --git a/CHANGES.rst b/CHANGES.rst index 8214900cc5..9e740614d0 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -3,7 +3,7 @@ Version 2.1.1 ------------- -Unreleased +Released on 2022-03-30 - Set the minimum required version of importlib_metadata to 3.6.0, which is required on Python < 3.10. :issue:`4502`
https://api.github.com/repos/pallets/flask/pulls/4510
2022-03-30T21:28:41Z
2022-03-30T21:33:50Z
2022-03-30T21:33:50Z
2022-04-14T00:05:39Z
123
pallets/flask
20,356
Code cleanup and small improvements for zoom in inpaint
diff --git a/javascript/zoom.js b/javascript/zoom.js index e3fdcfb70..450a03472 100644 --- a/javascript/zoom.js +++ b/javascript/zoom.js @@ -1,18 +1,5 @@ onUiLoaded(async() => { // Helper functions - // Get active tab - - /** - * Waits for an element to be present in the DOM. - */ - const waitForElement = (id) => new Promise(resolve => { - const checkForElement = () => { - const element = document.querySelector(id); - if (element) return resolve(element); - setTimeout(checkForElement, 100); - }; - checkForElement(); - }); // Detect whether the element has a horizontal scroll bar function hasHorizontalScrollbar(element) { @@ -33,140 +20,40 @@ onUiLoaded(async() => { } } - // Check if hotkey is valid - function isValidHotkey(value) { - const specialKeys = ["Ctrl", "Alt", "Shift", "Disable"]; - return ( - (typeof value === "string" && - value.length === 1 && - /[a-z]/i.test(value)) || - specialKeys.includes(value) - ); - } - - // Normalize hotkey - function normalizeHotkey(hotkey) { - return hotkey.length === 1 ? "Key" + hotkey.toUpperCase() : hotkey; - } - - // Format hotkey for display - function formatHotkeyForDisplay(hotkey) { - return hotkey.startsWith("Key") ? hotkey.slice(3) : hotkey; - } - // Create hotkey configuration with the provided options function createHotkeyConfig(defaultHotkeysConfig) { const result = {}; // Resulting hotkey configuration - for (const key in defaultHotkeysConfig) { result[key] = defaultHotkeysConfig[key]; } - return result; } - // Disables functions in the config object based on the provided list of function names - function disableFunctions(config, disabledFunctions) { - // Bind the hasOwnProperty method to the functionMap object to avoid errors - const hasOwnProperty = - Object.prototype.hasOwnProperty.bind(functionMap); - - // Loop through the disabledFunctions array and disable the corresponding functions in the config object - disabledFunctions.forEach(funcName => { - if (hasOwnProperty(funcName)) { - const key = functionMap[funcName]; - config[key] = "disable"; - } - }); - - // Return the updated config object - return config; - } - - /** - * The restoreImgRedMask function displays a red mask around an image to indicate the aspect ratio. - * If the image display property is set to 'none', the mask breaks. To fix this, the function - * temporarily sets the display property to 'block' and then hides the mask again after 300 milliseconds - * to avoid breaking the canvas. Additionally, the function adjusts the mask to work correctly on - * very long images. 
- */ - function restoreImgRedMask(elements) { - const mainTabId = getTabId(elements); - - if (!mainTabId) return; - - const mainTab = gradioApp().querySelector(mainTabId); - const img = mainTab.querySelector("img"); - const imageARPreview = gradioApp().querySelector("#imageARPreview"); - - if (!img || !imageARPreview) return; - - imageARPreview.style.transform = ""; - if (parseFloat(mainTab.style.width) > 865) { - const transformString = mainTab.style.transform; - const scaleMatch = transformString.match( - /scale\(([-+]?[0-9]*\.?[0-9]+)\)/ - ); - let zoom = 1; // default zoom - - if (scaleMatch && scaleMatch[1]) { - zoom = Number(scaleMatch[1]); - } - - imageARPreview.style.transformOrigin = "0 0"; - imageARPreview.style.transform = `scale(${zoom})`; - } - - if (img.style.display !== "none") return; - - img.style.display = "block"; - - setTimeout(() => { - img.style.display = "none"; - }, 400); - } - // Default config const defaultHotkeysConfig = { - canvas_hotkey_zoom: "Alt", + canvas_hotkey_zoom: "Shift", canvas_hotkey_adjust: "Ctrl", + canvas_zoom_undo_extra_key: "Ctrl", + canvas_zoom_hotkey_undo: "KeyZ", canvas_hotkey_reset: "KeyR", canvas_hotkey_fullscreen: "KeyS", canvas_hotkey_move: "KeyF", - canvas_hotkey_overlap: "KeyO", - canvas_disabled_functions: [], canvas_show_tooltip: true, canvas_auto_expand: true, - canvas_blur_prompt: false, - }; - - const functionMap = { - "Zoom": "canvas_hotkey_zoom", - "Adjust brush size": "canvas_hotkey_adjust", - "Moving canvas": "canvas_hotkey_move", - "Fullscreen": "canvas_hotkey_fullscreen", - "Reset Zoom": "canvas_hotkey_reset", - "Overlap": "canvas_hotkey_overlap" + canvas_blur_prompt: true, }; // Loading the configuration from opts - const preHotkeysConfig = createHotkeyConfig( + const hotkeysConfig = createHotkeyConfig( defaultHotkeysConfig ); - // Disable functions that are not needed by the user - const hotkeysConfig = disableFunctions( - preHotkeysConfig, - preHotkeysConfig.canvas_disabled_functions - ); - let isMoving = false; - let mouseX, mouseY; let activeElement; const elemData = {}; - function applyZoomAndPan(elemId, isExtension = true) { + function applyZoomAndPan(elemId) { const targetElement = gradioApp().querySelector(elemId); if (!targetElement) { @@ -181,6 +68,7 @@ onUiLoaded(async() => { panX: 0, panY: 0 }; + let fullScreenMode = false; // Create tooltip @@ -211,44 +99,46 @@ onUiLoaded(async() => { action: "Adjust brush size", keySuffix: " + wheel" }, + {configKey: "canvas_zoom_hotkey_undo", action: "Undo last action", keyPrefix: `${hotkeysConfig.canvas_zoom_undo_extra_key} + ` }, {configKey: "canvas_hotkey_reset", action: "Reset zoom"}, { configKey: "canvas_hotkey_fullscreen", action: "Fullscreen mode" }, - {configKey: "canvas_hotkey_move", action: "Move canvas"}, - {configKey: "canvas_hotkey_overlap", action: "Overlap"} + {configKey: "canvas_hotkey_move", action: "Move canvas"} ]; - // Create hotkeys array with disabled property based on the config values - const hotkeys = hotkeysInfo.map(info => { + // Create hotkeys array based on the config values + const hotkeys = hotkeysInfo.map((info) => { const configValue = hotkeysConfig[info.configKey]; - const key = info.keySuffix ? 
- `${configValue}${info.keySuffix}` : - configValue.charAt(configValue.length - 1); + + let key = configValue.slice(-1); + + if (info.keySuffix) { + key = `${configValue}${info.keySuffix}`; + } + + if (info.keyPrefix && info.keyPrefix !== "None + ") { + key = `${info.keyPrefix}${configValue[3]}`; + } + return { - key, - action: info.action, - disabled: configValue === "disable" + key, + action: info.action, }; - }); - - for (const hotkey of hotkeys) { - if (hotkey.disabled) { - continue; - } - - const p = document.createElement("p"); - p.innerHTML = `<b>${hotkey.key}</b> - ${hotkey.action}`; - tooltipContent.appendChild(p); - } - - // Add information and content elements to the tooltip element - tooltip.appendChild(info); - tooltip.appendChild(tooltipContent); - - // Add a hint element to the target element - toolTipElemnt.appendChild(tooltip); + }); + + hotkeys + .forEach(hotkey => { + const p = document.createElement("p"); + p.innerHTML = `<b>${hotkey.key}</b> - ${hotkey.action}`; + tooltipContent.appendChild(p); + }); + + tooltip.append(info, tooltipContent); + + // Add a hint element to the target element + toolTipElemnt.appendChild(tooltip); } //Show tool tip if setting enable @@ -264,9 +154,7 @@ onUiLoaded(async() => { panY: 0 }; - if (isExtension) { - targetElement.style.overflow = "hidden"; - } + targetElement.style.overflow = "hidden"; targetElement.isZoomed = false; @@ -284,7 +172,7 @@ onUiLoaded(async() => { closeBtn.addEventListener("click", resetZoom); } - if (canvas && isExtension) { + if (canvas) { const parentElement = targetElement.closest('[id^="component-"]'); if ( canvas && @@ -297,16 +185,6 @@ onUiLoaded(async() => { } - if ( - canvas && - !isExtension && - parseFloat(canvas.style.width) > 865 && - parseFloat(targetElement.style.width) > 865 - ) { - fitToElement(); - return; - } - targetElement.style.width = ""; } @@ -372,12 +250,10 @@ onUiLoaded(async() => { targetElement.style.transformOrigin = "0 0"; targetElement.style.transform = `translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px) scale(${newZoomLevel})`; + targetElement.style.overflow = "visible"; toggleOverlap("on"); - if (isExtension) { - targetElement.style.overflow = "visible"; - } - + return newZoomLevel; } @@ -388,6 +264,7 @@ onUiLoaded(async() => { let zoomPosX, zoomPosY; let delta = 0.2; + if (elemData[elemId].zoomLevel > 7) { delta = 0.9; } else if (elemData[elemId].zoomLevel > 2) { @@ -421,12 +298,7 @@ onUiLoaded(async() => { let parentElement; - if (isExtension) { - parentElement = targetElement.closest('[id^="component-"]'); - } else { - parentElement = targetElement.parentElement; - } - + parentElement = targetElement.closest('[id^="component-"]'); // Get element and screen dimensions const elementWidth = targetElement.offsetWidth; @@ -455,6 +327,26 @@ onUiLoaded(async() => { toggleOverlap("off"); } + // Undo last action + function undoLastAction(e) { + let isCtrlPressed = isModifierKey(e, hotkeysConfig.canvas_zoom_undo_extra_key) + const isAuxButton = e.button >= 3; + + if (isAuxButton) { + isCtrlPressed = true + } else { + if (!isModifierKey(e, hotkeysConfig.canvas_zoom_undo_extra_key)) return; + } + + // Move undoBtn query outside the if statement to avoid unnecessary queries + const undoBtn = document.querySelector(`${activeElement} button[aria-label="Undo"]`); + + if ((isCtrlPressed) && undoBtn ) { + e.preventDefault(); + undoBtn.click(); + } + } + /** * This function fits the target element to the screen by calculating * the required scale and offsets. 
It also updates the global variables @@ -469,13 +361,8 @@ onUiLoaded(async() => { if (!canvas) return; - if (canvas.offsetWidth > 862 || isExtension) { - targetElement.style.width = (canvas.offsetWidth + 2) + "px"; - } - - if (isExtension) { - targetElement.style.overflow = "visible"; - } + targetElement.style.width = (canvas.offsetWidth + 2) + "px"; + targetElement.style.overflow = "visible"; if (fullScreenMode) { resetZoom(); @@ -549,11 +436,11 @@ onUiLoaded(async() => { } } - const hotkeyActions = { [hotkeysConfig.canvas_hotkey_reset]: resetZoom, [hotkeysConfig.canvas_hotkey_overlap]: toggleOverlap, - [hotkeysConfig.canvas_hotkey_fullscreen]: fitToScreen + [hotkeysConfig.canvas_hotkey_fullscreen]: fitToScreen, + [hotkeysConfig.canvas_zoom_hotkey_undo]: undoLastAction, }; const action = hotkeyActions[event.code]; @@ -597,26 +484,27 @@ onUiLoaded(async() => { } targetElement.addEventListener("mousemove", getMousePosition); + targetElement.addEventListener("auxclick", undoLastAction); //observers // Creating an observer with a callback function to handle DOM changes const observer = new MutationObserver((mutationsList, observer) => { for (let mutation of mutationsList) { - // If the style attribute of the canvas has changed, by observation it happens only when the picture changes - if (mutation.type === 'attributes' && mutation.attributeName === 'style' && - mutation.target.tagName.toLowerCase() === 'canvas') { - targetElement.isExpanded = false; - setTimeout(resetZoom, 10); - } + // If the style attribute of the canvas has changed, by observation it happens only when the picture changes + if (mutation.type === 'attributes' && mutation.attributeName === 'style' && + mutation.target.tagName.toLowerCase() === 'canvas') { + targetElement.isExpanded = false; + setTimeout(resetZoom, 10); + } } - }); - - // Apply auto expand if enabled - if (hotkeysConfig.canvas_auto_expand) { + }); + + // Apply auto expand if enabled + if (hotkeysConfig.canvas_auto_expand) { targetElement.addEventListener("mousemove", autoExpand); // Set up an observer to track attribute changes - observer.observe(targetElement, {attributes: true, childList: true, subtree: true}); - } + observer.observe(targetElement, { attributes: true, childList: true, subtree: true }); + } // Handle events only inside the targetElement let isKeyDownHandlerAttached = false; @@ -661,7 +549,7 @@ onUiLoaded(async() => { function handleMoveKeyDown(e) { // Disable key locks to make pasting from the buffer work correctly - if ((e.ctrlKey && e.code === 'KeyV') || (e.ctrlKey && event.code === 'KeyC') || e.code === "F5") { + if ((e.ctrlKey && e.code === 'KeyV') || (e.ctrlKey && e.code === 'KeyC') || e.code === "F5") { return; } @@ -713,11 +601,7 @@ onUiLoaded(async() => { if (isMoving && elemId === activeElement) { updatePanPosition(e.movementX, e.movementY); targetElement.style.pointerEvents = "none"; - - if (isExtension) { - targetElement.style.overflow = "visible"; - } - + targetElement.style.overflow = "visible"; } else { targetElement.style.pointerEvents = "auto"; } @@ -745,18 +629,13 @@ onUiLoaded(async() => { } } - if (isExtension) { - targetElement.addEventListener("mousemove", checkForOutBox); - } - + targetElement.addEventListener("mousemove", checkForOutBox); window.addEventListener('resize', (e) => { resetZoom(); - if (isExtension) { - targetElement.isExpanded = false; - targetElement.isZoomed = false; - } + targetElement.isExpanded = false; + targetElement.isZoomed = false; }); gradioApp().addEventListener("mousemove", handleMoveByKey);
I cleaned up the code that was left over from auto1111 and is no longer needed, made a couple of authoring changes, and added the ability to cancel the last action with Ctrl+Z. Before you accept this PR, I'd like you to check whether these changes fit, because what I've done reflects my vision, which may not match yours, and if I've done something that doesn't fit, I have no problem rolling that part back. ### New things I added the ability to undo the last action with Ctrl+Z, tested in various scenarios with no problems. https://github.com/lllyasviel/Fooocus/assets/22278673/91a74ff3-4ced-41c0-be9c-2a1cc22c3ae6 ### Changes 1) Changed the zoom button from **Alt** to **Shift**, as I think it is much more convenient and there is no problem with Firefox. Originally it was **Shift**, but because someone could use horizontal scrolling, I had to set **Alt**; in your interface there is no horizontal scrolling due to auto-expand. 2) Enabled blur_prompt; this option removes focus from the prompt when the user interacts with the canvas. Without this change an additional click is needed to remove the focus before working with the canvas. https://github.com/lllyasviel/Fooocus/assets/22278673/762fee95-a1e3-491e-9669-0173ba08b082
https://api.github.com/repos/lllyasviel/Fooocus/pulls/1432
2023-12-15T21:19:09Z
2023-12-15T21:27:15Z
2023-12-15T21:27:15Z
2023-12-15T21:40:02Z
3,764
lllyasviel/Fooocus
7,092
Add Water Content Measurement clusters
diff --git a/homeassistant/components/zha/core/channels/measurement.py b/homeassistant/components/zha/core/channels/measurement.py index 19ecc8a633562b..093c04245c46f1 100644 --- a/homeassistant/components/zha/core/channels/measurement.py +++ b/homeassistant/components/zha/core/channels/measurement.py @@ -62,6 +62,30 @@ class RelativeHumidity(ZigbeeChannel): ] +@registries.ZIGBEE_CHANNEL_REGISTRY.register(measurement.SoilMoisture.cluster_id) +class SoilMoisture(ZigbeeChannel): + """Soil Moisture measurement channel.""" + + REPORT_CONFIG = [ + { + "attr": "measured_value", + "config": (REPORT_CONFIG_MIN_INT, REPORT_CONFIG_MAX_INT, 100), + } + ] + + +@registries.ZIGBEE_CHANNEL_REGISTRY.register(measurement.LeafWetness.cluster_id) +class LeafWetness(ZigbeeChannel): + """Leaf Wetness measurement channel.""" + + REPORT_CONFIG = [ + { + "attr": "measured_value", + "config": (REPORT_CONFIG_MIN_INT, REPORT_CONFIG_MAX_INT, 100), + } + ] + + @registries.ZIGBEE_CHANNEL_REGISTRY.register( measurement.TemperatureMeasurement.cluster_id ) diff --git a/homeassistant/components/zha/core/const.py b/homeassistant/components/zha/core/const.py index dd6832e0d6b9ce..de29ac0f9f642a 100644 --- a/homeassistant/components/zha/core/const.py +++ b/homeassistant/components/zha/core/const.py @@ -84,6 +84,8 @@ CHANNEL_EVENT_RELAY = "event_relay" CHANNEL_FAN = "fan" CHANNEL_HUMIDITY = "humidity" +CHANNEL_SOIL_MOISTURE = "soil_moisture" +CHANNEL_LEAF_WETNESS = "leaf_wetness" CHANNEL_IAS_ACE = "ias_ace" CHANNEL_IAS_WD = "ias_wd" CHANNEL_IDENTIFY = "identify" diff --git a/homeassistant/components/zha/core/registries.py b/homeassistant/components/zha/core/registries.py index 8b2c4d11fbfec8..eeee0c5c629567 100644 --- a/homeassistant/components/zha/core/registries.py +++ b/homeassistant/components/zha/core/registries.py @@ -82,6 +82,8 @@ zcl.clusters.measurement.OccupancySensing.cluster_id: BINARY_SENSOR, zcl.clusters.measurement.PressureMeasurement.cluster_id: SENSOR, zcl.clusters.measurement.RelativeHumidity.cluster_id: SENSOR, + zcl.clusters.measurement.SoilMoisture.cluster_id: SENSOR, + zcl.clusters.measurement.LeafWetness.cluster_id: SENSOR, zcl.clusters.measurement.TemperatureMeasurement.cluster_id: SENSOR, zcl.clusters.security.IasZone.cluster_id: BINARY_SENSOR, } diff --git a/homeassistant/components/zha/sensor.py b/homeassistant/components/zha/sensor.py index 2281d5295bca07..365a7d8084fad3 100644 --- a/homeassistant/components/zha/sensor.py +++ b/homeassistant/components/zha/sensor.py @@ -61,9 +61,11 @@ CHANNEL_ELECTRICAL_MEASUREMENT, CHANNEL_HUMIDITY, CHANNEL_ILLUMINANCE, + CHANNEL_LEAF_WETNESS, CHANNEL_POWER_CONFIGURATION, CHANNEL_PRESSURE, CHANNEL_SMARTENERGY_METERING, + CHANNEL_SOIL_MOISTURE, CHANNEL_TEMPERATURE, CHANNEL_THERMOSTAT, DATA_ZHA, @@ -353,6 +355,28 @@ class Humidity(Sensor): _unit = PERCENTAGE +@STRICT_MATCH(channel_names=CHANNEL_SOIL_MOISTURE) +class SoilMoisture(Sensor): + """Soil Moisture sensor.""" + + SENSOR_ATTR = "measured_value" + _device_class = DEVICE_CLASS_HUMIDITY + _divisor = 100 + _state_class = STATE_CLASS_MEASUREMENT + _unit = PERCENTAGE + + +@STRICT_MATCH(channel_names=CHANNEL_LEAF_WETNESS) +class LeafWetness(Sensor): + """Leaf Wetness sensor.""" + + SENSOR_ATTR = "measured_value" + _device_class = DEVICE_CLASS_HUMIDITY + _divisor = 100 + _state_class = STATE_CLASS_MEASUREMENT + _unit = PERCENTAGE + + @STRICT_MATCH(channel_names=CHANNEL_ILLUMINANCE) class Illuminance(Sensor): """Illuminance Sensor.""" diff --git a/tests/components/zha/zha_devices_list.py 
b/tests/components/zha/zha_devices_list.py index e85f4c270d5766..2ef2e578dce0ad 100644 --- a/tests/components/zha/zha_devices_list.py +++ b/tests/components/zha/zha_devices_list.py @@ -3985,4 +3985,42 @@ SIG_MODEL: "XBee3", SIG_NODE_DESC: b"\x01@\x8e\x1e\x10R\xff\x00\x00,\xff\x00\x00", }, + { + DEV_SIG_DEV_NO: 99, + SIG_ENDPOINTS: { + 1: { + SIG_EP_TYPE: 0x000C, + DEV_SIG_EP_ID: 1, + SIG_EP_INPUT: [0x0000, 0x0001, 0x0402, 0x0408], + SIG_EP_OUTPUT: [], + SIG_EP_PROFILE: 260, + } + }, + DEV_SIG_ENTITIES: [ + "sensor.efektalab_ru_efekta_pws_77665544_power", + "sensor.efektalab_ru_efekta_pws_77665544_temperature", + "sensor.efektalab_ru_efekta_pws_77665544_soil_moisture", + ], + DEV_SIG_ENT_MAP: { + ("sensor", "00:11:22:33:44:55:66:77-1-1"): { + DEV_SIG_CHANNELS: ["power"], + DEV_SIG_ENT_MAP_CLASS: "Battery", + DEV_SIG_ENT_MAP_ID: "sensor.efektalab_ru_efekta_pws_77665544_power", + }, + ("sensor", "00:11:22:33:44:55:66:77-1-1026"): { + DEV_SIG_CHANNELS: ["temperature"], + DEV_SIG_ENT_MAP_CLASS: "Temperature", + DEV_SIG_ENT_MAP_ID: "sensor.efektalab_ru_efekta_pws_77665544_temperature", + }, + ("sensor", "00:11:22:33:44:55:66:77-1-1032"): { + DEV_SIG_CHANNELS: ["soil_moisture"], + DEV_SIG_ENT_MAP_CLASS: "SoilMoisture", + DEV_SIG_ENT_MAP_ID: "sensor.efektalab_ru_efekta_pws_77665544_soil_moisture", + }, + }, + DEV_SIG_EVT_CHANNELS: [], + SIG_MANUFACTURER: "efektalab.ru", + SIG_MODEL: "EFEKTA_PWS", + SIG_NODE_DESC: b"\x02@\x80\x00\x00P\xa0\x00\x00\x00\xa0\x00\x00", + }, ]
<!-- You are amazing! Thanks for contributing to our project! Please, DO NOT DELETE ANY TEXT from this template! (unless instructed). --> ## Proposed change Ads missing Water Content Measurement clusters: Soil Moisture and Leaf Wetness <!-- Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue in the additional information section. --> ## Type of change <!-- What type of change does your PR introduce to Home Assistant? NOTE: Please, check only 1! box! If your PR requires multiple boxes to be checked, you'll most likely need to split it into multiple PRs. This makes things easier and faster to code review. --> - [ ] Dependency upgrade - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New integration (thank you!) - [x] New feature (which adds functionality to an existing integration) - [ ] Breaking change (fix/feature causing existing functionality to break) - [ ] Code quality improvements to existing code or addition of tests ## Additional information <!-- Details are important, and help maintainers processing your PR. Please be sure to fill out additional details, if applicable. --> It requires https://github.com/zigpy/zigpy/pull/844 to be first pulled into the core. The Soil Moisture part was tested with https://github.com/smartboxchannel/Plant-Watering-Sensor-Zigbee, and seems to be working. Leaf Moisture was not tested, but according to the `zcl` specs it should behave in the very same way. I think that I have added new clusters in all the places they are supposed to be. Requires https://github.com/home-assistant/core/pull/59314 to be merged first. ## Checklist <!-- Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code. --> - [x] The code change is tested and works locally. - [x] Local tests pass. **Your PR cannot be merged unless tests pass** - [x] There is no commented out code in this PR. - [x] I have followed the [development checklist][dev-checklist] - [x] The code has been formatted using Black (`black --fast homeassistant tests`) - [ ] Tests have been added to verify that the new code works. If user exposed functionality or configuration variables are added/changed: - [ ] Documentation added/updated for [www.home-assistant.io][docs-repository] If the code communicates with devices, web services, or third-party tools: - [ ] The [manifest file][manifest-docs] has all fields filled out correctly. Updated and included derived files by running: `python3 -m script.hassfest`. - [ ] New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`. - [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description. - [ ] Untested files have been added to `.coveragerc`. The integration reached or maintains the following [Integration Quality Scale][quality-scale]: <!-- The Integration Quality Scale scores an integration on the code quality and user experience. Each level of the quality scale consists of a list of requirements. We highly recommend getting your integration scored! 
--> - [ ] No score or internal - [ ] 🥈 Silver - [ ] 🥇 Gold - [ ] 🏆 Platinum <!-- This project is very active and we have a high turnover of pull requests. Unfortunately, the number of incoming pull requests is higher than what our reviewers can review and merge so there is a long backlog of pull requests waiting for review. You can help here! By reviewing another pull request, you will help raise the code quality of that pull request and the final review will be faster. This way the general pace of pull request reviews will go up and your wait time will go down. When picking a pull request to review, try to choose one that hasn't yet been reviewed. Thanks for helping out! --> To help with the load of incoming pull requests: - [ ] I have reviewed two other [open pull requests][prs] in this repository. [prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone <!-- Thank you for contributing <3 Below, some useful links you could explore: --> [dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html [manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html [quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html [docs-repository]: https://github.com/home-assistant/home-assistant.io
https://api.github.com/repos/home-assistant/core/pulls/59300
2021-11-07T18:01:27Z
2021-11-08T18:13:36Z
2021-11-08T18:13:35Z
2021-11-09T19:07:14Z
1,683
home-assistant/core
39,037
Update bitsandbytes to 0.39.1
diff --git a/requirements.txt b/requirements.txt index b4ceb313f1..4a31685bf5 100644 --- a/requirements.txt +++ b/requirements.txt @@ -17,8 +17,8 @@ tqdm scipy transformers==4.30.2 git+https://github.com/huggingface/peft@03eb378eb914fbee709ff7c86ba5b1d033b89524 -bitsandbytes==0.39.0; platform_system != "Windows" -https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.39.0-py3-none-any.whl; platform_system == "Windows" +bitsandbytes==0.39.1; platform_system != "Windows" +https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.39.1-py3-none-win_amd64.whl; platform_system == "Windows" llama-cpp-python==0.1.64; platform_system != "Windows" https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.64/llama_cpp_python-0.1.64-cp310-cp310-win_amd64.whl; platform_system == "Windows" https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.2.2/auto_gptq-0.2.2+cu117-cp310-cp310-win_amd64.whl; platform_system == "Windows"
This update of bitsandbytes allows for older GPUs to use `load-in-4bit`. I have not included changes to documentation in this PR as I am not sure of the extent of the compatibility, but I know Pascal works. From bitsandbytes changelog: ``` Bug fixes: Fixed a bug where 8-bit models consumed twice the memory as expected after serialization Deprecated: Kepler binaries (GTX 700s and Tesla K40/K80) are not longer provided via pip and need to be compiled from source. Kepler support might be fully removed in the future. ```
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/2799
2023-06-21T18:01:07Z
2023-06-21T18:04:45Z
2023-06-21T18:04:45Z
2023-06-21T18:09:11Z
342
oobabooga/text-generation-webui
25,938
Delete Please give me pull request .md
diff --git a/Please give me pull request .md b/Please give me pull request .md deleted file mode 100644 index 8e667e826d..0000000000 --- a/Please give me pull request .md +++ /dev/null @@ -1,6 +0,0 @@ -fruits_to_colors = {"apple": "#ff0000", - "lemon": "#ffff00", - "orange": "#ffa500"} - -for key in fruits_to_colors: - print(key, fruits_to_colors[key])
Deleted spam files to keep the repo clean
https://api.github.com/repos/geekcomputers/Python/pulls/1632
2022-07-23T16:07:40Z
2022-07-24T17:47:10Z
2022-07-24T17:47:10Z
2022-07-24T17:47:10Z
120
geekcomputers/Python
31,505
New oasst export dataset loader
diff --git a/backend/import.py b/backend/import.py index 2880c34412..ecea434bab 100644 --- a/backend/import.py +++ b/backend/import.py @@ -16,7 +16,7 @@ from oasst_backend.models.payload_column_type import PayloadContainer from oasst_backend.prompt_repository import PromptRepository from oasst_backend.user_repository import UserRepository -from oasst_backend.utils.tree_export import ExportMessageNode, ExportMessageTree +from oasst_shared.schemas.export import ExportMessageNode, ExportMessageTree from sqlmodel import Session # well known id diff --git a/backend/oasst_backend/utils/tree_export.py b/backend/oasst_backend/utils/tree_export.py index 73beed9b29..06f7a0d55b 100644 --- a/backend/oasst_backend/utils/tree_export.py +++ b/backend/oasst_backend/utils/tree_export.py @@ -11,54 +11,25 @@ from fastapi.encoders import jsonable_encoder from oasst_backend.models import Message from oasst_backend.models.message_tree_state import State as TreeState -from pydantic import BaseModel - - -class LabelAvgValue(BaseModel): - value: float | None - count: int - - -LabelValues = dict[str, LabelAvgValue] - - -class ExportMessageNode(BaseModel): - message_id: str - parent_id: str | None - text: str - role: str - lang: str | None - review_count: int | None - review_result: bool | None - rank: int | None - synthetic: bool | None - model_name: str | None - emojis: dict[str, int] | None - replies: list[ExportMessageNode] | None - labels: LabelValues | None - - @staticmethod - def prep_message_export(message: Message, labels: Optional[LabelValues] = None) -> ExportMessageNode: - return ExportMessageNode( - message_id=str(message.id), - parent_id=str(message.parent_id) if message.parent_id else None, - text=str(message.payload.payload.text), - role=message.role, - lang=message.lang, - review_count=message.review_count, - review_result=message.review_result if message.review_result or message.review_count > 2 else None, - synthetic=message.synthetic, - model_name=message.model_name, - emojis=message.emojis, - rank=message.rank, - labels=labels, - ) - - -class ExportMessageTree(BaseModel): - message_tree_id: str - tree_state: Optional[str] - prompt: Optional[ExportMessageNode] +from oasst_shared.schemas.export import ExportMessageNode, ExportMessageTree, LabelValues + + +def prepare_export_message_node(message: Message, labels: Optional[LabelValues] = None) -> ExportMessageNode: + return ExportMessageNode( + message_id=str(message.id), + parent_id=str(message.parent_id) if message.parent_id else None, + text=str(message.payload.payload.text), + role=message.role, + lang=message.lang, + deleted=message.deleted, + review_count=message.review_count, + review_result=message.review_result if message.review_result or message.review_count > 2 else None, + synthetic=message.synthetic, + model_name=message.model_name, + emojis=message.emojis, + rank=message.rank, + labels=labels, + ) def build_export_tree( @@ -67,7 +38,7 @@ def build_export_tree( messages: list[Message], labels: Optional[dict[UUID, LabelValues]] = None, ) -> ExportMessageTree: - export_messages = [ExportMessageNode.prep_message_export(m, labels.get(m.id) if labels else None) for m in messages] + export_messages = [prepare_export_message_node(m, labels.get(m.id) if labels else None) for m in messages] messages_by_parent = defaultdict(list) for message in export_messages: @@ -133,7 +104,7 @@ def write_messages_to_file( with out_buff as f: for m in messages: - export_message = ExportMessageNode.prep_message_export(m, labels.get(m.id) if labels else None) + 
export_message = prepare_export_message_node(m, labels.get(m.id) if labels else None) file_data = jsonable_encoder(export_message, exclude_none=True) json.dump(file_data, f) diff --git a/model/model_training/.gitignore b/model/model_training/.gitignore index 5c5c0c1ccc..2b395ff822 100644 --- a/model/model_training/.gitignore +++ b/model/model_training/.gitignore @@ -1,2 +1,3 @@ .cache wandb +saved_model diff --git a/model/model_training/configs/config.yaml b/model/model_training/configs/config.yaml index dcb12e0e9e..5f4bf64f09 100644 --- a/model/model_training/configs/config.yaml +++ b/model/model_training/configs/config.yaml @@ -54,11 +54,12 @@ defaults: oa_dataset_only: datasets: - - oa_private: - split: sft - val_split: 0.0 - fraction: 1 - file: 2023-02-10_oasst_prod.jsonl + - oasst_export: + lang: "en,es,de,fr" + top_k: 2 + input_file_path: 2023-02-19_oasst_ready_with_spam_deleted.jsonl.gz + num_train_epochs: 10 + save_steps: 2000 pythia: learning_rate: 8e-6 @@ -72,6 +73,17 @@ pythia: per_device_eval_batch_size: 4 output_dir: pythia_model +pythia-1B: + learning_rate: 8e-6 + model_name: EleutherAI/pythia-1b-deduped + weight_decay: 0.01 + max_length: 520 + warmup_steps: 1000 + gradient_checkpointing: false + gradient_accumulation_steps: 2 + per_device_train_batch_size: 16 + per_device_eval_batch_size: 32 + galactica-125m: learning_rate: 5e-5 model_name: facebook/galactica-125m diff --git a/model/model_training/custom_datasets/__init__.py b/model/model_training/custom_datasets/__init__.py index b3d2cc7fa1..b8bbdcad37 100644 --- a/model/model_training/custom_datasets/__init__.py +++ b/model/model_training/custom_datasets/__init__.py @@ -1,6 +1,7 @@ """ High level functions for model training """ +from custom_datasets.oasst_dataset import load_oasst_export from custom_datasets.prompt_dialogue import OAPrivate, PrivateInstructionTuning from custom_datasets.qa_datasets import SODA, JokeExplaination, QADataset, SODADialogue, TranslatedQA, WebGPT from custom_datasets.summarization import SummarizationDataset @@ -82,6 +83,8 @@ def get_one_dataset(conf, dataset_name, val_split=0.2, data_path=None, mode="sft dataset = TranslatedQA(data_path) elif dataset_name == "oa_private": dataset = OAPrivate(data_path, **kwargs) + elif dataset_name == "oasst_export": + train, eval = load_oasst_export(data_path=data_path, val_split=val_split, **kwargs) else: raise ValueError(f"Unknown dataset {dataset_name}") diff --git a/model/model_training/custom_datasets/oasst_dataset.py b/model/model_training/custom_datasets/oasst_dataset.py new file mode 100644 index 0000000000..ada2413331 --- /dev/null +++ b/model/model_training/custom_datasets/oasst_dataset.py @@ -0,0 +1,124 @@ +import gzip +import json +from pathlib import Path +from typing import Callable, Optional + +import pydantic +from custom_datasets.formatting import format_pair +from oasst_shared.schemas.export import ExportMessageNode, ExportMessageTree +from torch import Generator, default_generator +from torch.utils.data import Dataset, random_split + + +def _visit_threads_depth_first( + node: ExportMessageNode, + visitor: Callable[[list[ExportMessageNode]], None], + predicate: Optional[Callable[[list[ExportMessageNode]], bool]] = None, + parents: list[ExportMessageNode] = None, +): + parents = parents or [] + if not node: + return + thread = parents + [node] + if predicate is None or predicate(thread): + visitor(thread) + if node.replies: + parents = thread + for c in node.replies: + _visit_threads_depth_first(node=c, visitor=visitor, predicate=predicate, 
parents=parents) + + +class ListDataset(Dataset): + def __init__(self, data: list): + super().__init__() + self.data = data + + def __len__(self): + return len(self.data) + + def __getitem__(self, index): + return self.data[index] + + +def load_oasst_export( + input_file_path: str | Path, + val_split: float = 0.2, + lang: str = "en", + top_k: Optional[int] = None, + generator: Optional[Generator] = default_generator, + data_path: str | Path = None, +) -> tuple[ListDataset, ListDataset]: + lang_codes = lang.split(",") + + if not isinstance(input_file_path, Path): + input_file_path = Path(input_file_path) + if not input_file_path.is_absolute() and data_path: + if not isinstance(data_path, Path): + data_path = Path(data_path) + input_file_path = data_path / input_file_path + + if input_file_path.suffix == ".gz": + file_in = gzip.open(str(input_file_path), mode="tr", encoding="UTF-8") + else: + file_in = input_file_path.open("r", encoding="UTF-8") + + threads_per_tree = [] + + with file_in: + # read one message tree per line + for line in file_in: + dict_tree = json.loads(line) + + # validate data + tree: ExportMessageTree = pydantic.parse_obj_as(ExportMessageTree, dict_tree) + + if ( + tree.tree_state != "ready_for_export" + or not tree.prompt.review_result + or tree.prompt.lang not in lang_codes + ): + continue + + # extract all threads up to last asssitant reply + threads: list[list[ExportMessageNode]] = [] + + def thread_filter(thread: list[ExportMessageNode]) -> bool: + if any(m.deleted for m in thread): + return False + + if top_k is not None: + for i, m in enumerate(thread): + if m.role == "assistant": + if m.rank is None: + if len(thread[i - 1].replies) > 1: + return False + elif m.rank >= top_k: + return False + return True + + def leaf_filter(thread: list[ExportMessageNode]) -> bool: + return ( + len(thread) > 1 + and not thread[-1].replies + and (thread[-1].role == "assistant" or thread[-2].replies[0] == thread[-1]) + and thread_filter(thread) + ) + + _visit_threads_depth_first(tree.prompt, threads.append, leaf_filter) + for t in threads: + if t[-1].role == "prompter": + t.pop() + + threads_per_tree.append(threads) + + # split on tree basis, messages from same tree must not end up in different splits + trees = ListDataset(threads_per_tree) + splits = random_split(trees, lengths=[1.0 - val_split, val_split], generator=generator) + + def flatten(d: ListDataset) -> ListDataset: + return ListDataset([format_pair([m.text for m in t]) for ts in d for t in ts]) + + train = flatten(splits[0]) + val = flatten(splits[1]) + + return train, val diff --git a/model/model_training/tests/test_oasst_dataset.py b/model/model_training/tests/test_oasst_dataset.py new file mode 100644 index 0000000000..5101d14410 --- /dev/null +++ b/model/model_training/tests/test_oasst_dataset.py @@ -0,0 +1,18 @@ +from argparse import Namespace + +from custom_datasets import get_one_dataset + + +def test_load_oasst_export_dataset(): + config = Namespace( + cache_dir=".cache", + ) + kwargs = { + "lang": "en,es,de,fr", + "top_k": 2, + "input_file_path": "2023-02-19_oasst_ready_with_spam_deleted.jsonl.gz", + } + print("taeiae") + train, val = get_one_dataset(conf=config, dataset_name="oasst_export", **kwargs) + assert len(train) > 9000 + assert len(val) > 2000 diff --git a/oasst-shared/oasst_shared/schemas/export.py b/oasst-shared/oasst_shared/schemas/export.py new file mode 100644 index 0000000000..ac2aa14403 --- /dev/null +++ b/oasst-shared/oasst_shared/schemas/export.py @@ -0,0 +1,36 @@ +from __future__ import 
annotations + +from typing import Optional + +from pydantic import BaseModel + + +class LabelAvgValue(BaseModel): + value: float | None + count: int + + +LabelValues = dict[str, LabelAvgValue] + + +class ExportMessageNode(BaseModel): + message_id: str + parent_id: str | None + text: str + role: str + lang: str | None + review_count: int | None + review_result: bool | None + deleted: bool | None + rank: int | None + synthetic: bool | None + model_name: str | None + emojis: dict[str, int] | None + replies: list[ExportMessageNode] | None + labels: LabelValues | None + + +class ExportMessageTree(BaseModel): + message_tree_id: str + tree_state: Optional[str] + prompt: Optional[ExportMessageNode]
- support jsonl and jsonl.gz input files - use oasst export classes for parsing (classes reside now in oasst_shared) - extract all usable leaf-nodes (last assistant replies of conversation threads) - allow filtering by language and top_k (ranking results) - split into train/eval (while ensuring that no conversation threads of the same tree end up both in train & eval)
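(Editorial note: the tree-level train/eval split described in the body above can be illustrated with a minimal, hypothetical sketch. It uses plain `random` instead of the PR's `torch.utils.data.random_split`, and assumes a `threads_per_tree` structure like the one built in the diff; it is not the PR's actual code.)

```python
import random

def split_by_tree(threads_per_tree, val_split=0.2, seed=0):
    # Each element of threads_per_tree holds all threads from one message tree,
    # so shuffling and splitting *tree indices* guarantees that no tree
    # contributes threads to both the train and the eval split.
    indices = list(range(len(threads_per_tree)))
    random.Random(seed).shuffle(indices)
    n_val = int(len(indices) * val_split)
    val_idx = set(indices[:n_val])
    train, val = [], []
    for i, threads in enumerate(threads_per_tree):
        (val if i in val_idx else train).extend(threads)
    return train, val

# toy usage: three trees with two threads each
trees = [["t%da" % i, "t%db" % i] for i in range(3)]
train, val = split_by_tree(trees, val_split=0.34)
assert not set(train) & set(val)  # no thread appears in both splits
```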
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/1854
2023-02-24T22:52:15Z
2023-02-26T11:56:45Z
2023-02-26T11:56:45Z
2023-02-26T11:56:46Z
3,362
LAION-AI/Open-Assistant
37,601
Break long lines to increase readability in files
diff --git a/scripts/python/__pycache__/cpplint.cpython-36.pyc b/scripts/python/__pycache__/cpplint.cpython-36.pyc new file mode 100644 index 000000000..270ee6d92 Binary files /dev/null and b/scripts/python/__pycache__/cpplint.cpython-36.pyc differ diff --git a/scripts/python/cpplint_wrap.py b/scripts/python/cpplint_wrap.py index 61ffa9174..c94a7bb1a 100644 --- a/scripts/python/cpplint_wrap.py +++ b/scripts/python/cpplint_wrap.py @@ -3,7 +3,12 @@ import sys def main(): - FILTERS='cpplint --verbose=0 --linelength=100 --filter=-legal/copyright,-build/include_order,-build/c++11,-build/namespaces,-build/class,-build/include,-build/include_subdir,-readability/inheritance,-readability/function,-readability/casting,-readability/namespace,-readability/alt_tokens,-readability/braces,-readability/fn_size,-whitespace/comments,-whitespace/braces,-whitespace/empty_loop_body,-whitespace/indent,-whitespace/newline,-runtime/explicit,-runtime/arrays,-runtime/int,-runtime/references,-runtime/string,-runtime/operator,-runtime/printf'.split(' ') + FILTERS=('cpplint --verbose=0 --linelength=100 --filter=-legal/copyright,-build/include_order,' + '-build/c++11,-build/namespaces,-build/class,-build/include,-build/include_subdir,-readability/inheritance,' + '-readability/function,-readability/casting,-readability/namespace,-readability/alt_tokens,' + '-readability/braces,-readability/fn_size,-whitespace/comments,-whitespace/braces,-whitespace/empty_loop_body,' + '-whitespace/indent,-whitespace/newline,-runtime/explicit,-runtime/arrays,-runtime/int,-runtime/references,' + '-runtime/string,-runtime/operator,-runtime/printf').split(' ') result = False files = sys.argv[1:] diff --git a/scripts/python/md-split.py b/scripts/python/md-split.py index befc7b826..a1ba1aa7d 100755 --- a/scripts/python/md-split.py +++ b/scripts/python/md-split.py @@ -18,15 +18,19 @@ def main(): """ This script ended up ugly, so in case somebody wants to reimplement, here is the spec that grew by time. - What it should do it take a markdown file, and split it into more files. A targetfile should have the same number of lines as the original, with source code snippets and markdown non-words removed, for spell-checking. + What it should do it take a markdown file, and split it into more files. A targetfile should have the same + number of lines as the original, with source code snippets and markdown non-words removed, for spell-checking. Each code snipped should go into a separate file in codedir. - Each code snipped should get additional C++ code around it to help compile the line in context, with some heuristic guessing of what is needed around. The wrapping code should have a token in each line allowing other tools to filter out these lines + Each code snipped should get additional C++ code around it to help compile the line in context, with + some heuristic guessing of what is needed around. The wrapping code should have a token in each line allowing + other tools to filter out these lines The name for each file chosen consists os the section id in the markdown document, a counter for the snippet inside the section. - Snippets without code (only comments) or containing lines starting with ??? should not yeld files, but the counter for naming snippets should still increment. + Snippets without code (only comments) or containing lines starting with ??? should not yeld files, + but the counter for naming snippets should still increment. 
""" parser = argparse.ArgumentParser(description='Split md file into plain text and code blocks') parser.add_argument('sourcefile',
There were some very long lines in Python script files (comments and cpplint execution command), really painful to read, so I decided to break them into several lines in order to increase readability.
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/1057
2017-10-11T07:12:03Z
2017-10-23T18:14:43Z
2017-10-23T18:14:43Z
2017-10-31T15:29:33Z
915
isocpp/CppCoreGuidelines
15,876
Misc fixes
diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py index decd07596a..a39ce5ce57 100644 --- a/mitmproxy/addons/script.py +++ b/mitmproxy/addons/script.py @@ -148,17 +148,25 @@ def running(self): @command.command("script.run") def script_run(self, flows: typing.Sequence[flow.Flow], path: mtypes.Path) -> None: """ - Run a script on the specified flows. The script is loaded with - default options, and all lifecycle events for each flow are - simulated. + Run a script on the specified flows. The script is configured with + the current options and all lifecycle events for each flow are + simulated. Note that the load event is not invoked. """ - try: - s = Script(path, False) - for f in flows: - for evt, arg in eventsequence.iterate(f): - ctx.master.addons.invoke_addon(s, evt, arg) - except exceptions.OptionsError as e: - script_error_handler(path, e, msg=str(e)) + if not os.path.isfile(path): + ctx.log.error('No such script: %s' % path) + return + mod = load_script(path) + if mod: + with addonmanager.safecall(): + ctx.master.addons.invoke_addon(mod, "running") + ctx.master.addons.invoke_addon( + mod, + "configure", + ctx.options.keys() + ) + for f in flows: + for evt, arg in eventsequence.iterate(f): + ctx.master.addons.invoke_addon(mod, evt, arg) def configure(self, updated): if "scripts" in updated: diff --git a/mitmproxy/controller.py b/mitmproxy/controller.py index 582a6683f8..6e2190663d 100644 --- a/mitmproxy/controller.py +++ b/mitmproxy/controller.py @@ -113,6 +113,8 @@ def ack(self, force=False): def kill(self, force=False): self.send(exceptions.Kill, force) + if self._state == "taken": + self.commit() def send(self, msg, force=False): if self.state not in {"start", "taken"}: diff --git a/mitmproxy/net/tcp.py b/mitmproxy/net/tcp.py index 18429daa6b..2496d47c7b 100644 --- a/mitmproxy/net/tcp.py +++ b/mitmproxy/net/tcp.py @@ -1,4 +1,5 @@ import os +import errno import select import socket import sys @@ -585,6 +586,13 @@ def connection_thread(self, connection, client_address): with self.handler_counter: try: self.handle_client_connection(connection, client_address) + except OSError as e: # pragma: no cover + # This catches situations where the underlying connection is + # closed beneath us. Syscalls on the connection object at this + # point returns EINVAL. If this happens, we close the socket and + # move on. 
+ if not e.errno == errno.EINVAL: + raise except: self.handle_error(connection, client_address) finally: diff --git a/mitmproxy/tools/console/flowlist.py b/mitmproxy/tools/console/flowlist.py index e6bd1693c3..a9e48af415 100644 --- a/mitmproxy/tools/console/flowlist.py +++ b/mitmproxy/tools/console/flowlist.py @@ -39,6 +39,14 @@ class FlowListWalker(urwid.ListWalker): def __init__(self, master): self.master = master + def positions(self, reverse=False): + # The stub implementation of positions can go once this issue is resolved: + # https://github.com/urwid/urwid/issues/294 + ret = range(len(self.master.view)) + if reverse: + return reversed(ret) + return ret + def view_changed(self): self._modified() diff --git a/test/mitmproxy/addons/test_script.py b/test/mitmproxy/addons/test_script.py index 573b36e76b..05472d9a03 100644 --- a/test/mitmproxy/addons/test_script.py +++ b/test/mitmproxy/addons/test_script.py @@ -173,16 +173,14 @@ def test_simple(self): class TestScriptLoader: @pytest.mark.asyncio async def test_script_run(self, tdata): - rp = tdata.path( - "mitmproxy/data/addonscripts/recorder/recorder.py" - ) + rp = tdata.path("mitmproxy/data/addonscripts/recorder/recorder.py") sc = script.ScriptLoader() with taddons.context(sc) as tctx: sc.script_run([tflow.tflow(resp=True)], rp) await tctx.master.await_log("recorder response") debug = [i.msg for i in tctx.master.logs if i.level == "debug"] assert debug == [ - 'recorder load', 'recorder running', 'recorder configure', + 'recorder running', 'recorder configure', 'recorder requestheaders', 'recorder request', 'recorder responseheaders', 'recorder response' ] @@ -192,7 +190,7 @@ async def test_script_run_nonexistent(self): sc = script.ScriptLoader() with taddons.context(sc) as tctx: sc.script_run([tflow.tflow(resp=True)], "/") - assert await tctx.master.await_log("/: No such script") + assert await tctx.master.await_log("No such script") def test_simple(self, tdata): sc = script.ScriptLoader()
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/3116
2018-05-11T23:47:31Z
2018-05-12T01:32:58Z
2018-05-12T01:32:58Z
2018-05-15T21:48:14Z
1,297
mitmproxy/mitmproxy
28,099
Fix MPS cache cleanup
diff --git a/modules/devices.py b/modules/devices.py index c5ad950f621..57e51da30e2 100644 --- a/modules/devices.py +++ b/modules/devices.py @@ -54,8 +54,9 @@ def torch_gc(): with torch.cuda.device(get_cuda_device_string()): torch.cuda.empty_cache() torch.cuda.ipc_collect() - elif has_mps() and hasattr(torch.mps, 'empty_cache'): - torch.mps.empty_cache() + + if has_mps(): + mac_specific.torch_mps_gc() def enable_tf32(): diff --git a/modules/mac_specific.py b/modules/mac_specific.py index 735847f54ff..2c2f15ca422 100644 --- a/modules/mac_specific.py +++ b/modules/mac_specific.py @@ -1,8 +1,12 @@ +import logging + import torch import platform from modules.sd_hijack_utils import CondFunc from packaging import version +log = logging.getLogger() + # before torch version 1.13, has_mps is only available in nightly pytorch and macOS 12.3+, # use check `getattr` and try it for compatibility. @@ -19,9 +23,19 @@ def check_for_mps() -> bool: return False else: return torch.backends.mps.is_available() and torch.backends.mps.is_built() + + has_mps = check_for_mps() +def torch_mps_gc() -> None: + try: + from torch.mps import empty_cache + empty_cache() + except Exception: + log.warning("MPS garbage collection failed", exc_info=True) + + # MPS workaround for https://github.com/pytorch/pytorch/issues/89784 def cumsum_fix(input, cumsum_func, *args, **kwargs): if input.device.type == 'mps':
Importing torch does not import torch.mps so the call failed. This moves the MPS specific cleanup to the `mac_specific` module too. See https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/da8916f92649fc4d947cb46d9d8f8ea1621b2a59#commitcomment-121183933 ## Checklist: - [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) - [x] I have performed a self-review of my own code - [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style) - [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests) - in specific, it Works on My Mac
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/11722
2023-07-10T18:20:08Z
2023-07-11T10:49:02Z
2023-07-11T10:49:02Z
2023-07-11T10:49:03Z
410
AUTOMATIC1111/stable-diffusion-webui
40,074
Bug fix in examples;correct t_total for distributed training;run pred…
diff --git a/examples/run_classifier.py b/examples/run_classifier.py index c6acc091ef070..2c83b4fe497fe 100644 --- a/examples/run_classifier.py +++ b/examples/run_classifier.py @@ -33,6 +33,7 @@ from pytorch_pretrained_bert.tokenization import BertTokenizer from pytorch_pretrained_bert.modeling import BertForSequenceClassification from pytorch_pretrained_bert.optimization import BertAdam +from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s', datefmt = '%m/%d/%Y %H:%M:%S', @@ -155,8 +156,8 @@ def _create_examples(self, lines, set_type): if i == 0: continue guid = "%s-%s" % (set_type, line[0]) - text_a = line[8]) - text_b = line[9]) + text_a = line[8] + text_b = line[9] label = line[-1] examples.append( InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) @@ -482,7 +483,7 @@ def main(): len(train_examples) / args.train_batch_size / args.gradient_accumulation_steps * args.num_train_epochs) # Prepare model - model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list), + model = BertForSequenceClassification.from_pretrained(args.bert_model, cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(args.local_rank)) if args.fp16: model.half() @@ -507,10 +508,13 @@ def main(): {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] + t_total = num_train_steps + if args.local_rank != -1: + t_total = t_total // torch.distributed.get_world_size() optimizer = BertAdam(optimizer_grouped_parameters, lr=args.learning_rate, warmup=args.warmup_proportion, - t_total=num_train_steps) + t_total=t_total) global_step = 0 if args.do_train: @@ -571,7 +575,7 @@ def main(): model.zero_grad() global_step += 1 - if args.do_eval: + if args.do_eval and (args.local_rank == -1 or torch.distributed.get_rank() == 0): eval_examples = processor.get_dev_examples(args.data_dir) eval_features = convert_examples_to_features( eval_examples, label_list, args.max_seq_length, tokenizer) @@ -583,10 +587,8 @@ def main(): all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long) all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.long) eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids) - if args.local_rank == -1: - eval_sampler = SequentialSampler(eval_data) - else: - eval_sampler = DistributedSampler(eval_data) + # Run prediction for full data + eval_sampler = SequentialSampler(eval_data) eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=args.eval_batch_size) model.eval() diff --git a/examples/run_squad.py b/examples/run_squad.py index 00d5610afe286..e3213189bfba9 100644 --- a/examples/run_squad.py +++ b/examples/run_squad.py @@ -25,6 +25,7 @@ import math import os import random +import pickle from tqdm import tqdm, trange import numpy as np @@ -35,6 +36,7 @@ from pytorch_pretrained_bert.tokenization import whitespace_tokenize, BasicTokenizer, BertTokenizer from pytorch_pretrained_bert.modeling import BertForQuestionAnswering from pytorch_pretrained_bert.optimization import BertAdam +from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s', datefmt = '%m/%d/%Y %H:%M:%S', @@ -749,6 +751,10 @@ def 
main(): type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.") + parser.add_argument("--do_lower_case", + default=True, + action='store_true', + help="Whether to lower case the input text. True for uncased models, False for cased models.") parser.add_argument("--local_rank", type=int, default=-1, @@ -845,20 +851,34 @@ def main(): {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] + t_total = num_train_steps + if args.local_rank != -1: + t_total = t_total // torch.distributed.get_world_size() optimizer = BertAdam(optimizer_grouped_parameters, lr=args.learning_rate, warmup=args.warmup_proportion, - t_total=num_train_steps) + t_total=t_total) global_step = 0 if args.do_train: - train_features = convert_examples_to_features( - examples=train_examples, - tokenizer=tokenizer, - max_seq_length=args.max_seq_length, - doc_stride=args.doc_stride, - max_query_length=args.max_query_length, - is_training=True) + cached_train_features_file = args.train_file+'_{0}_{1}_{2}_{3}'.format( + args.bert_model, str(args.max_seq_length), str(args.doc_stride), str(args.max_query_length)) + train_features = None + try: + with open(cached_train_features_file, "rb") as reader: + train_features = pickle.load(reader) + except: + train_features = convert_examples_to_features( + examples=train_examples, + tokenizer=tokenizer, + max_seq_length=args.max_seq_length, + doc_stride=args.doc_stride, + max_query_length=args.max_query_length, + is_training=True) + if args.local_rank == -1 or torch.distributed.get_rank() == 0: + logger.info(" Saving train features into cached file %s", cached_train_features_file) + with open(cached_train_features_file, "wb") as writer: + train_features = pickle.dump(train_features, writer) logger.info("***** Running training *****") logger.info(" Num orig examples = %d", len(train_examples)) logger.info(" Num split examples = %d", len(train_features)) @@ -913,7 +933,7 @@ def main(): model.zero_grad() global_step += 1 - if args.do_predict: + if args.do_predict and (args.local_rank == -1 or torch.distributed.get_rank() == 0): eval_examples = read_squad_examples( input_file=args.predict_file, is_training=False) eval_features = convert_examples_to_features( @@ -934,10 +954,8 @@ def main(): all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long) all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.long) eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_example_index) - if args.local_rank == -1: - eval_sampler = SequentialSampler(eval_data) - else: - eval_sampler = DistributedSampler(eval_data) + # Run prediction for full data + eval_sampler = SequentialSampler(eval_data) eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=args.predict_batch_size) model.eval()
Bug fix in examples; correct t_total for distributed training; run prediction for full dataset
https://api.github.com/repos/huggingface/transformers/pulls/58
2018-11-27T09:10:10Z
2018-11-28T11:39:46Z
2018-11-28T11:39:46Z
2018-11-28T11:39:46Z
1,836
huggingface/transformers
12,909
fix pybrowsers not found
diff --git a/setup.py b/setup.py index b3fd62178..1ba08c636 100644 --- a/setup.py +++ b/setup.py @@ -48,6 +48,7 @@ def run(self): "grpcio-status==1.48.2", "mock==5.1.0", "pylint==3.0.3", + "pybrowsers", ] extras_require["pyppeteer"] = [
https://api.github.com/repos/geekan/MetaGPT/pulls/782
2024-01-22T07:46:32Z
2024-01-22T08:24:28Z
2024-01-22T08:24:28Z
2024-01-22T08:24:28Z
103
geekan/MetaGPT
16,708
Fix @require_read_token in tests
diff --git a/src/transformers/testing_utils.py b/src/transformers/testing_utils.py index 0efaf77a98a73..fd5974d8d5bb5 100644 --- a/src/transformers/testing_utils.py +++ b/src/transformers/testing_utils.py @@ -38,7 +38,6 @@ from unittest import mock from unittest.mock import patch -import huggingface_hub import urllib3 from transformers import logging as transformers_logging @@ -466,11 +465,11 @@ def require_read_token(fn): """ A decorator that loads the HF token for tests that require to load gated models. """ - token = os.getenv("HF_HUB_READ_TOKEN", None) + token = os.getenv("HF_HUB_READ_TOKEN") @wraps(fn) def _inner(*args, **kwargs): - with patch.object(huggingface_hub.utils._headers, "get_token", return_value=token): + with patch("huggingface_hub.utils._headers.get_token", return_value=token): return fn(*args, **kwargs) return _inner
Related to https://github.com/huggingface/transformers/pull/29242. This PR fixes the `@require_read_token` decorator to correctly patch `get_token`. (currently tests are failing: https://github.com/huggingface/transformers/actions/runs/8091976401/job/22111912351) cc @younesbelkada @ArthurZucker
https://api.github.com/repos/huggingface/transformers/pulls/29367
2024-02-29T09:51:00Z
2024-02-29T10:25:17Z
2024-02-29T10:25:17Z
2024-02-29T10:25:18Z
243
huggingface/transformers
12,526
Add return value in prepare_tensor
diff --git a/model/model_training/utils/utils_rl.py b/model/model_training/utils/utils_rl.py index 0bfa02c39a..cd9e2de492 100644 --- a/model/model_training/utils/utils_rl.py +++ b/model/model_training/utils/utils_rl.py @@ -5,3 +5,4 @@ def prepare_tensor(name: str, input): t = client_util.InferInput(name, input.shape, np_to_triton_dtype(input.dtype)) t.set_data_from_numpy(input) + return t
There is no return value in `prepare_tensor` (https://github.com/LAION-AI/Open-Assistant/blob/0fcf3e08fe62295d4696e590005b0f33383342ea/model/model_training/utils/utils_rl.py#L5-L7), so it raises a `NoneType` error during RLHF training (https://github.com/LAION-AI/Open-Assistant/blob/0fcf3e08fe62295d4696e590005b0f33383342ea/model/model_training/utils/ppo_utils.py#L553-L561, https://github.com/LAION-AI/Open-Assistant/blob/0fcf3e08fe62295d4696e590005b0f33383342ea/model/model_training/trainer_rl.py#L93-L99).
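(Editorial note: a minimal, self-contained sketch of the failure mode the body describes. A hypothetical stand-in class is used instead of the real `tritonclient` API.)

```python
class FakeInferInput:
    """Hypothetical stand-in for the Triton InferInput object."""

    def __init__(self, name):
        self.name = name

    def set_data_from_numpy(self, arr):
        self.data = arr


def prepare_tensor_buggy(name, arr):
    t = FakeInferInput(name)
    t.set_data_from_numpy(arr)
    # missing `return t`: the function implicitly returns None


def prepare_tensor_fixed(name, arr):
    t = FakeInferInput(name)
    t.set_data_from_numpy(arr)
    return t


assert prepare_tensor_buggy("x", [1, 2]) is None      # silent bug at call time
assert prepare_tensor_fixed("x", [1, 2]).name == "x"  # the error only surfaces downstream without the fix
```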
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/3327
2023-06-08T03:39:15Z
2023-06-08T16:58:40Z
2023-06-08T16:58:40Z
2023-06-08T23:51:47Z
114
LAION-AI/Open-Assistant
37,111
Update links.yml
diff --git a/.github/workflows/ci-testing.yml b/.github/workflows/ci-testing.yml index 7de084fef06..e71a4b8f16a 100644 --- a/.github/workflows/ci-testing.yml +++ b/.github/workflows/ci-testing.yml @@ -158,7 +158,7 @@ jobs: if: always() # This ensures the job runs even if previous jobs fail steps: - name: Check for failure and notify - if: ${{ needs.Benchmarks.result == 'failure' || needs.Tests.result == 'failure' }} # Check if any of the jobs failed + if: (needs.Benchmarks.result == 'failure' || needs.Tests.result == 'failure') && github.repository == 'ultralytics/yolov5' && (github.event_name == 'schedule' || github.event_name == 'push') uses: slackapi/slack-github-action@v1.23.0 with: payload: | diff --git a/.github/workflows/links.yml b/.github/workflows/links.yml index a5413318030..306689f4650 100644 --- a/.github/workflows/links.yml +++ b/.github/workflows/links.yml @@ -1,5 +1,6 @@ # Ultralytics YOLO 🚀, AGPL-3.0 license -# YOLO Continuous Integration (CI) GitHub Actions tests +# YOLO Continuous Integration (CI) GitHub Actions tests broken link checker +# Accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter) name: Check Broken links @@ -18,21 +19,26 @@ jobs: steps: - uses: actions/checkout@v3 - - name: Test Markdown and HTML links - uses: lycheeverse/lychee-action@v1.7.0 + - name: Download and install lychee + run: | + LYCHEE_URL=$(curl -s https://api.github.com/repos/lycheeverse/lychee/releases/latest | grep "browser_download_url" | grep "x86_64-unknown-linux-gnu.tar.gz" | cut -d '"' -f 4) + curl -L $LYCHEE_URL -o lychee.tar.gz + tar xzf lychee.tar.gz + sudo mv lychee /usr/local/bin + + - name: Test Markdown and HTML links with retry + uses: nick-invision/retry@v2 with: - fail: true - # accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter) - args: --accept 429,999 --exclude-loopback --exclude twitter.com --exclude-path '**/ci-testing.yaml' --exclude-mail './**/*.md' './**/*.html' - env: - GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}} + timeout_minutes: 5 + retry_wait_seconds: 60 + max_attempts: 3 + command: lychee --accept 429,999 --exclude-loopback --exclude twitter.com --exclude-path '**/ci.yaml' --exclude-mail --github-token ${{ secrets.GITHUB_TOKEN }} './**/*.md' './**/*.html' - - name: Test Markdown, HTML, YAML, Python and Notebook links + - name: Test Markdown, HTML, YAML, Python and Notebook links with retry if: github.event_name == 'workflow_dispatch' - uses: lycheeverse/lychee-action@v1.7.0 + uses: nick-invision/retry@v2 with: - fail: true - # accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter) - args: --accept 429,999 --exclude-loopback --exclude twitter.com,url.com --exclude-path '**/ci-testing.yaml' --exclude-mail './**/*.md' './**/*.html' './**/*.yml' './**/*.yaml' './**/*.py' './**/*.ipynb' - env: - GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}} + timeout_minutes: 5 + retry_wait_seconds: 60 + max_attempts: 3 + command: lychee --accept 429,999 --exclude-loopback --exclude twitter.com,url.com --exclude-path '**/ci.yaml' --exclude-mail --github-token ${{ secrets.GITHUB_TOKEN }} './**/*.md' './**/*.html' './**/*.yml' './**/*.yaml' './**/*.py' './**/*.ipynb'
<!-- Thank you for submitting a YOLOv5 🚀 Pull Request! We want to make contributing to YOLOv5 as easy and transparent as possible. A few tips to get you started: - Search existing YOLOv5 [PRs](https://github.com/ultralytics/yolov5/pull) to see if a similar PR already exists. - Link this PR to a YOLOv5 [issue](https://github.com/ultralytics/yolov5/issues) to help us understand what bug fix or feature is being implemented. - Provide before and after profiling/inference/training results to help us quantify the improvement your PR provides (if applicable). Please see our ✅ [Contributing Guide](https://docs.ultralytics.com/help/contributing) for more details. Note that Copilot will summarize this PR below, do not modify the 'copilot:all' line. --> <!-- copilot:all --> ### <samp>🤖 Generated by Copilot at a349cf1</samp> ### Summary 📝🛠️🔁 <!-- 1. 📝 - This emoji represents documentation or writing, and can be used for changes that update or improve the documentation of a project, such as the link checking workflow in this case. 2. 🛠️ - This emoji represents tools or fixing, and can be used for changes that improve or update the tools or workflows used in a project, such as the link checking workflow in this case. 3. 🔁 - This emoji represents repeating or retrying, and can be used for changes that add or improve a retry mechanism or a loop in a project, such as the retry mechanism for the link checking workflow in this case. --> Improved the reliability and readability of the link checking workflow for the documentation. Used `lychee` instead of `awesome_bot` and added retries and comments in `.github/workflows/links.yml`. > _To check links in the docs with `lychee`_ > _We added a retry for more guarantee_ > _And to make it more clear_ > _We improved the comment here_ > _So the workflow is easy to see_ ### Walkthrough * Update the comment of the links.yml file to document the workflow and the status codes ([link](https://github.com/ultralytics/yolov5/pull/11463/files?diff=unified&w=0#diff-a618bbaa9618d3ffa70846c5371ca23ea8f71f3370d3aa5be5d2cf39b42b207bL2-R3)) * Replace the deprecated lychee-action with a custom installation of lychee and a retry action ([link](https://github.com/ultralytics/yolov5/pull/11463/files?diff=unified&w=0#diff-a618bbaa9618d3ffa70846c5371ca23ea8f71f3370d3aa5be5d2cf39b42b207bL21-R42)) * Exclude the ci.yaml file from the link checking ([link](https://github.com/ultralytics/yolov5/pull/11463/files?diff=unified&w=0#diff-a618bbaa9618d3ffa70846c5371ca23ea8f71f3370d3aa5be5d2cf39b42b207bL21-R42)) ## 🛠️ PR Summary <sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub> ### 🌟 Summary Improvements to Continuous Integration (CI) failure notifications and broken link checks in the Ultralytics GitHub workflow. ### 📊 Key Changes - Modified CI failure notification conditions to trigger only for 'ultralytics/yolov5' repository and on `schedule` or `push` events. - Updated the broken link checker to be more resilient by including retry logic and installing the latest lychee tool directly from GitHub releases. ### 🎯 Purpose & Impact - **Enhanced Precision for Notifications**: Failure notifications will be more targeted, reducing noise for contributors and maintainers. 🛎️ - **Increased Reliability of Link Checks**: The adoption of a retry strategy ensures that transient network issues don't cause unnecessary failure alerts. 
🔄 - **Up-to-date Tooling**: Using the latest version of lychee helps in accurate detection of broken links thanks to the latest features and fixes. 🆕 These changes can help maintain the quality of the repository by ensuring that contributors are only alerted for pertinent failures and that link integrity in documentation is reliably maintained.
https://api.github.com/repos/ultralytics/yolov5/pulls/11463
2023-05-01T10:03:25Z
2023-05-01T12:33:32Z
2023-05-01T12:33:32Z
2024-01-19T02:02:11Z
1,042
ultralytics/yolov5
25,509
Fix broken links in I.13
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index ef30aca47..74825efd4 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -1620,7 +1620,7 @@ Passing `10` as the `n` argument may be a mistake: the most common convention is This `draw2()` passes the same amount of information to `draw()`, but makes the fact that it is supposed to be a range of `Circle`s explicit. See ???. **Exception**: Use `zstring` and `czstring` to represent a C-style, zero-terminated strings. -But when doing so, use `string_span` from the (GSL)[#GSL] to prevent range errors. +But when doing so, use `string_span` from the [GSL](#GSL) to prevent range errors. ##### Enforcement
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/552
2016-03-15T20:19:56Z
2016-03-20T16:56:24Z
2016-03-20T16:56:24Z
2016-03-20T16:56:24Z
202
isocpp/CppCoreGuidelines
16,109
Add information on updating [certbot|letsencrypt]-auto
diff --git a/docs/contributing.rst b/docs/contributing.rst index de520dc0e5f..5ad4bd9e014 100644 --- a/docs/contributing.rst +++ b/docs/contributing.rst @@ -324,6 +324,48 @@ Steps: 7. Submit the PR. 8. Did your tests pass on Travis? If they didn't, fix any errors. + +Updating certbot-auto and letsencrypt-auto +========================================== +Updating the scripts +-------------------- +Developers should *not* modify the ``certbot-auto`` and ``letsencrypt-auto`` files +in the root directory of the repository. Rather, modify the +``letsencrypt-auto.template`` and associated platform-specific shell scripts in +the ``letsencrypt-auto-source`` and +``letsencrypt-auto-source/pieces/bootstrappers`` directory, respectively. + +Building letsencrypt-auto-source/letsencrypt-auto +------------------------------------------------- +Once changes to any of the aforementioned files have been made, the +``letesncrypt-auto-source/letsencrypt-auto`` script should be updated. In lieu of +manually updating this script, run the build script, which lives at +``letsencrypt-auto-source/build.py``: + +.. code-block:: shell + + python letsencrypt-auto-source/build.py + +Running ``build.py`` will update the ``letsencrypt-auto-source/letsencrypt-auto`` +script. Note that the ``certbot-auto`` and ``letsencrypt-auto`` scripts in the root +directory of the repository will remain **unchanged** after this script is run. +Your changes will be propagated to these files during the next release of +Certbot. + +Opening a PR +------------ +When opening a PR, ensure that the following files are committed: + +1. ``letsencrypt-auto-source/letsencrypt-auto.template`` and + ``letsencrypt-auto-source/pieces/bootstrappers/*`` +2. ``letsencrypt-auto-source/letsencrypt-auto`` (generated by ``build.py``) + +It might also be a good idea to double check that **no** changes were +inadvertently made to the ``certbot-auto`` or ``letsencrypt-auto`` scripts in the +root of the repository. These scripts will be updated by the core developers +during the next release. + + Updating the documentation ==========================
Added instructions on how to update `letsencrypt-auto`. Included information on running `build.py` and stressed that `certbot-auto`/`letsencrypt-auto` in the root directory shouldn't be updated; hopefully this helps cut down on "thanks for the PR... but please don't modify `certbot-auto`/`letsencrypt-auto` directly" responses to PRs. Fixes #3703
https://api.github.com/repos/certbot/certbot/pulls/3983
2017-01-07T16:20:14Z
2017-01-08T00:45:45Z
2017-01-08T00:45:44Z
2017-01-10T19:40:25Z
519
certbot/certbot
3,255
boardd: use std::atomic for ignition to ensure thread-safety
diff --git a/selfdrive/boardd/boardd.cc b/selfdrive/boardd/boardd.cc index 795bf3bf48deb1..0e05fad39364de 100644 --- a/selfdrive/boardd/boardd.cc +++ b/selfdrive/boardd/boardd.cc @@ -38,10 +38,10 @@ Panda * panda = NULL; std::atomic<bool> safety_setter_thread_running(false); +std::atomic<bool> ignition(false); bool spoofing_started = false; bool fake_send = false; bool connected_once = false; -bool ignition = false; ExitHandler do_exit; struct tm get_time(){
`ignition` is accessed by multiple threads.
https://api.github.com/repos/commaai/openpilot/pulls/19882
2021-01-22T22:14:41Z
2021-01-22T23:26:28Z
2021-01-22T23:26:28Z
2021-01-22T23:37:10Z
136
commaai/openpilot
9,502
fixbug: unit test
diff --git a/.gitignore b/.gitignore index 6dd3608f1..a6f45d894 100644 --- a/.gitignore +++ b/.gitignore @@ -175,3 +175,4 @@ htmlcov.* *.pkl *-structure.csv *-structure.json +*.dot \ No newline at end of file diff --git a/tests/metagpt/actions/test_skill_action.py b/tests/metagpt/actions/test_skill_action.py index 0e0d5d5aa..529ed632a 100644 --- a/tests/metagpt/actions/test_skill_action.py +++ b/tests/metagpt/actions/test_skill_action.py @@ -6,6 +6,7 @@ @File : test_skill_action.py @Desc : Unit tests. """ + import pytest from metagpt.actions.skill_action import ArgumentsParingAction, SkillAction @@ -47,7 +48,11 @@ async def test_parser(self): assert args.get("size_type") == "512x512" @pytest.mark.asyncio - async def test_parser_action(self): + async def test_parser_action(self, mocker): + # mock + mock_text_2_image = mocker.patch("metagpt.learn.text_to_image") + mock_text_2_image.return_value = "https://mock.com/xxx" + parser_action = ArgumentsParingAction(skill=self.skill, ask="Draw an apple") rsp = await parser_action.run() assert rsp @@ -80,7 +85,8 @@ async def test_find_and_call_function_error(self): @pytest.mark.asyncio async def test_skill_action_error(self): action = SkillAction(skill=self.skill, args={}) - await action.run() + rsp = await action.run() + assert "Error" in rsp.content if __name__ == "__main__": diff --git a/tests/metagpt/learn/test_text_to_image.py b/tests/metagpt/learn/test_text_to_image.py index 760b9d09c..1485df5c6 100644 --- a/tests/metagpt/learn/test_text_to_image.py +++ b/tests/metagpt/learn/test_text_to_image.py @@ -12,10 +12,18 @@ from metagpt.config import CONFIG from metagpt.learn.text_to_image import text_to_image +from metagpt.tools.metagpt_text_to_image import MetaGPTText2Image +from metagpt.tools.openai_text_to_image import OpenAIText2Image +from metagpt.utils.s3 import S3 @pytest.mark.asyncio -async def test_metagpt_llm(): +async def test_text_to_image(mocker): + # mock + mocker.patch.object(MetaGPTText2Image, "text_2_image", return_value=b"mock MetaGPTText2Image") + mocker.patch.object(OpenAIText2Image, "text_2_image", return_value=b"mock OpenAIText2Image") + mocker.patch.object(S3, "cache", return_value="http://mock/s3") + # Prerequisites assert CONFIG.METAGPT_TEXT_TO_IMAGE_MODEL_URL assert CONFIG.OPENAI_API_KEY
**Features** - Add mock to unit tests
https://api.github.com/repos/geekan/MetaGPT/pulls/708
2024-01-08T06:47:23Z
2024-01-08T08:17:21Z
2024-01-08T08:17:21Z
2024-01-19T11:18:55Z
705
geekan/MetaGPT
16,756
dns-rfc2136: add test coverage for PR #9672
diff --git a/certbot-dns-rfc2136/certbot_dns_rfc2136/_internal/tests/dns_rfc2136_test.py b/certbot-dns-rfc2136/certbot_dns_rfc2136/_internal/tests/dns_rfc2136_test.py index 3a82f1b6557..d39afccaa39 100644 --- a/certbot-dns-rfc2136/certbot_dns_rfc2136/_internal/tests/dns_rfc2136_test.py +++ b/certbot-dns-rfc2136/certbot_dns_rfc2136/_internal/tests/dns_rfc2136_test.py @@ -40,8 +40,19 @@ def setUp(self): self.mock_client = mock.MagicMock() # _get_rfc2136_client | pylint: disable=protected-access + self.orig_get_client = self.auth._get_rfc2136_client self.auth._get_rfc2136_client = mock.MagicMock(return_value=self.mock_client) + def test_get_client_default_conf_values(self): + # algorithm and sign_query are intentionally absent to test that the default (None) + # value does not crash Certbot. + creds = { "server": SERVER, "port": PORT, "name": NAME, "secret": SECRET } + self.auth.credentials = mock.MagicMock() + self.auth.credentials.conf = lambda key: creds.get(key, None) + client = self.orig_get_client() + assert client.algorithm == self.auth.ALGORITHMS["HMAC-MD5"] + assert client.sign_query == False + @test_util.patch_display_util() def test_perform(self, unused_mock_get_utility): self.auth.perform([self.achall]) @@ -100,7 +111,7 @@ def setUp(self): from certbot_dns_rfc2136._internal.dns_rfc2136 import _RFC2136Client self.rfc2136_client = _RFC2136Client(SERVER, PORT, NAME, SECRET, dns.tsig.HMAC_MD5, - TIMEOUT) + False, TIMEOUT) @mock.patch("dns.query.tcp") def test_add_txt_record(self, query_mock): @@ -177,14 +188,17 @@ def test_find_domain_wraps_errors(self): self.rfc2136_client._find_domain('foo.bar.'+DOMAIN) @mock.patch("dns.query.tcp") - def test_query_soa_found(self, query_mock): + @mock.patch("dns.message.make_query") + def test_query_soa_found(self, mock_make_query, query_mock): query_mock.return_value = mock.MagicMock(answer=[mock.MagicMock()], flags=dns.flags.AA) query_mock.return_value.rcode.return_value = dns.rcode.NOERROR + mock_make_query.return_value = mock.MagicMock() # _query_soa | pylint: disable=protected-access result = self.rfc2136_client._query_soa(DOMAIN) query_mock.assert_called_with(mock.ANY, SERVER, TIMEOUT, PORT) + mock_make_query.return_value.use_tsig.assert_not_called() assert result @mock.patch("dns.query.tcp") @@ -218,6 +232,17 @@ def test_query_soa_fallback_to_udp(self, tcp_mock, udp_mock): udp_mock.assert_called_with(mock.ANY, SERVER, TIMEOUT, PORT) assert result + @mock.patch("dns.query.tcp") + @mock.patch("dns.message.make_query") + def test_query_soa_signed(self, mock_make_query, unused_mock_query): + mock_make_query.return_value = mock.MagicMock() + self.rfc2136_client.sign_query = True + self.rfc2136_client.algorithm = "alg0" + + self.rfc2136_client._query_soa(DOMAIN) + + mock_make_query.return_value.use_tsig.assert_called_with(mock.ANY, algorithm="alg0") + if __name__ == "__main__": sys.exit(pytest.main(sys.argv[1:] + [__file__])) # pragma: no cover
I noticed slightly too late that the unit tests were not updated during #9672. Here is some test coverage for the changes.
https://api.github.com/repos/certbot/certbot/pulls/9684
2023-04-25T01:55:43Z
2023-05-08T21:34:41Z
2023-05-08T21:34:41Z
2023-05-08T21:34:42Z
894
certbot/certbot
2,572
[OpenAI Extension] Add 'max_logits' parameter in logits endpoint
diff --git a/extensions/openai/logits.py b/extensions/openai/logits.py index 9d2fe41cff..357e70fa60 100644 --- a/extensions/openai/logits.py +++ b/extensions/openai/logits.py @@ -8,4 +8,4 @@ def _get_next_logits(body): state = process_parameters(body) if use_samplers else {} state['stream'] = True - return get_next_logits(body['prompt'], state, use_samplers, "", return_dict=True) + return get_next_logits(body['prompt'], state, use_samplers, "", top_logits=body['top_logits'], return_dict=True) diff --git a/extensions/openai/typing.py b/extensions/openai/typing.py index 47ddd789c7..332a8c28ed 100644 --- a/extensions/openai/typing.py +++ b/extensions/openai/typing.py @@ -1,6 +1,6 @@ import json import time -from typing import List +from typing import Dict, List from pydantic import BaseModel, Field @@ -156,6 +156,7 @@ class TokenCountResponse(BaseModel): class LogitsRequestParams(BaseModel): prompt: str use_samplers: bool = False + top_logits: int | None = 50 frequency_penalty: float | None = 0 max_tokens: int | None = 16 presence_penalty: float | None = 0 @@ -168,7 +169,7 @@ class LogitsRequest(GenerationOptions, LogitsRequestParams): class LogitsResponse(BaseModel): - logits: dict + logits: Dict[str, float] class ModelInfoResponse(BaseModel): diff --git a/modules/logits.py b/modules/logits.py index 5d0d321087..e12cf6e785 100644 --- a/modules/logits.py +++ b/modules/logits.py @@ -8,7 +8,7 @@ global_scores = None -def get_next_logits(prompt, state, use_samplers, previous, return_dict=False): +def get_next_logits(prompt, state, use_samplers, previous, top_logits=50, return_dict=False): if shared.model is None: logger.error("No model is loaded! Select one in the Model tab.") return 'Error: No model is loaded1 Select one in the Model tab.', previous @@ -50,8 +50,7 @@ def get_next_logits(prompt, state, use_samplers, previous, return_dict=False): scores = output['logits'][-1][-1] probs = torch.softmax(scores, dim=-1, dtype=torch.float) - topk_values, topk_indices = torch.topk(probs, k=50, largest=True, sorted=True) - topk_values = [f"{float(i):.5f}" for i in topk_values] + topk_values, topk_indices = torch.topk(probs, k=top_logits, largest=True, sorted=True) if is_non_hf_exllamav1 or is_non_hf_llamacpp: topk_indices = [i.expand((1, 1)) for i in topk_indices] @@ -61,12 +60,14 @@ def get_next_logits(prompt, state, use_samplers, previous, return_dict=False): tokens = [shared.tokenizer.decode(i) for i in topk_indices] if return_dict: + topk_values = [float(i) for i in topk_values] output = {} for row in list(zip(topk_values, tokens)): output[row[1]] = row[0] return output else: + topk_values = [f"{float(i):.5f}" for i in topk_values] output = '' for row in list(zip(topk_values, tokens)): output += f"{row[0]} - {repr(row[1])}\n"
## Checklist: - [x] I have read the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines). ## Fixes: 1. Add 'max_logits' parameter in logits endpoint (default: 50) Before, the endpoint only returned logits for a fixed set of 50 top tokens. Now, users can choose how many top-token logits they want to get. 2. Fix the API to return logits as float values Before, logit values were returned as strings (because of compatibility with Gradio?) Now, the API endpoint returns logit values as floats, while retaining compatibility with the Gradio frontend.
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/4916
2023-12-14T03:08:00Z
2023-12-15T03:22:43Z
2023-12-15T03:22:43Z
2023-12-15T03:22:44Z
865
oobabooga/text-generation-webui
26,363
Improve UI for Mac OS.
diff --git a/launcher/web_ui/css/style.css b/launcher/web_ui/css/style.css index 21a4a468d7..c659b23e15 100644 --- a/launcher/web_ui/css/style.css +++ b/launcher/web_ui/css/style.css @@ -1,5 +1,5 @@ html, body { - font-family: "Segoe UI", "Microsoft YaHei"; + font-family: "Microsoft YaHei", "Segoe UI Light", "Segoe UI", "Heiti SC"; margin: 0; padding: 0; } @@ -8,7 +8,7 @@ button, input, select, textarea { - font-family: "Segoe UI", "Microsoft YaHei" !important; + font-family: "Microsoft YaHei", "Segoe UI Light", "Segoe UI", "Heiti SC" !important; } a {
Slightly improved the fonts for Mac; the default font was really ugly.
https://api.github.com/repos/XX-net/XX-Net/pulls/1624
2015-12-17T05:29:59Z
2015-12-17T06:27:51Z
2015-12-17T06:27:51Z
2015-12-17T06:27:51Z
197
XX-net/XX-Net
17,076
Bump docker/build-push-action from 4 to 5
diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml index 77a7c972028..3f6d9ed3978 100644 --- a/.github/workflows/docker.yml +++ b/.github/workflows/docker.yml @@ -32,7 +32,7 @@ jobs: password: ${{ secrets.DOCKERHUB_TOKEN }} - name: Build and push arm64 image - uses: docker/build-push-action@v4 + uses: docker/build-push-action@v5 continue-on-error: true with: context: . @@ -42,7 +42,7 @@ jobs: tags: ultralytics/yolov5:latest-arm64 - name: Build and push CPU image - uses: docker/build-push-action@v4 + uses: docker/build-push-action@v5 continue-on-error: true with: context: . @@ -51,7 +51,7 @@ jobs: tags: ultralytics/yolov5:latest-cpu - name: Build and push GPU image - uses: docker/build-push-action@v4 + uses: docker/build-push-action@v5 continue-on-error: true with: context: .
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 4 to 5. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/docker/build-push-action/releases">docker/build-push-action's releases</a>.</em></p> <blockquote> <h2>v5.0.0</h2> <ul> <li>Node 20 as default runtime (requires <a href="https://github.com/actions/runner/releases/tag/v2.308.0">Actions Runner v2.308.0</a> or later) by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/954">docker/build-push-action#954</a></li> <li>Bump <code>@​actions/core</code> from 1.10.0 to 1.10.1 in <a href="https://redirect.github.com/docker/build-push-action/pull/959">docker/build-push-action#959</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v4.2.1...v5.0.0">https://github.com/docker/build-push-action/compare/v4.2.1...v5.0.0</a></p> <h2>v4.2.1</h2> <blockquote> <p><strong>Note</strong></p> <p>Buildx v0.10 enables support for a minimal <a href="https://slsa.dev/provenance/">SLSA Provenance</a> attestation, which requires support for <a href="https://github.com/opencontainers/image-spec">OCI-compliant</a> multi-platform images. This may introduce issues with registry and runtime support (e.g. <a href="https://redirect.github.com/docker/buildx/issues/1533">Google Cloud Run and AWS Lambda</a>). You can optionally disable the default provenance attestation functionality using <code>provenance: false</code>.</p> </blockquote> <ul> <li>warn if docker config can't be parsed by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/957">docker/build-push-action#957</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v4.2.0...v4.2.1">https://github.com/docker/build-push-action/compare/v4.2.0...v4.2.1</a></p> <h2>v4.2.0</h2> <blockquote> <p><strong>Note</strong></p> <p>Buildx v0.10 enables support for a minimal <a href="https://slsa.dev/provenance/">SLSA Provenance</a> attestation, which requires support for <a href="https://github.com/opencontainers/image-spec">OCI-compliant</a> multi-platform images. This may introduce issues with registry and runtime support (e.g. <a href="https://redirect.github.com/docker/buildx/issues/1533">Google Cloud Run and AWS Lambda</a>). 
You can optionally disable the default provenance attestation functionality using <code>provenance: false</code>.</p> </blockquote> <ul> <li>display proxy configuration by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/872">docker/build-push-action#872</a></li> <li>chore(deps): Bump <code>@​docker/actions-toolkit</code> from 0.6.0 to 0.8.0 in <a href="https://redirect.github.com/docker/build-push-action/pull/930">docker/build-push-action#930</a></li> <li>chore(deps): Bump word-wrap from 1.2.3 to 1.2.5 in <a href="https://redirect.github.com/docker/build-push-action/pull/925">docker/build-push-action#925</a></li> <li>chore(deps): Bump semver from 6.3.0 to 6.3.1 in <a href="https://redirect.github.com/docker/build-push-action/pull/902">docker/build-push-action#902</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v4.1.1...v4.2.0">https://github.com/docker/build-push-action/compare/v4.1.1...v4.2.0</a></p> <h2>v4.1.1</h2> <blockquote> <p><strong>Note</strong></p> <p>Buildx v0.10 enables support for a minimal <a href="https://slsa.dev/provenance/">SLSA Provenance</a> attestation, which requires support for <a href="https://github.com/opencontainers/image-spec">OCI-compliant</a> multi-platform images. This may introduce issues with registry and runtime support (e.g. <a href="https://redirect.github.com/docker/buildx/issues/1533">Google Cloud Run and AWS Lambda</a>). You can optionally disable the default provenance attestation functionality using <code>provenance: false</code>.</p> </blockquote> <ul> <li>Bump <code>@​docker/actions-toolkit</code> from 0.3.0 to 0.5.0 by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/880">docker/build-push-action#880</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v4.1.0...v4.1.1">https://github.com/docker/build-push-action/compare/v4.1.0...v4.1.1</a></p> <h2>v4.1.0</h2> <blockquote> <p><strong>Note</strong></p> <p>Buildx v0.10 enables support for a minimal <a href="https://slsa.dev/provenance/">SLSA Provenance</a> attestation, which requires support for <a href="https://github.com/opencontainers/image-spec">OCI-compliant</a> multi-platform images. This may introduce issues with registry and runtime support (e.g. <a href="https://redirect.github.com/docker/buildx/issues/1533">Google Cloud Run and AWS Lambda</a>). 
You can optionally disable the default provenance attestation functionality using <code>provenance: false</code>.</p> </blockquote> <ul> <li>Switch to actions-toolkit implementation by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/811">docker/build-push-action#811</a> <a href="https://redirect.github.com/docker/build-push-action/pull/838">docker/build-push-action#838</a> <a href="https://redirect.github.com/docker/build-push-action/pull/855">docker/build-push-action#855</a> <a href="https://redirect.github.com/docker/build-push-action/pull/860">docker/build-push-action#860</a> <a href="https://redirect.github.com/docker/build-push-action/pull/875">docker/build-push-action#875</a></li> <li>e2e: quay.io by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/799">docker/build-push-action#799</a> <a href="https://redirect.github.com/docker/build-push-action/pull/805">docker/build-push-action#805</a></li> <li>e2e: local harbor and nexus by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/800">docker/build-push-action#800</a></li> <li>e2e: add artifactory container registry to test against by <a href="https://github.com/jedevc"><code>@​jedevc</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/804">docker/build-push-action#804</a></li> <li>e2e: add distribution tests by <a href="https://github.com/jedevc"><code>@​jedevc</code></a> in <a href="https://redirect.github.com/docker/build-push-action/pull/814">docker/build-push-action#814</a> <a href="https://redirect.github.com/docker/build-push-action/pull/815">docker/build-push-action#815</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v4.0.0...v4.1.0">https://github.com/docker/build-push-action/compare/v4.0.0...v4.1.0</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/docker/build-push-action/commit/0565240e2d4ab88bba5387d719585280857ece09"><code>0565240</code></a> Merge pull request <a href="https://redirect.github.com/docker/build-push-action/issues/959">#959</a> from docker/dependabot/npm_and_yarn/actions/core-1.10.1</li> <li><a href="https://github.com/docker/build-push-action/commit/3ab07f880128dd3b47d7764b661d608b1e37712a"><code>3ab07f8</code></a> chore: update generated content</li> <li><a href="https://github.com/docker/build-push-action/commit/b9e7e4daec1dd1fed28b226354d2eef8aa92ca38"><code>b9e7e4d</code></a> chore(deps): Bump <code>@​actions/core</code> from 1.10.0 to 1.10.1</li> <li><a href="https://github.com/docker/build-push-action/commit/04d1a3b0491bb1fbd0843d1fea3390e385bf2252"><code>04d1a3b</code></a> Merge pull request <a href="https://redirect.github.com/docker/build-push-action/issues/954">#954</a> from crazy-max/update-node20</li> <li><a href="https://github.com/docker/build-push-action/commit/1a4d1a13fb219ebf616f93930a8c4c6a9ff24155"><code>1a4d1a1</code></a> chore: node 20 as default runtime</li> <li><a href="https://github.com/docker/build-push-action/commit/675965c0e16f1a0f94ecafff969d8c966f92c17b"><code>675965c</code></a> chore: update generated content</li> <li><a href="https://github.com/docker/build-push-action/commit/58ee34cb6bad9fc3b471453afb4ed741cb0e6ff3"><code>58ee34c</code></a> chore: fix author in 
package.json</li> <li><a href="https://github.com/docker/build-push-action/commit/c97c4060bdc51e97b1b2a972eab2f77d6ae8e57a"><code>c97c406</code></a> fix ProxyConfig type when checking length</li> <li><a href="https://github.com/docker/build-push-action/commit/47d5369e0b15ff3b951d5787a265fbecf0fc2bac"><code>47d5369</code></a> vendor: bump <code>@​docker/actions-toolkit</code> from 0.8.0 to 0.12.0</li> <li><a href="https://github.com/docker/build-push-action/commit/8895c7468fbe88881dcc4c5b416553e604722cf2"><code>8895c74</code></a> chore: update dev dependencies</li> <li>Additional commits viewable in <a href="https://github.com/docker/build-push-action/compare/v4...v5">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/build-push-action&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details> ## 🛠️ PR Summary <sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub> ### 🌟 Summary Update Docker GitHub Actions to use newer build-push-action. ### 📊 Key Changes - Upgraded `docker/build-push-action` from version 4 to version 5 for arm64, CPU, and GPU image workflows. ### 🎯 Purpose & Impact - 🚀 **Purpose**: The primary reason for this update is to leverage improvements and new features offered by the newer version of the `build-push-action`. - ✨ **Impact**: Users can expect more reliable and potentially faster Docker image builds for YOLOv5, enhancing the overall automation process. It might also include better error handling, improved logging, or new functionalities that benefit the CI/CD pipeline.
https://api.github.com/repos/ultralytics/yolov5/pulls/12135
2023-09-18T04:28:02Z
2023-09-18T12:55:29Z
2023-09-18T12:55:29Z
2024-01-19T01:12:48Z
281
ultralytics/yolov5
25,360
Candidate 2.7.2
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md index 900948fc635..ba42ee45fb3 100644 --- a/certbot/CHANGELOG.md +++ b/certbot/CHANGELOG.md @@ -14,6 +14,14 @@ Certbot adheres to [Semantic Versioning](https://semver.org/). ### Fixed +* + +More details about these changes can be found on our GitHub repo. + +## 2.7.2 - 2023-10-19 + +### Fixed + * `certbot-dns-ovh` plugin now requires `lexicon>=3.15.1` to ensure a consistent behavior with OVH APIs. * Fixed a bug where argument sources weren't correctly detected in abbreviated arguments, short arguments, and some other circumstances diff --git a/certbot/docs/cli-help.txt b/certbot/docs/cli-help.txt index e795871284b..a5d98954a26 100644 --- a/certbot/docs/cli-help.txt +++ b/certbot/docs/cli-help.txt @@ -36,7 +36,7 @@ manage your account: --agree-tos Agree to the ACME server's Subscriber Agreement -m EMAIL Email address for important account notifications -optional arguments: +options: -h, --help show this help message and exit -c CONFIG_FILE, --config CONFIG_FILE path to config file (default: /etc/letsencrypt/cli.ini @@ -122,7 +122,7 @@ optional arguments: case, and to know when to deprecate support for past Python versions and flags. If you wish to hide this information from the Let's Encrypt server, set this to - "". (default: CertbotACMEClient/2.7.1 (certbot; + "". (default: CertbotACMEClient/2.7.2 (certbot; OS_NAME OS_VERSION) Authenticator/XXX Installer/YYY (SUBCOMMAND; flags: FLAGS) Py/major.minor.patchlevel). The flags encoded in the user agent are: --duplicate,
This PR should be merged and not squashed.
https://api.github.com/repos/certbot/certbot/pulls/9809
2023-10-19T23:52:38Z
2023-10-20T00:49:02Z
2023-10-20T00:49:02Z
2023-10-20T00:49:03Z
476
certbot/certbot
3,581
Add ISY994 variables as number entities
diff --git a/.coveragerc b/.coveragerc index 93f7434634b0e8..f9ca672e8eb16b 100644 --- a/.coveragerc +++ b/.coveragerc @@ -607,6 +607,7 @@ omit = homeassistant/components/isy994/helpers.py homeassistant/components/isy994/light.py homeassistant/components/isy994/lock.py + homeassistant/components/isy994/number.py homeassistant/components/isy994/sensor.py homeassistant/components/isy994/services.py homeassistant/components/isy994/switch.py diff --git a/homeassistant/components/isy994/__init__.py b/homeassistant/components/isy994/__init__.py index be2948c7aa2b4c..a8b3d4e239e9c1 100644 --- a/homeassistant/components/isy994/__init__.py +++ b/homeassistant/components/isy994/__init__.py @@ -16,6 +16,7 @@ CONF_PASSWORD, CONF_USERNAME, EVENT_HOMEASSISTANT_STOP, + Platform, ) from homeassistant.core import Event, HomeAssistant, callback from homeassistant.exceptions import ConfigEntryAuthFailed, ConfigEntryNotReady @@ -141,7 +142,9 @@ async def async_setup_entry( for platform in PROGRAM_PLATFORMS: hass_isy_data[ISY994_PROGRAMS][platform] = [] - hass_isy_data[ISY994_VARIABLES] = [] + hass_isy_data[ISY994_VARIABLES] = {} + hass_isy_data[ISY994_VARIABLES][Platform.NUMBER] = [] + hass_isy_data[ISY994_VARIABLES][Platform.SENSOR] = [] isy_config = entry.data isy_options = entry.options @@ -212,7 +215,12 @@ async def async_setup_entry( _categorize_nodes(hass_isy_data, isy.nodes, ignore_identifier, sensor_identifier) _categorize_programs(hass_isy_data, isy.programs) + # Categorize variables call to be removed with variable sensors in 2023.5.0 _categorize_variables(hass_isy_data, isy.variables, variable_identifier) + # Gather ISY Variables to be added. Identifier used to enable by default. + numbers = hass_isy_data[ISY994_VARIABLES][Platform.NUMBER] + for vtype, vname, vid in isy.variables.children: + numbers.append((isy.variables[vtype][vid], variable_identifier in vname)) if isy.configuration[ISY_CONF_NETWORKING]: for resource in isy.networking.nobjs: hass_isy_data[ISY994_NODES][PROTO_NETWORK_RESOURCE].append(resource) diff --git a/homeassistant/components/isy994/const.py b/homeassistant/components/isy994/const.py index 402086ddec11bb..3df11f078eacab 100644 --- a/homeassistant/components/isy994/const.py +++ b/homeassistant/components/isy994/const.py @@ -82,6 +82,7 @@ Platform.FAN, Platform.LIGHT, Platform.LOCK, + Platform.NUMBER, Platform.SENSOR, Platform.SWITCH, ] @@ -307,6 +308,14 @@ FILTER_INSTEON_TYPE: ["4.8", TYPE_CATEGORY_CLIMATE], FILTER_ZWAVE_CAT: ["140"], }, + Platform.NUMBER: { + # No devices automatically sorted as numbers at this time. 
+ FILTER_UOM: [], + FILTER_STATES: [], + FILTER_NODE_DEF_ID: [], + FILTER_INSTEON_TYPE: [], + FILTER_ZWAVE_CAT: [], + }, } UOM_FRIENDLY_NAME = { diff --git a/homeassistant/components/isy994/helpers.py b/homeassistant/components/isy994/helpers.py index 54d2890c84c499..cc602a49777d03 100644 --- a/homeassistant/components/isy994/helpers.py +++ b/homeassistant/components/isy994/helpers.py @@ -376,8 +376,9 @@ def _categorize_variables( except KeyError as err: _LOGGER.error("Error adding ISY Variables: %s", err) return + variable_entities = hass_isy_data[ISY994_VARIABLES] for vtype, vname, vid in var_to_add: - hass_isy_data[ISY994_VARIABLES].append((vname, variables[vtype][vid])) + variable_entities[Platform.SENSOR].append((vname, variables[vtype][vid])) async def migrate_old_unique_ids( diff --git a/homeassistant/components/isy994/number.py b/homeassistant/components/isy994/number.py new file mode 100644 index 00000000000000..064b6c6e60aeaf --- /dev/null +++ b/homeassistant/components/isy994/number.py @@ -0,0 +1,162 @@ +"""Support for ISY number entities.""" +from __future__ import annotations + +from typing import Any + +from pyisy import ISY +from pyisy.helpers import EventListener, NodeProperty +from pyisy.variables import Variable + +from homeassistant.components.number import NumberEntity, NumberEntityDescription +from homeassistant.config_entries import ConfigEntry +from homeassistant.const import Platform +from homeassistant.core import HomeAssistant, callback +from homeassistant.helpers.device_registry import DeviceEntryType +from homeassistant.helpers.entity import DeviceInfo, EntityCategory +from homeassistant.helpers.entity_platform import AddEntitiesCallback + +from . import _async_isy_to_configuration_url +from .const import ( + DOMAIN as ISY994_DOMAIN, + ISY994_ISY, + ISY994_VARIABLES, + ISY_CONF_FIRMWARE, + ISY_CONF_MODEL, + ISY_CONF_NAME, + ISY_CONF_UUID, + MANUFACTURER, +) +from .helpers import convert_isy_value_to_hass + +ISY_MAX_SIZE = (2**32) / 2 + + +async def async_setup_entry( + hass: HomeAssistant, + config_entry: ConfigEntry, + async_add_entities: AddEntitiesCallback, +) -> None: + """Set up ISY/IoX number entities from config entry.""" + hass_isy_data = hass.data[ISY994_DOMAIN][config_entry.entry_id] + isy: ISY = hass_isy_data[ISY994_ISY] + uuid = isy.configuration[ISY_CONF_UUID] + entities: list[ISYVariableNumberEntity] = [] + + for node, enable_by_default in hass_isy_data[ISY994_VARIABLES][Platform.NUMBER]: + step = 10 ** (-1 * node.prec) + min_max = ISY_MAX_SIZE / (10**node.prec) + description = NumberEntityDescription( + key=node.address, + name=node.name, + icon="mdi:counter", + entity_registry_enabled_default=enable_by_default, + native_unit_of_measurement=None, + native_step=step, + native_min_value=-min_max, + native_max_value=min_max, + ) + description_init = NumberEntityDescription( + key=f"{node.address}_init", + name=f"{node.name} Initial Value", + icon="mdi:counter", + entity_registry_enabled_default=False, + native_unit_of_measurement=None, + native_step=step, + native_min_value=-min_max, + native_max_value=min_max, + entity_category=EntityCategory.CONFIG, + ) + + entities.append( + ISYVariableNumberEntity( + node, + unique_id=f"{uuid}_{node.address}", + description=description, + ) + ) + entities.append( + ISYVariableNumberEntity( + node=node, + unique_id=f"{uuid}_{node.address}_init", + description=description_init, + init_entity=True, + ) + ) + + async_add_entities(entities) + + +class ISYVariableNumberEntity(NumberEntity): + """Representation of an 
ISY variable as a number entity device.""" + + _attr_has_entity_name = True + _attr_should_poll = False + _init_entity: bool + _node: Variable + entity_description: NumberEntityDescription + + def __init__( + self, + node: Variable, + unique_id: str, + description: NumberEntityDescription, + init_entity: bool = False, + ) -> None: + """Initialize the ISY variable number.""" + self._node = node + self._name = description.name + self.entity_description = description + self._change_handler: EventListener | None = None + + # Two entities are created for each variable, one for current value and one for initial. + # Initial value entities are disabled by default + self._init_entity = init_entity + + self._attr_unique_id = unique_id + + url = _async_isy_to_configuration_url(node.isy) + config = node.isy.configuration + self._attr_device_info = DeviceInfo( + identifiers={ + ( + ISY994_DOMAIN, + f"{config[ISY_CONF_UUID]}_variables", + ) + }, + manufacturer=MANUFACTURER, + name=f"{config[ISY_CONF_NAME]} Variables", + model=config[ISY_CONF_MODEL], + sw_version=config[ISY_CONF_FIRMWARE], + configuration_url=url, + via_device=(ISY994_DOMAIN, config[ISY_CONF_UUID]), + entry_type=DeviceEntryType.SERVICE, + ) + + async def async_added_to_hass(self) -> None: + """Subscribe to the node change events.""" + self._change_handler = self._node.status_events.subscribe(self.async_on_update) + + @callback + def async_on_update(self, event: NodeProperty) -> None: + """Handle the update event from the ISY Node.""" + self.async_write_ha_state() + + @property + def native_value(self) -> float | int | None: + """Return the state of the variable.""" + return convert_isy_value_to_hass( + self._node.init if self._init_entity else self._node.status, + "", + self._node.prec, + ) + + @property + def extra_state_attributes(self) -> dict[str, Any]: + """Get the state attributes for the device.""" + return { + "last_edited": self._node.last_edited, + } + + async def async_set_native_value(self, value: float) -> None: + """Set new value.""" + await self._node.set_value(value, init=self._init_entity) diff --git a/homeassistant/components/isy994/sensor.py b/homeassistant/components/isy994/sensor.py index 727600edea2802..e3e812d1b261d2 100644 --- a/homeassistant/components/isy994/sensor.py +++ b/homeassistant/components/isy994/sensor.py @@ -132,7 +132,7 @@ async def async_setup_entry( # Any node in SENSOR_AUX can potentially have communication errors entities.append(ISYAuxSensorEntity(node, PROP_COMMS_ERROR, False)) - for vname, vobj in hass_isy_data[ISY994_VARIABLES]: + for vname, vobj in hass_isy_data[ISY994_VARIABLES][Platform.SENSOR]: entities.append(ISYSensorVariableEntity(vname, vobj)) await migrate_old_unique_ids(hass, Platform.SENSOR, entities) @@ -269,6 +269,9 @@ def name(self) -> str: class ISYSensorVariableEntity(ISYEntity, SensorEntity): """Representation of an ISY variable as a sensor device.""" + # Depreceted sensors, will be removed in 2023.5.0 + _attr_entity_registry_enabled_default = False + def __init__(self, vname: str, vobj: object) -> None: """Initialize the ISY binary sensor program.""" super().__init__(vobj) diff --git a/homeassistant/components/isy994/services.py b/homeassistant/components/isy994/services.py index ff7fdb965c71d5..bd49478905f19d 100644 --- a/homeassistant/components/isy994/services.py +++ b/homeassistant/components/isy994/services.py @@ -307,6 +307,18 @@ async def async_set_variable_service_handler(service: ServiceCall) -> None: variable = isy.variables.vobjs[vtype].get(address) if variable is 
not None: await variable.set_value(value, init) + entity_registry = er.async_get(hass) + async_log_deprecated_service_call( + hass, + call=service, + alternate_service="number.set_value", + alternate_target=entity_registry.async_get_entity_id( + Platform.NUMBER, + DOMAIN, + f"{isy.configuration[ISY_CONF_UUID]}_{address}{'_init' if init else ''}", + ), + breaks_in_ha_version="2023.5.0", + ) return _LOGGER.error("Could not set variable value; not found or enabled on the ISY") diff --git a/homeassistant/components/isy994/services.yaml b/homeassistant/components/isy994/services.yaml index 8d1aa8c58ef5cb..c9daa828970542 100644 --- a/homeassistant/components/isy994/services.yaml +++ b/homeassistant/components/isy994/services.yaml @@ -184,8 +184,8 @@ system_query: selector: text: set_variable: - name: Set variable - description: Set an ISY variable's current or initial value. Variables can be set by either type/address or by name. + name: Set variable (Deprecated) + description: "Set an ISY variable's current or initial value. Variables can be set by either type/address or by name. Deprecated: Use number entities instead." fields: address: name: Address
<!-- You are amazing! Thanks for contributing to our project! Please, DO NOT DELETE ANY TEXT from this template! (unless instructed). --> ## Breaking change <!-- If your PR contains a breaking change for existing users, it is important to tell them what breaks, how to make it work again and why we did this. This piece of text is published with the release notes, so it helps if you write it towards our users, not us. Note: Remove this section if this PR is NOT a breaking change. --> ISY/IoX Variables have been moved from `sensor` entities to `number` entities; the existing `sensor` entities are deprecated and will be removed in a future release. The `isy994.set_variable` service has been deprecated in favor of using the `number` entities to directly set the variable values. Please update any dashboards and automations that may be using these entities or service. ## Proposed change <!-- Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue in the additional information section. --> Add the `number` platform to ISY994 and move ISY/IoX Variables from `sensor` with a custom service to the new platform. There is an existing Config Option for choosing which Variables are added to Home Assistant. This is now used to determine which entities are enabled by default, but all Variables can now be imported. For each Variable in the ISY/IoX, 2 entities are created; one for the current value and one for the initial value. The initial value entities are always disabled by default. ## Type of change <!-- What type of change does your PR introduce to Home Assistant? NOTE: Please, check only 1! box! If your PR requires multiple boxes to be checked, you'll most likely need to split it into multiple PRs. This makes things easier and faster to code review. --> - [ ] Dependency upgrade - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New integration (thank you!) - [x] New feature (which adds functionality to an existing integration) - [x] Deprecation (breaking change to happen in the future) - [ ] Breaking change (fix/feature causing existing functionality to break) - [ ] Code quality improvements to existing code or addition of tests ## Additional information <!-- Details are important, and help maintainers processing your PR. Please be sure to fill out additional details, if applicable. --> - This PR fixes or closes issue: fixes # - This PR is related to issue: - Link to documentation pull request: https://github.com/home-assistant/home-assistant.io/pull/25650 ## Checklist <!-- Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code. --> - [x] The code change is tested and works locally. - [x] Local tests pass. **Your PR cannot be merged unless tests pass** - [ ] There is no commented out code in this PR. - [x] I have followed the [development checklist][dev-checklist] - [x] The code has been formatted using Black (`black --fast homeassistant tests`) - [ ] Tests have been added to verify that the new code works. 
If user exposed functionality or configuration variables are added/changed: - [x] Documentation added/updated for [www.home-assistant.io][docs-repository] If the code communicates with devices, web services, or third-party tools: - [x] The [manifest file][manifest-docs] has all fields filled out correctly. Updated and included derived files by running: `python3 -m script.hassfest`. - [ ] New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`. - [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description. - [ ] Untested files have been added to `.coveragerc`. <!-- This project is very active and we have a high turnover of pull requests. Unfortunately, the number of incoming pull requests is higher than what our reviewers can review and merge so there is a long backlog of pull requests waiting for review. You can help here! By reviewing another pull request, you will help raise the code quality of that pull request and the final review will be faster. This way the general pace of pull request reviews will go up and your wait time will go down. When picking a pull request to review, try to choose one that hasn't yet been reviewed. Thanks for helping out! --> To help with the load of incoming pull requests: - [ ] I have reviewed two other [open pull requests][prs] in this repository. [prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure <!-- Thank you for contributing <3 Below, some useful links you could explore: --> [dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html [manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html [quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html [docs-repository]: https://github.com/home-assistant/home-assistant.io
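For readers less familiar with the integration, the mapping from an ISY variable's precision to the new number entity's step and range (as done in the `number.py` added above) boils down to a small calculation. The sketch below is illustrative only and not part of the PR; `number_limits` is a hypothetical helper name, and `prec` is the variable's decimal precision as reported by PyISY.

```python
# Illustrative sketch of how the entity step/min/max are derived from an
# ISY variable's precision, mirroring the logic in the new number.py above.
ISY_MAX_SIZE = (2**32) / 2  # same constant as the diff; presumably the 32-bit variable range


def number_limits(prec: int) -> dict:
    """Return the step and symmetric min/max for a variable with `prec` decimals."""
    step = 10 ** (-1 * prec)             # e.g. prec=2 -> step of 0.01
    min_max = ISY_MAX_SIZE / (10**prec)  # scale the raw integer range by the precision
    return {"step": step, "min": -min_max, "max": min_max}


print(number_limits(0))  # {'step': 1, 'min': -2147483648.0, 'max': 2147483648.0}
print(number_limits(2))  # {'step': 0.01, 'min': -21474836.48, 'max': 21474836.48}
```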
https://api.github.com/repos/home-assistant/core/pulls/85511
2023-01-09T11:02:37Z
2023-01-10T22:29:11Z
2023-01-10T22:29:11Z
2023-01-12T03:01:52Z
3,148
home-assistant/core
39,484
Solution for Problem Euler 56
diff --git a/project_euler/problem_56/__init__.py b/project_euler/problem_56/__init__.py new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/project_euler/problem_56/sol1.py b/project_euler/problem_56/sol1.py new file mode 100644 index 000000000000..194a7a37af43 --- /dev/null +++ b/project_euler/problem_56/sol1.py @@ -0,0 +1,26 @@ + + +def maximum_digital_sum(a: int, b: int) -> int: + """ + Considering natural numbers of the form, a**b, where a, b < 100, + what is the maximum digital sum? + :param a: + :param b: + :return: + >>> maximum_digital_sum(10,10) + 45 + + >>> maximum_digital_sum(100,100) + 972 + + >>> maximum_digital_sum(100,200) + 1872 + """ + + # RETURN the MAXIMUM from the list of SUMs of the list of INT converted from STR of BASE raised to the POWER + return max([sum([int(x) for x in str(base**power)]) for base in range(a) for power in range(b)]) + +#Tests +if __name__ == "__main__": + import doctest + doctest.testmod()
Solution for Project Euler Problem 56
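As a side note, a small companion sketch (not part of this PR; `argmax_digital_sum` is a hypothetical helper) can also report which `(base, power)` pair attains the maximum digital sum, using the same `0 <= base < a`, `0 <= power < b` ranges as `sol1.py` above. For the default arguments the maximum is 972, matching the doctest in the diff.

```python
# Companion sketch: find the (base, power) pair attaining the maximum digital sum.
def argmax_digital_sum(a: int = 100, b: int = 100) -> tuple[int, int, int]:
    best = (0, 0, 0)  # (digit_sum, base, power)
    for base in range(a):
        for power in range(b):
            digit_sum = sum(int(digit) for digit in str(base**power))
            if digit_sum > best[0]:
                best = (digit_sum, base, power)
    return best


if __name__ == "__main__":
    print(argmax_digital_sum())  # prints the maximum digital sum and the pair attaining it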
https://api.github.com/repos/TheAlgorithms/Python/pulls/1131
2019-08-13T14:53:49Z
2019-08-13T17:16:11Z
2019-08-13T17:16:11Z
2019-08-13T17:16:11Z
337
TheAlgorithms/Python
29,578
Add Google Docs API
diff --git a/README.md b/README.md index 79d51c1df7..959772cb6e 100644 --- a/README.md +++ b/README.md @@ -396,6 +396,7 @@ API | Description | Auth | HTTPS | CORS | | [Gitlab](https://docs.gitlab.com/ee/api/) | Automate GitLab interaction programmatically | `OAuth` | Yes | Unknown | | [Gitter](https://developer.gitter.im/docs/welcome) | Chat for Developers | `OAuth` | Yes | Unknown | | [Glitterly](https://developers.glitterly.app) | Image generation API | `apiKey` | Yes | Yes | +| [Google Docs](https://developers.google.com/docs/api/reference/rest) | API to read, write, and format Google Docs documents | `OAuth` | Yes | Unknown | | [Google Sheets](https://developers.google.com/sheets/api/reference/rest) | API to read, write, and format Google Sheets data | `OAuth` | Yes | Unknown | | [Gorest](https://gorest.co.in/) | Online REST API for Testing and Prototyping | `OAuth` | Yes | Unknown | | [Hexabin](https://hexabin.herokuapp.com/) | Convert and retrieve hexadecimal, binary, decimal, and octal values with ease | No | No | Unknown |
- [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md) - [x] My addition is ordered alphabetically - [x] My submission has a useful description - [x] The description does not have more than 100 characters - [x] The description does not end with punctuation - [x] Each table column is padded with one space on either side - [x] I have searched the repository for any relevant issues or pull requests - [x] Any category I am creating has the minimum requirement of 3 items - [x] All changes have been [squashed][squash-link] into a single commit [squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit> Adding Google Docs API to development category because Google Sheets API is also in this category
https://api.github.com/repos/public-apis/public-apis/pulls/2293
2021-10-05T08:37:49Z
2021-10-10T18:41:12Z
2021-10-10T18:41:12Z
2021-10-10T18:41:12Z
289
public-apis/public-apis
35,577
Fix `clipnorm` for TF models with `Embedding` layer(s).
diff --git a/keras/backend/tensorflow/optimizer.py b/keras/backend/tensorflow/optimizer.py index 7ce1707b119..967c4e4d2a8 100644 --- a/keras/backend/tensorflow/optimizer.py +++ b/keras/backend/tensorflow/optimizer.py @@ -163,3 +163,8 @@ def _distributed_tf_increment_grad_acc( grads, accumulators, ) + + def _clip_by_norm(self, values, axes=None): + # We need to use TF-specific OP to support the case, + # when `values` are `tf.IndexedSlices`. + return tf.clip_by_norm(values, self.clipnorm, axes) diff --git a/keras/optimizers/adam_test.py b/keras/optimizers/adam_test.py index 4f33f4afd7f..1dcc876a1dd 100644 --- a/keras/optimizers/adam_test.py +++ b/keras/optimizers/adam_test.py @@ -84,3 +84,21 @@ def test_ema(self): x = keras.ops.zeros((1, 5)) y = keras.ops.zeros((1, 10)) model.fit(x, y) + + @pytest.mark.skipif( + backend.backend() != "tensorflow", + reason="The IndexedSlices test can only run with TF backend.", + ) + def test_clipnorm_indexed_slices(self): + # https://github.com/keras-team/keras/issues/18985 + model = keras.Sequential( + [ + keras.layers.Embedding(10, 4), + keras.layers.Flatten(), + keras.layers.Dense(2), + ] + ) + model.compile(optimizer=Adam(clipnorm=100), loss="mse") + x = keras.ops.ones((8, 5)) + y = keras.ops.zeros((8, 2)) + model.fit(x, y, verbose=0) diff --git a/keras/optimizers/base_optimizer.py b/keras/optimizers/base_optimizer.py index f3cca634d1b..9757cae437b 100644 --- a/keras/optimizers/base_optimizer.py +++ b/keras/optimizers/base_optimizer.py @@ -589,7 +589,7 @@ def _clip_gradients(self, grads): if g is None: clipped_grads.append(g) else: - clipped_grads.append(clip_by_norm(g, self.clipnorm)) + clipped_grads.append(self._clip_by_norm(g)) return clipped_grads if self.global_clipnorm and self.global_clipnorm > 0: @@ -796,6 +796,19 @@ def __setattr__(self, name, value): value = self._tracker.track(value) return super().__setattr__(name, value) + def _clip_by_norm(self, values, axes=None): + # Calculate L2-norm, clip elements by ratio of clip_norm to L2-norm + l2sum = ops.sum(ops.square(values), axes, keepdims=True) + pred = l2sum > 0 + # Two-tap tf.where trick to bypass NaN gradients + l2sum_safe = ops.where(pred, l2sum, ops.ones_like(l2sum)) + l2norm = ops.where(pred, ops.sqrt(l2sum_safe), l2sum) + intermediate = ops.multiply(values, self.clipnorm) + values_clip = ops.convert_to_tensor(intermediate) / ops.maximum( + l2norm, self.clipnorm + ) + return values_clip + base_optimizer_keyword_args = """name: String. The name to use for momentum accumulator weights created by @@ -845,20 +858,6 @@ def __setattr__(self, name, value): """ -def clip_by_norm(values, clip_norm, axes=None): - # Calculate L2-norm, clip elements by ratio of clip_norm to L2-norm - l2sum = ops.sum(values * values, axes, keepdims=True) - pred = l2sum > 0 - # Two-tap tf.where trick to bypass NaN gradients - l2sum_safe = ops.where(pred, l2sum, ops.ones_like(l2sum)) - l2norm = ops.where(pred, ops.sqrt(l2sum_safe), l2sum) - intermediate = values * clip_norm - values_clip = ops.convert_to_tensor(intermediate) / ops.maximum( - l2norm, clip_norm - ) - return values_clip - - def global_norm(value_list): """Computes the global norm of multiple tensors.""" squared_norms = []
Fix for the issue described in https://github.com/keras-team/keras/issues/18985
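For context, the per-tensor clip-by-norm that the base optimizer now implements in `_clip_by_norm` is just L2-norm scaling. The NumPy sketch below is a simplified stand-in, not the actual implementation: it uses NumPy instead of `keras.ops` and ignores the `tf.IndexedSlices` case that the new TF backend override handles via `tf.clip_by_norm`.

```python
import numpy as np


def clip_by_norm(values: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale `values` so its L2 norm does not exceed `clip_norm`."""
    l2norm = np.sqrt(np.sum(np.square(values)))
    if l2norm <= clip_norm:
        return values                     # already within the allowed norm
    return values * (clip_norm / l2norm)  # rescale so the norm equals clip_norm


g = np.array([3.0, 4.0])       # L2 norm is 5.0
print(clip_by_norm(g, 100.0))  # unchanged: [3. 4.]
print(clip_by_norm(g, 1.0))    # rescaled:  [0.6 0.8]
```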
https://api.github.com/repos/keras-team/keras/pulls/18986
2023-12-22T18:53:22Z
2023-12-24T06:25:46Z
2023-12-24T06:25:46Z
2023-12-24T21:00:42Z
1,042
keras-team/keras
47,162
pre-commit: autoupdate hooks
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index b777bc4368741c..471d2855f1a150 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -35,7 +35,7 @@ repos: args: ['--explicit-package-bases'] exclude: '^(third_party/)|(cereal/)|(opendbc/)|(panda/)|(laika/)|(laika_repo/)|(rednose/)|(rednose_repo/)|(tinygrad/)|(tinygrad_repo/)|(xx/)' - repo: https://github.com/astral-sh/ruff-pre-commit - rev: v0.0.284 + rev: v0.0.285 hooks: - id: ruff exclude: '^(third_party/)|(cereal/)|(rednose/)|(panda/)|(laika/)|(laika_repo/)|(rednose_repo/)|(tinygrad/)|(tinygrad_repo/)' @@ -61,7 +61,7 @@ repos: language: script pass_filenames: false - repo: https://github.com/python-poetry/poetry - rev: '1.5.0' + rev: '1.6.0' hooks: - id: poetry-check - id: poetry-lock
Automated changes by [create-pull-request](https://github.com/peter-evans/create-pull-request) GitHub action
https://api.github.com/repos/commaai/openpilot/pulls/29524
2023-08-22T15:05:54Z
2023-08-22T17:39:46Z
2023-08-22T17:39:46Z
2023-08-22T17:39:47Z
304
commaai/openpilot
9,081
+ Saul
diff --git a/README.md b/README.md index 5cc2b877..4d4fad7c 100644 --- a/README.md +++ b/README.md @@ -339,6 +339,7 @@ For a list of free machine learning books available for download, go [here](http * [Weka](http://www.cs.waikato.ac.nz/ml/weka/) - Weka is a collection of machine learning algorithms for data mining tasks * [LBJava](https://github.com/IllinoisCogComp/lbjava/) - Learning Based Java is a modeling language for the rapid development of software systems, offers a convenient, declarative syntax for classifier and constraint definition directly in terms of the objects in the programmer's application. + #### Speech Recognition * [CMU Sphinx](http://cmusphinx.sourceforge.net/) - Open Source Toolkit For Speech Recognition purely based on Java speech recognition library. @@ -1121,6 +1122,7 @@ on MNIST digits[DEEP LEARNING] * [H2O Sparkling Water](https://github.com/h2oai/sparkling-water) - H2O and Spark interoperability. * [FlinkML in Apache Flink](https://ci.apache.org/projects/flink/flink-docs-master/apis/batch/libs/ml/index.html) - Distributed machine learning library in Flink * [DynaML](https://github.com/mandar2812/DynaML) - Scala Library/REPL for Machine Learning Research +* [Saul](https://github.com/IllinoisCogComp/saul/) - Flexible Declarative Learning-Based Programming. <a name="swift" /> ## Swift
... will go public tonight.
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/292
2016-06-22T23:00:16Z
2016-06-23T14:41:53Z
2016-06-23T14:41:53Z
2016-06-23T14:41:56Z
361
josephmisiti/awesome-machine-learning
51,909
Doc: Update pretrained_models wording
diff --git a/docs/source/pretrained_models.rst b/docs/source/pretrained_models.rst index 869143b91777a..c25c58031f80c 100644 --- a/docs/source/pretrained_models.rst +++ b/docs/source/pretrained_models.rst @@ -13,10 +13,9 @@ Pretrained models ======================================================================================================================= -Here is the full list of the currently provided pretrained models together with a short presentation of each model. +Here is a partial list of some of the available pretrained models together with a short presentation of each model. -For a list that includes all community-uploaded models, refer to `https://huggingface.co/models -<https://huggingface.co/models>`__. +For the full list, refer to `https://huggingface.co/models <https://huggingface.co/models>`__. +--------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | Architecture | Model id | Details of the model |
To clarify things, cf. for instance this tweet: https://twitter.com/RTomMcCoy/status/1349094111505211395
https://api.github.com/repos/huggingface/transformers/pulls/9545
2021-01-12T20:51:45Z
2021-01-13T10:58:06Z
2021-01-13T10:58:06Z
2021-01-13T10:58:07Z
212
huggingface/transformers
12,926
Pin the libxml2 version in CI as a newer one breaks lxml.
diff --git a/.github/workflows/tests-ubuntu.yml b/.github/workflows/tests-ubuntu.yml index 521d7ae70a5..57188bd631b 100644 --- a/.github/workflows/tests-ubuntu.yml +++ b/.github/workflows/tests-ubuntu.yml @@ -57,7 +57,8 @@ jobs: if: matrix.python-version == 'pypy3' || contains(matrix.env.TOXENV, 'pinned') run: | sudo apt-get update - sudo apt-get install libxml2-dev libxslt-dev + # libxml2 2.9.12 from ondrej/php PPA breaks lxml so we pin it to the bionic-updates repo version + sudo apt-get install libxml2-dev/bionic-updates libxslt-dev - name: Run tests env: ${{ matrix.env }}
[libxml2 bug report](https://gitlab.gnome.org/GNOME/libxml2/-/issues/255) [lxml bug report](https://bugs.launchpad.net/lxml/+bug/1928795)
https://api.github.com/repos/scrapy/scrapy/pulls/5208
2021-07-16T11:48:18Z
2021-07-16T12:28:33Z
2021-07-16T12:28:32Z
2023-05-08T10:28:33Z
193
scrapy/scrapy
34,860
Fix dns rfc2136 (#7142)
diff --git a/CHANGELOG.md b/CHANGELOG.md index 95833ad7bb2..4daa6f9284c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,23 @@ Certbot adheres to [Semantic Versioning](https://semver.org/). +## 0.35.1 - master + +### Fixed + +* Support for specifying an authoritative base domain in our dns-rfc2136 plugin + has been removed. This feature was added in our last release but had a bug + which caused the plugin to fail so the feature has been removed until it can + be added properly. + +Despite us having broken lockstep, we are continuing to release new versions of +all Certbot components during releases for the time being, however, the only +package with changes other than its version number was: + +* certbot-dns-rfc2136 + +More details about these changes can be found on our GitHub repo. + ## 0.35.0 - 2019-06-05 ### Added diff --git a/certbot-dns-rfc2136/certbot_dns_rfc2136/__init__.py b/certbot-dns-rfc2136/certbot_dns_rfc2136/__init__.py index cebff2841d3..12b360959ff 100644 --- a/certbot-dns-rfc2136/certbot_dns_rfc2136/__init__.py +++ b/certbot-dns-rfc2136/certbot_dns_rfc2136/__init__.py @@ -21,8 +21,8 @@ ----------- Use of this plugin requires a configuration file containing the target DNS -server, optional authorative domain and optional port that supports RFC 2136 Dynamic Updates, -the name of the TSIG key, the TSIG key secret itself and the algorithm used if it's +server and optional port that supports RFC 2136 Dynamic Updates, the name +of the TSIG key, the TSIG key secret itself and the algorithm used if it's different to HMAC-MD5. .. code-block:: ini @@ -33,8 +33,6 @@ dns_rfc2136_server = 192.0.2.1 # Target DNS port dns_rfc2136_port = 53 - # Authorative domain (optional, will try to auto-detect if missing) - dns_rfc2136_base_domain = example.com # TSIG key name dns_rfc2136_name = keyname. # TSIG key secret diff --git a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py index 5db8c3020a3..2061374e0e8 100644 --- a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py +++ b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py @@ -79,33 +79,25 @@ def _cleanup(self, _domain, validation_name, validation): self._get_rfc2136_client().del_txt_record(validation_name, validation) def _get_rfc2136_client(self): - key = _RFC2136Key(self.credentials.conf('name'), - self.credentials.conf('secret'), - self.ALGORITHMS.get(self.credentials.conf('algorithm'), - dns.tsig.HMAC_MD5)) return _RFC2136Client(self.credentials.conf('server'), int(self.credentials.conf('port') or self.PORT), - key, - self.credentials.conf('base-domain')) + self.credentials.conf('name'), + self.credentials.conf('secret'), + self.ALGORITHMS.get(self.credentials.conf('algorithm'), + dns.tsig.HMAC_MD5)) -class _RFC2136Key(object): - def __init__(self, name, secret, algorithm): - self.name = name - self.secret = secret - self.algorithm = algorithm class _RFC2136Client(object): """ Encapsulates all communication with the target DNS server. 
""" - def __init__(self, server, port, base_domain, key): + def __init__(self, server, port, key_name, key_secret, key_algorithm): self.server = server self.port = port self.keyring = dns.tsigkeyring.from_text({ - key.name: key.secret + key_name: key_secret }) - self.algorithm = key.algorithm - self.base_domain = base_domain + self.algorithm = key_algorithm def add_txt_record(self, record_name, record_content, record_ttl): """ @@ -179,33 +171,23 @@ def del_txt_record(self, record_name, record_content): def _find_domain(self, record_name): """ - If 'base_domain' option is specified check if the requested domain matches this base domain - and return it. If not explicitly specified find the closest domain with an SOA record for - the given domain name. + Find the closest domain with an SOA record for a given domain name. - :param str record_name: The record name for which to find the base domain. + :param str record_name: The record name for which to find the closest SOA record. :returns: The domain, if found. :rtype: str :raises certbot.errors.PluginError: if no SOA record can be found. """ - if self.base_domain: - if not record_name.endswith(self.base_domain): - raise errors.PluginError('Requested domain {0} does not match specified base ' - 'domain {1}.' - .format(record_name, self.base_domain)) - else: - return self.base_domain - else: - domain_name_guesses = dns_common.base_domain_name_guesses(record_name) + domain_name_guesses = dns_common.base_domain_name_guesses(record_name) - # Loop through until we find an authoritative SOA record - for guess in domain_name_guesses: - if self._query_soa(guess): - return guess + # Loop through until we find an authoritative SOA record + for guess in domain_name_guesses: + if self._query_soa(guess): + return guess - raise errors.PluginError('Unable to determine base domain for {0} using names: {1}.' - .format(record_name, domain_name_guesses)) + raise errors.PluginError('Unable to determine base domain for {0} using names: {1}.' + .format(record_name, domain_name_guesses)) def _query_soa(self, domain_name): """ diff --git a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136_test.py b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136_test.py index bed3445b69e..d800f1ec7c2 100644 --- a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136_test.py +++ b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136_test.py @@ -73,12 +73,9 @@ def test_valid_algorithm_passes(self): class RFC2136ClientTest(unittest.TestCase): def setUp(self): - from certbot_dns_rfc2136.dns_rfc2136 import _RFC2136Client, _RFC2136Key + from certbot_dns_rfc2136.dns_rfc2136 import _RFC2136Client - self.rfc2136_client = _RFC2136Client(SERVER, - PORT, - None, - _RFC2136Key(NAME, SECRET, dns.tsig.HMAC_MD5)) + self.rfc2136_client = _RFC2136Client(SERVER, PORT, NAME, SECRET, dns.tsig.HMAC_MD5) @mock.patch("dns.query.tcp") def test_add_txt_record(self, query_mock): @@ -165,28 +162,6 @@ def test_find_domain_wraps_errors(self): self.rfc2136_client._find_domain, 'foo.bar.'+DOMAIN) - def test_find_domain_with_base(self): - # _query_soa | pylint: disable=protected-access - self.rfc2136_client._query_soa = mock.MagicMock(side_effect=[False, False, True]) - self.rfc2136_client.base_domain = 'bar.' + DOMAIN - - # _find_domain | pylint: disable=protected-access - domain = self.rfc2136_client._find_domain('foo.bar.' + DOMAIN) - - self.assertTrue(domain == 'bar.' 
+ DOMAIN) - - def test_find_domain_with_wrong_base(self): - - # _query_soa | pylint: disable=protected-access - self.rfc2136_client._query_soa = mock.MagicMock(side_effect=[False, False, True]) - self.rfc2136_client.base_domain = 'wrong.' + DOMAIN - - self.assertRaises( - errors.PluginError, - # _find_domain | pylint: disable=protected-access - self.rfc2136_client._find_domain, - 'foo.bar.' + DOMAIN) - @mock.patch("dns.query.udp") def test_query_soa_found(self, query_mock): query_mock.return_value = mock.MagicMock(answer=[mock.MagicMock()], flags=dns.flags.AA)
Adds #7142 to the point release branch.
https://api.github.com/repos/certbot/certbot/pulls/7143
2019-06-10T21:00:18Z
2019-06-10T21:13:00Z
2019-06-10T21:13:00Z
2019-06-10T21:13:03Z
2,154
certbot/certbot
2,334
[Lynda] Extract course description
diff --git a/youtube_dl/extractor/lynda.py b/youtube_dl/extractor/lynda.py index 86d47266f80..c1bca56788d 100644 --- a/youtube_dl/extractor/lynda.py +++ b/youtube_dl/extractor/lynda.py @@ -246,5 +246,6 @@ def _real_extract(self, url): % unaccessible_videos + self._ACCOUNT_CREDENTIALS_HINT) course_title = course.get('Title') + course_description = course.get('Description') - return self.playlist_result(entries, course_id, course_title) + return self.playlist_result(entries, course_id, course_title, course_description)
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/9747
2016-06-10T17:19:14Z
2016-06-10T19:34:58Z
2016-06-10T19:34:58Z
2016-06-10T23:06:01Z
157
ytdl-org/youtube-dl
49,898
Bloom Filter
diff --git a/data_structures/hashing/bloom_filter.py b/data_structures/hashing/bloom_filter.py new file mode 100644 index 000000000000..7fd0985bdc33 --- /dev/null +++ b/data_structures/hashing/bloom_filter.py @@ -0,0 +1,105 @@ +""" +See https://en.wikipedia.org/wiki/Bloom_filter + +The use of this data structure is to test membership in a set. +Compared to Python's built-in set() it is more space-efficient. +In the following example, only 8 bits of memory will be used: +>>> bloom = Bloom(size=8) + +Initially, the filter contains all zeros: +>>> bloom.bitstring +'00000000' + +When an element is added, two bits are set to 1 +since there are 2 hash functions in this implementation: +>>> "Titanic" in bloom +False +>>> bloom.add("Titanic") +>>> bloom.bitstring +'01100000' +>>> "Titanic" in bloom +True + +However, sometimes only one bit is added +because both hash functions return the same value +>>> bloom.add("Avatar") +>>> "Avatar" in bloom +True +>>> bloom.format_hash("Avatar") +'00000100' +>>> bloom.bitstring +'01100100' + +Not added elements should return False ... +>>> not_present_films = ("The Godfather", "Interstellar", "Parasite", "Pulp Fiction") +>>> { +... film: bloom.format_hash(film) for film in not_present_films +... } # doctest: +NORMALIZE_WHITESPACE +{'The Godfather': '00000101', + 'Interstellar': '00000011', + 'Parasite': '00010010', + 'Pulp Fiction': '10000100'} +>>> any(film in bloom for film in not_present_films) +False + +but sometimes there are false positives: +>>> "Ratatouille" in bloom +True +>>> bloom.format_hash("Ratatouille") +'01100000' + +The probability increases with the number of elements added. +The probability decreases with the number of bits in the bitarray. +>>> bloom.estimated_error_rate +0.140625 +>>> bloom.add("The Godfather") +>>> bloom.estimated_error_rate +0.25 +>>> bloom.bitstring +'01100101' +""" +from hashlib import md5, sha256 + +HASH_FUNCTIONS = (sha256, md5) + + +class Bloom: + def __init__(self, size: int = 8) -> None: + self.bitarray = 0b0 + self.size = size + + def add(self, value: str) -> None: + h = self.hash_(value) + self.bitarray |= h + + def exists(self, value: str) -> bool: + h = self.hash_(value) + return (h & self.bitarray) == h + + def __contains__(self, other: str) -> bool: + return self.exists(other) + + def format_bin(self, bitarray: int) -> str: + res = bin(bitarray)[2:] + return res.zfill(self.size) + + @property + def bitstring(self) -> str: + return self.format_bin(self.bitarray) + + def hash_(self, value: str) -> int: + res = 0b0 + for func in HASH_FUNCTIONS: + position = ( + int.from_bytes(func(value.encode()).digest(), "little") % self.size + ) + res |= 2**position + return res + + def format_hash(self, value: str) -> str: + return self.format_bin(self.hash_(value)) + + @property + def estimated_error_rate(self) -> float: + n_ones = bin(self.bitarray).count("1") + return (n_ones / self.size) ** len(HASH_FUNCTIONS)
### Describe your change: * [X] Add an algorithm? * [ ] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [X] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [X] This pull request is all my own work -- I have not plagiarized. * [X] I know that pull requests will not be merged if they fail the automated tests. * [X] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [X] All new Python files are placed inside an existing directory. * [X] All filenames are in all lowercase characters with no spaces or dashes. * [X] All functions and variable names follow Python naming conventions. * [X] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [X] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [X] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
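A quick usage sketch of the `Bloom` class added in this PR (behaviour taken from the doctests in the diff above; the exact bit patterns depend on the hash functions and the filter size, and the import assumes the repository root is on `sys.path`):

```python
# Usage sketch for the Bloom class from data_structures/hashing/bloom_filter.py,
# assuming the repository root is on sys.path.
from data_structures.hashing.bloom_filter import Bloom

bloom = Bloom(size=8)
bloom.add("Titanic")
bloom.add("Avatar")

print("Titanic" in bloom)          # True  -> added items are always found
print("The Godfather" in bloom)    # False here, though false positives are possible
print(bloom.estimated_error_rate)  # grows as more bits in the filter are set
```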
https://api.github.com/repos/TheAlgorithms/Python/pulls/8615
2023-04-06T12:13:59Z
2023-04-08T17:39:24Z
2023-04-08T17:39:24Z
2023-04-08T17:39:28Z
908
TheAlgorithms/Python
30,113
Tune transfer buffer size
diff --git a/src/common/state/ray_config.h b/src/common/state/ray_config.h index 2bd953c48c5f8..19507b8362d88 100644 --- a/src/common/state/ray_config.h +++ b/src/common/state/ray_config.h @@ -97,7 +97,7 @@ class RayConfig { local_scheduler_fetch_request_size_(10000), kill_worker_timeout_milliseconds_(100), manager_timeout_milliseconds_(1000), - buf_size_(4096), + buf_size_(80 * 1024), max_time_for_handler_milliseconds_(1000), size_limit_(100), num_elements_limit_(1000),
This PR is supposed to address one problem that we have seen while testing Ray's data plane, namely that data transfers are not using the full network bandwidth. This PR addresses the problem by resizing the transfer buffer size. The new value has been determined by running the following test on EC2: Two m5.4xlarge nodes were started, and `iperf` gave the following measurement for the network speed between them: ``` [ ID] Interval Transfer Bandwidth [ 3] 0.0- 0.9 sec 1.00 GBytes 9.71 Gbits/sec ``` For different values of the `buf_size`, I put an object into the object store by using a driver on one node via ``` In [4]: import numpy as np In [5]: a = (255 * np.random.random(100 * 1000 * 1000)).astype(np.uint8) In [6]: x = ray.put(a) In [7]: x.id() Out[7]: b'\x83\xfd:\x97\x1a\xa5\xff7t\xff(\n\xe0\x1e\xa6\xa5\xbe>\xd7\xe2' ``` and then getting it from a driver on the second node, via ``` In [6]: x = ray.local_scheduler.ObjectID(b'\x83\xfd:\x97\x1a\xa5\xff7t\xff(\n\xe0\x1e\xa6\xa5\xbe>\xd7\xe2') In [7]: %time ray.get(x) ``` The results are as follows (the buffer size is in multiples of the pagesize, which is 4KB): ``` 4 * 1024 * 1024: 8.3 Gbit/s (for 100MB and 1GB objects) 400 * 1024: 9.1 Gbit/s (for 100MB) , 9.4 Gbit/s (for 1GB) 40 * 1024: 8.3 Gbit/s (for 100MB and 1GB objects) 4 * 1024: 6.9 Gbit/s (for 100MB) ``` So a suitably chosen buffer size attains throughput close to the maximum measured by iperf. cc @robertnishihara @elibol @atumanov @ericl
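For anyone reproducing the numbers above, the throughput figures follow directly from the object size and the wall-clock time of the `ray.get` call. A back-of-the-envelope helper (illustrative only; `throughput_gbit_per_s` is a hypothetical name and the elapsed time below is made up for the example) might look like this:

```python
def throughput_gbit_per_s(object_size_bytes: int, elapsed_s: float) -> float:
    """Convert an object size and transfer time into Gbit/s."""
    return object_size_bytes * 8 / elapsed_s / 1e9


# A 100 MB object fetched in ~0.088 s (hypothetical timing) -> roughly 9.1 Gbit/s,
# in line with the 400 * 1024 buffer-size row in the table above.
print(throughput_gbit_per_s(100 * 1000 * 1000, 0.088))
```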
https://api.github.com/repos/ray-project/ray/pulls/1363
2017-12-21T08:14:43Z
2018-02-09T22:56:37Z
2018-02-09T22:56:37Z
2018-02-09T22:57:03Z
150
ray-project/ray
19,567
zb: use string in safeTicker ccxt/ccxt#11379
diff --git a/js/zb.js b/js/zb.js index cf25ab309606..beafb3bc7867 100644 --- a/js/zb.js +++ b/js/zb.js @@ -618,29 +618,29 @@ module.exports = class zb extends Exchange { // } // const timestamp = this.safeInteger (ticker, 'date', this.milliseconds ()); - const last = this.safeNumber (ticker, 'last'); + const last = this.safeString (ticker, 'last'); return this.safeTicker ({ 'symbol': this.safeSymbol (undefined, market), 'timestamp': timestamp, 'datetime': this.iso8601 (timestamp), - 'high': this.safeNumber (ticker, 'high'), - 'low': this.safeNumber (ticker, 'low'), - 'bid': this.safeNumber (ticker, 'buy'), + 'high': this.safeString (ticker, 'high'), + 'low': this.safeString (ticker, 'low'), + 'bid': this.safeString (ticker, 'buy'), 'bidVolume': undefined, - 'ask': this.safeNumber (ticker, 'sell'), + 'ask': this.safeString (ticker, 'sell'), 'askVolume': undefined, 'vwap': undefined, - 'open': this.safeNumber (ticker, 'open'), + 'open': this.safeString (ticker, 'open'), 'close': last, 'last': last, 'previousClose': undefined, 'change': undefined, 'percentage': undefined, 'average': undefined, - 'baseVolume': this.safeNumber (ticker, 'vol'), + 'baseVolume': this.safeString (ticker, 'vol'), 'quoteVolume': undefined, 'info': ticker, - }, market); + }, market, false); } parseOHLCV (ohlcv, market = undefined) {
ccxt/ccxt#11379
https://api.github.com/repos/ccxt/ccxt/pulls/11564
2022-01-20T10:52:03Z
2022-01-22T10:25:20Z
2022-01-22T10:25:20Z
2022-01-22T10:25:20Z
420
ccxt/ccxt
13,485
Disable unused dependencies
diff --git a/requirements.txt b/requirements.txt index 25dba6f5a250..1e64818bbb6a 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,19 +1,19 @@ beautifulsoup4 fake_useragent imageio -keras +keras ; python_version < '3.12' lxml matplotlib numpy opencv-python pandas pillow -projectq +# projectq # uncomment once quantum/quantum_random.py is fixed qiskit ; python_version < '3.12' qiskit-aer ; python_version < '3.12' requests rich -scikit-fuzzy +# scikit-fuzzy # uncomment once fuzzy_logic/fuzzy_operations.py is fixed scikit-learn statsmodels sympy @@ -21,4 +21,4 @@ tensorflow ; python_version < '3.12' texttable tweepy xgboost -yulewalker +# yulewalker # uncomment once audio_filters/equal_loudness_filter.py is fixed
### Describe your change: Disable unused dependencies in `requirements.txt` to try to reduce build times: - Keras is a TensorFlow frontend, so I've also marked it as `python_version < '3.12'`. There's no reason to install Keras in every build when TensorFlow can't be used. - projectq, scikit-fuzzy, and yulewalker are only used by broken (and thus disabled) files, so I've commented them out. If/when those broken files are fixed, we can uncomment them. * [ ] Add an algorithm? * [ ] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
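For reference, the `; python_version < '3.12'` suffixes used in the diff are standard PEP 508 environment markers, which pip evaluates against the running interpreter when deciding whether to install a requirement. A small sketch of checking such a marker programmatically, using the `packaging` library (assumed to be installed separately), might look like this:

```python
# Sketch: evaluating the PEP 508 environment marker used in requirements.txt
# with the `packaging` library.
from packaging.markers import Marker

marker = Marker("python_version < '3.12'")

# True on Python 3.11 and older, False on 3.12+, so pip would skip
# keras/tensorflow/qiskit on 3.12 just as the updated requirements intend.
print(marker.evaluate())
```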
https://api.github.com/repos/TheAlgorithms/Python/pulls/10467
2023-10-14T19:56:16Z
2023-10-14T20:08:52Z
2023-10-14T20:08:52Z
2023-10-14T22:14:16Z
234
TheAlgorithms/Python
29,634
Backport PR #54750 on branch 2.1.x (Revert deprecation of con as keyword only arg)
diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst index e3f18067e354d..f5758a079b1b5 100644 --- a/doc/source/whatsnew/v2.1.0.rst +++ b/doc/source/whatsnew/v2.1.0.rst @@ -586,7 +586,7 @@ Other Deprecations - Deprecated the use of non-supported datetime64 and timedelta64 resolutions with :func:`pandas.array`. Supported resolutions are: "s", "ms", "us", "ns" resolutions (:issue:`53058`) - Deprecated values ``"pad"``, ``"ffill"``, ``"bfill"``, ``"backfill"`` for :meth:`Series.interpolate` and :meth:`DataFrame.interpolate`, use ``obj.ffill()`` or ``obj.bfill()`` instead (:issue:`53581`) - Deprecated the behavior of :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`Series.argmax`, :meth:`Series.argmin` with either all-NAs and ``skipna=True`` or any-NAs and ``skipna=False`` returning -1; in a future version this will raise ``ValueError`` (:issue:`33941`, :issue:`33942`) -- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_sql` except ``name`` (:issue:`54229`) +- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_sql` except ``name`` and ``con`` (:issue:`54229`) .. --------------------------------------------------------------------------- .. _whatsnew_210.performance: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 09ebd9387220f..b4281f6959997 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -2796,7 +2796,7 @@ def to_hdf( @final @deprecate_nonkeyword_arguments( - version="3.0", allowed_args=["self", "name"], name="to_sql" + version="3.0", allowed_args=["self", "name", "con"], name="to_sql" ) def to_sql( self, diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py index 2f446e6b8c81d..9ec0ba0b12a76 100644 --- a/pandas/tests/io/test_sql.py +++ b/pandas/tests/io/test_sql.py @@ -2849,13 +2849,14 @@ def setup_driver(cls): def test_keyword_deprecation(self): # GH 54397 msg = ( - "tarting with pandas version 3.0 all arguments of to_sql except for the " - "argument 'name' will be keyword-only." + "Starting with pandas version 3.0 all arguments of to_sql except for the " + "arguments 'name' and 'con' will be keyword-only." ) df = DataFrame([{"A": 1, "B": 2, "C": 3}, {"A": 1, "B": 2, "C": 3}]) + df.to_sql("example", self.conn) with tm.assert_produces_warning(FutureWarning, match=msg): - df.to_sql("example", self.conn) + df.to_sql("example", self.conn, None, if_exists="replace") def test_default_type_conversion(self): df = sql.read_sql_table("types", self.conn)
Backport PR #54750: Revert deprecation of con as keyword only arg
https://api.github.com/repos/pandas-dev/pandas/pulls/54766
2023-08-26T07:57:48Z
2023-08-26T10:40:15Z
2023-08-26T10:40:15Z
2023-08-26T10:40:16Z
794
pandas-dev/pandas
45,008
[3.6] bpo-5945: Improve mappings and sequences C API docs. (GH-7029).
diff --git a/Doc/c-api/mapping.rst b/Doc/c-api/mapping.rst index a71e94283776e9..c16fcf4439d4da 100644 --- a/Doc/c-api/mapping.rst +++ b/Doc/c-api/mapping.rst @@ -5,11 +5,17 @@ Mapping Protocol ================ +See also :c:func:`PyObject_GetItem`, :c:func:`PyObject_SetItem` and +:c:func:`PyObject_DelItem`. + .. c:function:: int PyMapping_Check(PyObject *o) - Return ``1`` if the object provides mapping protocol, and ``0`` otherwise. This - function always succeeds. + Return ``1`` if the object provides mapping protocol or supports slicing, + and ``0`` otherwise. Note that it returns ``1`` for Python classes with + a :meth:`__getitem__` method since in general case it is impossible to + determine what the type of keys it supports. This function always + succeeds. .. c:function:: Py_ssize_t PyMapping_Size(PyObject *o) @@ -17,35 +23,49 @@ Mapping Protocol .. index:: builtin: len - Returns the number of keys in object *o* on success, and ``-1`` on failure. For - objects that do not provide mapping protocol, this is equivalent to the Python - expression ``len(o)``. + Returns the number of keys in object *o* on success, and ``-1`` on failure. + This is equivalent to the Python expression ``len(o)``. -.. c:function:: int PyMapping_DelItemString(PyObject *o, const char *key) +.. c:function:: PyObject* PyMapping_GetItemString(PyObject *o, const char *key) + + Return element of *o* corresponding to the string *key* or *NULL* on failure. + This is the equivalent of the Python expression ``o[key]``. + See also :c:func:`PyObject_GetItem`. + - Remove the mapping for object *key* from the object *o*. Return ``-1`` on - failure. This is equivalent to the Python statement ``del o[key]``. +.. c:function:: int PyMapping_SetItemString(PyObject *o, const char *key, PyObject *v) + + Map the string *key* to the value *v* in object *o*. Returns ``-1`` on + failure. This is the equivalent of the Python statement ``o[key] = v``. + See also :c:func:`PyObject_SetItem`. .. c:function:: int PyMapping_DelItem(PyObject *o, PyObject *key) - Remove the mapping for object *key* from the object *o*. Return ``-1`` on - failure. This is equivalent to the Python statement ``del o[key]``. + Remove the mapping for the object *key* from the object *o*. Return ``-1`` + on failure. This is equivalent to the Python statement ``del o[key]``. + This is an alias of :c:func:`PyObject_DelItem`. -.. c:function:: int PyMapping_HasKeyString(PyObject *o, const char *key) +.. c:function:: int PyMapping_DelItemString(PyObject *o, const char *key) - On success, return ``1`` if the mapping object has the key *key* and ``0`` - otherwise. This is equivalent to the Python expression ``key in o``. - This function always succeeds. + Remove the mapping for the string *key* from the object *o*. Return ``-1`` + on failure. This is equivalent to the Python statement ``del o[key]``. .. c:function:: int PyMapping_HasKey(PyObject *o, PyObject *key) - Return ``1`` if the mapping object has the key *key* and ``0`` otherwise. This - is equivalent to the Python expression ``key in o``. This function always - succeeds. + Return ``1`` if the mapping object has the key *key* and ``0`` otherwise. + This is equivalent to the Python expression ``key in o``. + This function always succeeds. + + +.. c:function:: int PyMapping_HasKeyString(PyObject *o, const char *key) + + Return ``1`` if the mapping object has the key *key* and ``0`` otherwise. + This is equivalent to the Python expression ``key in o``. + This function always succeeds. .. 
c:function:: PyObject* PyMapping_Keys(PyObject *o) @@ -64,15 +84,3 @@ Mapping Protocol On success, return a list or tuple of the items in object *o*, where each item is a tuple containing a key-value pair. On failure, return *NULL*. - - -.. c:function:: PyObject* PyMapping_GetItemString(PyObject *o, const char *key) - - Return element of *o* corresponding to the object *key* or *NULL* on failure. - This is the equivalent of the Python expression ``o[key]``. - - -.. c:function:: int PyMapping_SetItemString(PyObject *o, const char *key, PyObject *v) - - Map the object *key* to the value *v* in object *o*. Returns ``-1`` on failure. - This is the equivalent of the Python statement ``o[key] = v``. diff --git a/Doc/c-api/object.rst b/Doc/c-api/object.rst index b761c808fcb7f6..8692a2c14ca20f 100644 --- a/Doc/c-api/object.rst +++ b/Doc/c-api/object.rst @@ -360,8 +360,8 @@ Object Protocol parameters must be non-*NULL*. -.. c:function:: Py_ssize_t PyObject_Length(PyObject *o) - Py_ssize_t PyObject_Size(PyObject *o) +.. c:function:: Py_ssize_t PyObject_Size(PyObject *o) + Py_ssize_t PyObject_Length(PyObject *o) .. index:: builtin: len @@ -395,8 +395,8 @@ Object Protocol .. c:function:: int PyObject_DelItem(PyObject *o, PyObject *key) - Delete the mapping for *key* from *o*. Returns ``-1`` on failure. This is the - equivalent of the Python statement ``del o[key]``. + Remove the mapping for the object *key* from the object *o*. Return ``-1`` + on failure. This is equivalent to the Python statement ``del o[key]``. .. c:function:: PyObject* PyObject_Dir(PyObject *o) diff --git a/Doc/c-api/sequence.rst b/Doc/c-api/sequence.rst index 81f8557ea6e665..6d22f35e22b1f2 100644 --- a/Doc/c-api/sequence.rst +++ b/Doc/c-api/sequence.rst @@ -9,7 +9,10 @@ Sequence Protocol .. c:function:: int PySequence_Check(PyObject *o) Return ``1`` if the object provides sequence protocol, and ``0`` otherwise. - This function always succeeds. + Note that it returns ``1`` for Python classes with a :meth:`__getitem__` + method unless they are :class:`dict` subclasses since in general case it + is impossible to determine what the type of keys it supports. This + function always succeeds. .. c:function:: Py_ssize_t PySequence_Size(PyObject *o) @@ -119,18 +122,27 @@ Sequence Protocol .. index:: builtin: tuple - Return a tuple object with the same contents as the arbitrary sequence *o* or - *NULL* on failure. If *o* is a tuple, a new reference will be returned, + Return a tuple object with the same contents as the sequence or iterable *o*, + or *NULL* on failure. If *o* is a tuple, a new reference will be returned, otherwise a tuple will be constructed with the appropriate contents. This is equivalent to the Python expression ``tuple(o)``. .. c:function:: PyObject* PySequence_Fast(PyObject *o, const char *m) - Return the sequence *o* as a list, unless it is already a tuple or list, in + Return the sequence or iterable *o* as a list, unless it is already a tuple or list, in which case *o* is returned. Use :c:func:`PySequence_Fast_GET_ITEM` to access the members of the result. Returns *NULL* on failure. If the object is not - a sequence, raises :exc:`TypeError` with *m* as the message text. + a sequence or iterable, raises :exc:`TypeError` with *m* as the message text. + + +.. c:function:: Py_ssize_t PySequence_Fast_GET_SIZE(PyObject *o) + + Returns the length of *o*, assuming that *o* was returned by + :c:func:`PySequence_Fast` and that *o* is not *NULL*. 
The size can also be + gotten by calling :c:func:`PySequence_Size` on *o*, but + :c:func:`PySequence_Fast_GET_SIZE` is faster because it can assume *o* is a list + or tuple. .. c:function:: PyObject* PySequence_Fast_GET_ITEM(PyObject *o, Py_ssize_t i) @@ -155,12 +167,3 @@ Sequence Protocol :c:func:`PySequence_GetItem` but without checking that :c:func:`PySequence_Check` on *o* is true and without adjustment for negative indices. - - -.. c:function:: Py_ssize_t PySequence_Fast_GET_SIZE(PyObject *o) - - Returns the length of *o*, assuming that *o* was returned by - :c:func:`PySequence_Fast` and that *o* is not *NULL*. The size can also be - gotten by calling :c:func:`PySequence_Size` on *o*, but - :c:func:`PySequence_Fast_GET_SIZE` is faster because it can assume *o* is a list - or tuple. diff --git a/Doc/c-api/typeobj.rst b/Doc/c-api/typeobj.rst index 0b4577f5b950a9..76515fd2c430e3 100644 --- a/Doc/c-api/typeobj.rst +++ b/Doc/c-api/typeobj.rst @@ -1151,21 +1151,24 @@ Mapping Object Structures .. c:member:: lenfunc PyMappingMethods.mp_length - This function is used by :c:func:`PyMapping_Length` and + This function is used by :c:func:`PyMapping_Size` and :c:func:`PyObject_Size`, and has the same signature. This slot may be set to *NULL* if the object has no defined length. .. c:member:: binaryfunc PyMappingMethods.mp_subscript - This function is used by :c:func:`PyObject_GetItem` and has the same - signature. This slot must be filled for the :c:func:`PyMapping_Check` - function to return ``1``, it can be *NULL* otherwise. + This function is used by :c:func:`PyObject_GetItem` and + :c:func:`PySequence_GetSlice`, and has the same signature as + :c:func:`!PyObject_GetItem`. This slot must be filled for the + :c:func:`PyMapping_Check` function to return ``1``, it can be *NULL* + otherwise. .. c:member:: objobjargproc PyMappingMethods.mp_ass_subscript - This function is used by :c:func:`PyObject_SetItem` and - :c:func:`PyObject_DelItem`. It has the same signature as - :c:func:`PyObject_SetItem`, but *v* can also be set to *NULL* to delete + This function is used by :c:func:`PyObject_SetItem`, + :c:func:`PyObject_DelItem`, :c:func:`PyObject_SetSlice` and + :c:func:`PyObject_DelSlice`. It has the same signature as + :c:func:`!PyObject_SetItem`, but *v* can also be set to *NULL* to delete an item. If this slot is *NULL*, the object does not support item assignment and deletion. @@ -1185,26 +1188,29 @@ Sequence Object Structures .. c:member:: lenfunc PySequenceMethods.sq_length - This function is used by :c:func:`PySequence_Size` and :c:func:`PyObject_Size`, - and has the same signature. + This function is used by :c:func:`PySequence_Size` and + :c:func:`PyObject_Size`, and has the same signature. It is also used for + handling negative indices via the :c:member:`~PySequenceMethods.sq_item` + and the :c:member:`~PySequenceMethods.sq_ass_item` slots. .. c:member:: binaryfunc PySequenceMethods.sq_concat This function is used by :c:func:`PySequence_Concat` and has the same signature. It is also used by the ``+`` operator, after trying the numeric - addition via the :c:member:`~PyTypeObject.tp_as_number.nb_add` slot. + addition via the :c:member:`~PyNumberMethods.nb_add` slot. .. c:member:: ssizeargfunc PySequenceMethods.sq_repeat This function is used by :c:func:`PySequence_Repeat` and has the same signature. It is also used by the ``*`` operator, after trying numeric - multiplication via the :c:member:`~PyTypeObject.tp_as_number.nb_multiply` - slot. 
+ multiplication via the :c:member:`~PyNumberMethods.nb_multiply` slot. .. c:member:: ssizeargfunc PySequenceMethods.sq_item This function is used by :c:func:`PySequence_GetItem` and has the same - signature. This slot must be filled for the :c:func:`PySequence_Check` + signature. It is also used by :c:func:`PyObject_GetItem`, after trying + the subscription via the :c:member:`~PyMappingMethods.mp_subscript` slot. + This slot must be filled for the :c:func:`PySequence_Check` function to return ``1``, it can be *NULL* otherwise. Negative indexes are handled as follows: if the :attr:`sq_length` slot is @@ -1215,28 +1221,36 @@ Sequence Object Structures .. c:member:: ssizeobjargproc PySequenceMethods.sq_ass_item This function is used by :c:func:`PySequence_SetItem` and has the same - signature. This slot may be left to *NULL* if the object does not support + signature. It is also used by :c:func:`PyObject_SetItem` and + :c:func:`PyObject_DelItem`, after trying the item assignment and deletion + via the :c:member:`~PyMappingMethods.mp_ass_subscript` slot. + This slot may be left to *NULL* if the object does not support item assignment and deletion. .. c:member:: objobjproc PySequenceMethods.sq_contains This function may be used by :c:func:`PySequence_Contains` and has the same signature. This slot may be left to *NULL*, in this case - :c:func:`PySequence_Contains` simply traverses the sequence until it finds a - match. + :c:func:`!PySequence_Contains` simply traverses the sequence until it + finds a match. .. c:member:: binaryfunc PySequenceMethods.sq_inplace_concat This function is used by :c:func:`PySequence_InPlaceConcat` and has the same - signature. It should modify its first operand, and return it. + signature. It should modify its first operand, and return it. This slot + may be left to *NULL*, in this case :c:func:`!PySequence_InPlaceConcat` + will fall back to :c:func:`PySequence_Concat`. It is also used by the + augmented assignment ``+=``, after trying numeric inplace addition + via the :c:member:`~PyNumberMethods.nb_inplace_add` slot. .. c:member:: ssizeargfunc PySequenceMethods.sq_inplace_repeat This function is used by :c:func:`PySequence_InPlaceRepeat` and has the same - signature. It should modify its first operand, and return it. - -.. XXX need to explain precedence between mapping and sequence -.. XXX explains when to implement the sq_inplace_* slots + signature. It should modify its first operand, and return it. This slot + may be left to *NULL*, in this case :c:func:`!PySequence_InPlaceRepeat` + will fall back to :c:func:`PySequence_Repeat`. It is also used by the + augmented assignment ``*=``, after trying numeric inplace multiplication + via the :c:member:`~PyNumberMethods.nb_inplace_multiply` slot. .. _buffer-structs:
(cherry picked from commit f5b1183610d5888db3bbd639b1a0c945dbd8f8dd) <!-- issue-number: bpo-5945 --> https://bugs.python.org/issue5945 <!-- /issue-number -->
https://api.github.com/repos/python/cpython/pulls/7049
2018-05-22T10:27:21Z
2018-05-22T11:54:14Z
2018-05-22T11:54:14Z
2018-05-22T11:54:18Z
3,940
python/cpython
4,422
Fix gradient checkpointing bug in trocr
diff --git a/src/transformers/models/trocr/modeling_trocr.py b/src/transformers/models/trocr/modeling_trocr.py index 5eda7479b4d11..e6853d0c5a8eb 100644 --- a/src/transformers/models/trocr/modeling_trocr.py +++ b/src/transformers/models/trocr/modeling_trocr.py @@ -664,6 +664,13 @@ def forward( # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache = True` is incompatible with gradient checkpointing. Setting `use_cache = False`..." + ) + use_cache = False + # decoder layers all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None @@ -689,12 +696,6 @@ def forward( past_key_value = past_key_values[idx] if past_key_values is not None else None if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning( - "`use_cache = True` is incompatible with gradient checkpointing. Setting `use_cache =" - " False`..." - ) - use_cache = False def create_custom_forward(module): def custom_forward(*inputs): diff --git a/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py b/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py index ad249f0835b1d..e96e20f896956 100644 --- a/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py +++ b/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py @@ -478,6 +478,12 @@ def forward( output_hidden_states=False, return_dict=True, ): + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." + ) + use_cache = False all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None @@ -491,11 +497,6 @@ def forward( past_key_value = past_key_values[i] if past_key_values is not None else None if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False def create_custom_forward(module): def custom_forward(*inputs):
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`. Fixes issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
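As a rough, illustrative sketch (not the actual modeling code) of the control-flow change: the `use_cache` / gradient-checkpointing conflict is now resolved once, before the decoder-layer loop, rather than being re-checked and warned about inside every iteration.

```python
import warnings

def run_decoder(num_layers, use_cache, gradient_checkpointing, training):
    # Resolve the incompatibility a single time, up front (as the fix does),
    # instead of inside the per-layer loop.
    if gradient_checkpointing and training and use_cache:
        warnings.warn("use_cache=True is incompatible with gradient checkpointing; "
                      "setting use_cache=False")
        use_cache = False

    past_key_values = [] if use_cache else None
    for _layer_idx in range(num_layers):
        # ... the real model runs each (optionally checkpointed) layer here ...
        if use_cache:
            past_key_values.append(("key", "value"))  # placeholder cache entry
    return past_key_values

print(run_decoder(2, use_cache=True, gradient_checkpointing=True, training=True))
```

`run_decoder`, its arguments and the placeholder cache entries are all hypothetical; they only mirror the shape of the change shown in the diff above.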
https://api.github.com/repos/huggingface/transformers/pulls/22126
2023-03-13T12:51:36Z
2023-03-13T14:45:48Z
2023-03-13T14:45:48Z
2023-03-13T14:48:26Z
700
huggingface/transformers
12,546
Remove Troposphere
diff --git a/README.md b/README.md index ddb489f187..313518e760 100644 --- a/README.md +++ b/README.md @@ -1767,7 +1767,6 @@ API | Description | Auth | HTTPS | CORS | | [SimpleWeather](https://english.api.rakuten.net/mxrck/api/simple-weather/endpoints) | Simple tool for get current weather | `apiKey` | Yes | Yes | | [Storm Glass](https://stormglass.io/) | Global marine weather from multiple sources | `apiKey` | Yes | Yes | | [Tomorrow](https://docs.tomorrow.io) | Weather API Powered by Proprietary Technology | `apiKey` | Yes | Unknown | -| [Troposphere](https://www.troposphere.io/developer) | Global weather and climate data | `apiKey` | Yes | Yes | | [US Weather](https://www.weather.gov/documentation/services-web-api) | US National Weather Service | No | Yes | Yes | | [Visual Crossing](https://www.visualcrossing.com/weather-api) | Global historical and weather forecast data | `apiKey` | Yes | Yes | | [weather-api](https://github.com/robertoduessmann/weather-api) | A RESTful free API to check the weather | No | Yes | No |
Removed the Troposphere weather API because the service is no longer active.
https://api.github.com/repos/public-apis/public-apis/pulls/2993
2022-01-06T12:08:44Z
2022-01-07T08:34:51Z
2022-01-07T08:34:51Z
2022-01-07T08:34:51Z
281
public-apis/public-apis
35,392
🌐 Add Chinese translation for `docs/zh/docs/advanced/security/index.md`
diff --git a/docs/zh/docs/advanced/security/index.md b/docs/zh/docs/advanced/security/index.md new file mode 100644 index 0000000000000..962523c09755d --- /dev/null +++ b/docs/zh/docs/advanced/security/index.md @@ -0,0 +1,16 @@ +# 高级安全 - 介绍 + +## 附加特性 + +除 [教程 - 用户指南: 安全性](../../tutorial/security/){.internal-link target=_blank} 中涵盖的功能之外,还有一些额外的功能来处理安全性. + +!!! tip "小贴士" + 接下来的章节 **并不一定是 "高级的"**. + + 而且对于你的使用场景来说,解决方案很可能就在其中。 + +## 先阅读教程 + +接下来的部分假设你已经阅读了主要的 [教程 - 用户指南: 安全性](../../tutorial/security/){.internal-link target=_blank}. + +它们都基于相同的概念,但支持一些额外的功能. diff --git a/docs/zh/mkdocs.yml b/docs/zh/mkdocs.yml index 522c83766feff..140e942cdb37b 100644 --- a/docs/zh/mkdocs.yml +++ b/docs/zh/mkdocs.yml @@ -120,6 +120,8 @@ nav: - advanced/response-change-status-code.md - advanced/response-headers.md - advanced/wsgi.md + - 高级安全: + - advanced/security/index.md - contributing.md - help-fastapi.md - benchmarks.md
as title.
https://api.github.com/repos/tiangolo/fastapi/pulls/9666
2023-06-12T12:32:42Z
2023-06-22T16:19:49Z
2023-06-22T16:19:49Z
2023-06-22T16:19:50Z
370
tiangolo/fastapi
23,536
Fix resuming when downloading in chunked mode
diff --git a/src/you_get/common.py b/src/you_get/common.py index 2e4edef5b9..41d67cfc16 100755 --- a/src/you_get/common.py +++ b/src/you_get/common.py @@ -629,10 +629,12 @@ def url_save( if refer is not None: tmp_headers['Referer'] = refer if type(url) is list: - file_size = urls_size(url, faker=faker, headers=tmp_headers) + chunk_sizes = [url_size(url, faker=faker, headers=tmp_headers) for url in url] + file_size = sum(chunk_sizes) is_chunked, urls = True, url else: file_size = url_size(url, faker=faker, headers=tmp_headers) + chunk_sizes = [file_size] is_chunked, urls = False, [url] continue_renameing = True @@ -696,9 +698,13 @@ def numreturn(a): else: open_mode = 'wb' - for url in urls: + chunk_start = 0 + chunk_end = 0 + for i, url in enumerate(urls): received_chunk = 0 - if received < file_size: + chunk_start += 0 if i == 0 else chunk_sizes[i - 1] + chunk_end += chunk_sizes[i] + if received < file_size and received < chunk_end: if faker: tmp_headers = fake_headers ''' @@ -708,8 +714,9 @@ def numreturn(a): else: headers = {} ''' - if received and not is_chunked: # only request a range when not chunked - tmp_headers['Range'] = 'bytes=' + str(received) + '-' + if received: + # chunk_start will always be 0 if not chunked + tmp_headers['Range'] = 'bytes=' + str(received - chunk_start) + '-' if refer: tmp_headers['Referer'] = refer @@ -757,8 +764,7 @@ def numreturn(a): elif not is_chunked and received == file_size: # Download finished break # Unexpected termination. Retry request - if not is_chunked: # when - tmp_headers['Range'] = 'bytes=' + str(received) + '-' + tmp_headers['Range'] = 'bytes=' + str(received - chunk_start) + '-' response = urlopen_with_retry( request.Request(url, headers=tmp_headers) )
When `url_save` is passed a list of urls (`is_chunked = True`), and the download is aborted, then the next time that you-get resumes the download, it will append the video data from beginning to the `.download` file, because it does not send a correct `Range` header when resuming. Several "exceeding 100%" issues might be related to this: #2703, #2768 Steps to reproduce: 1. `you-get "https://www.youtube.com/watch?v=Y8Tko2YC5hA"` 2. Press `Ctrl-C` before the download is finished 3. Re-start the `you-get` command 3. The downloaded size will eventually exceed the expected file size and some merging error will appear <details> <summary>The transcript for the steps</summary> ``` ~/test > you-get "https://www.youtube.com/watch?v=Y8Tko2YC5hA" site: YouTube title: What is Python and Why You Must Learn It in [2019] stream: - itag: 248 container: webm quality: 1920x1080 (1080p) size: 17.3 MiB (18168269 bytes) # download-with: you-get --itag=248 [URL] Downloading What is Python and Why You Must Learn It in (2019).webm ... 53.4% ( 9.2/ 17.3MB) ├███████████████████████████████████████████████████████████████████───────────────────────────────────────────────────────────┤[1/2] 13 MB/s^C¶ ~/test (1) > you-get "https://www.youtube.com/watch?v=Y8Tko2YC5hA" site: YouTube title: What is Python and Why You Must Learn It in [2019] stream: - itag: 248 container: webm quality: 1920x1080 (1080p) size: 17.3 MiB (18168269 bytes) # download-with: you-get --itag=248 [URL] Downloading What is Python and Why You Must Learn It in (2019).webm ... 100% ( 23.4/ 17.3MB) ├██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████┤[2/2] 11 MB/s Merging video parts... Merging without re-encode failed. Try again re-encoding audio... Merged into What is Python and Why You Must Learn It in (2019).webm Saving What is Python and Why You Must Learn It in (2019).en.srt ... Done. ``` </details> Basically, the idea of this pull request is to maintain the starting and ending offset for each chunk. And when resuming, it skips downloaded chunks by comparing the file size and ending offsets, and computes the correct `Range` for the first unfinished chunk.
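A minimal standalone sketch of that resume logic (the helper and its names are hypothetical, not the actual `url_save` internals): given the per-chunk sizes and the bytes already on disk, fully downloaded chunks are skipped and the `Range` offset for the first unfinished chunk is computed relative to that chunk's start.

```python
def resume_plan(chunk_sizes, received):
    """Yield (chunk_index, range_header) for every chunk still to download."""
    chunk_start = 0
    for i, size in enumerate(chunk_sizes):
        chunk_end = chunk_start + size
        if received < chunk_end:                   # this chunk is not finished yet
            offset = max(received - chunk_start, 0)
            yield i, 'bytes=%d-' % offset          # Range is relative to the chunk
        chunk_start = chunk_end

print(list(resume_plan([100, 200, 300], received=150)))
# [(1, 'bytes=50-'), (2, 'bytes=0-')] -- chunk 0 is skipped, chunk 1 resumes at byte 50
```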
https://api.github.com/repos/soimort/you-get/pulls/2804
2020-05-13T23:11:01Z
2020-07-18T20:53:53Z
2020-07-18T20:53:53Z
2020-07-18T20:53:53Z
568
soimort/you-get
20,998
Typo fixed in C.165
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index eef7534aa..e4aeda907 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -5971,7 +5971,7 @@ For example, the general `swap()` will copy the elements of two `vector`s being void f1(N::X& a, N::X& b) { - std::swap(a,b); // propably not what we we wanted: calls std::swap() + std::swap(a,b); // propably not what we wanted: calls std::swap() } The `std::swap()` in `f1()` does exactly what we asked it to do: it calls the `swap()` in namespace `std`.
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/317
2015-10-10T21:21:51Z
2015-10-12T11:22:53Z
2015-10-12T11:22:53Z
2016-10-04T01:24:23Z
185
isocpp/CppCoreGuidelines
16,042
Update CppCoreGuidelines.md
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index e5b15cb60..bf7c8503d 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -848,7 +848,7 @@ The spurious definition of copy operations disables move semantics so that the r The use of `new` and `delete` for `buf` is redundant; if we really needed a local string, we should use a local `string`. There are several more performance bugs and gratuitous complication. -**Note**: An individual example of waste is rarely significant, and where it is significant, it is typically easily eliminated by and expert. +**Note**: An individual example of waste is rarely significant, and where it is significant, it is typically easily eliminated by an expert. However, waste spread liberally across a code base can easily be significant and experts are not always as available as we would like. The aim of this rule (and the more specific rules that supports it) is to eliminate most waste related to the use of C++ before it happens. After that, we can look at waste related to algorithms and requirements, but that is beyond the scope of these guidelines.
Fix a typo in P.9. I didn't see the need to create a ticket just for one typo.
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/60
2015-09-22T01:34:32Z
2015-09-22T10:18:01Z
2015-09-22T10:18:01Z
2015-09-22T10:18:16Z
262
isocpp/CppCoreGuidelines
15,522
Formatted Model Summary
diff --git a/models/ModelBase.py b/models/ModelBase.py index f841ca6ad..e7ad843bc 100644 --- a/models/ModelBase.py +++ b/models/ModelBase.py @@ -231,36 +231,54 @@ def __init__(self, model_path, training_data_src_path=None, training_data_dst_pa else: self.sample_for_preview = self.generate_next_sample() self.last_sample = self.sample_for_preview + + ###Generate text summary of model hyperparameters + #Find the longest key name and value string. Used as column widths. + width_name = max([len(k) for k in self.options.keys()] + [17]) + 1 # Single space buffer to left edge. Minimum of 17, the length of the longest static string used "Current iteration" + width_value = max([len(str(x)) for x in self.options.values()] + [len(str(self.iter)), len(self.get_model_name())]) + 1 # Single space buffer to right edge + if not self.device_config.cpu_only: #Check length of GPU names + width_value = max([len(nnlib.device.getDeviceName(idx))+1 for idx in self.device_config.gpu_idxs] + [width_value]) + width_total = width_name + width_value + 2 #Plus 2 for ": " + model_summary_text = [] - - model_summary_text += ["===== Model summary ====="] - model_summary_text += ["== Model name: " + self.get_model_name()] - model_summary_text += ["=="] - model_summary_text += ["== Current iteration: " + str(self.iter)] - model_summary_text += ["=="] - model_summary_text += ["== Model options:"] + model_summary_text += [f'=={" Model Summary ":=^{width_total}}=='] # Model/status summary + model_summary_text += [f'=={" "*width_total}=='] + model_summary_text += [f'=={"Model name": >{width_name}}: {self.get_model_name(): <{width_value}}=='] # Name + model_summary_text += [f'=={" "*width_total}=='] + model_summary_text += [f'=={"Current iteration": >{width_name}}: {str(self.iter): <{width_value}}=='] # Iter + model_summary_text += [f'=={" "*width_total}=='] + + model_summary_text += [f'=={" Model Options ":-^{width_total}}=='] # Model options + model_summary_text += [f'=={" "*width_total}=='] for key in self.options.keys(): - model_summary_text += ["== |== %s : %s" % (key, self.options[key])] - + model_summary_text += [f'=={key: >{width_name}}: {str(self.options[key]): <{width_value}}=='] # self.options key/value pairs + model_summary_text += [f'=={" "*width_total}=='] + + model_summary_text += [f'=={" Running On ":-^{width_total}}=='] # Training hardware info + model_summary_text += [f'=={" "*width_total}=='] if self.device_config.multi_gpu: - model_summary_text += ["== |== multi_gpu : True "] - - model_summary_text += ["== Running on:"] + model_summary_text += [f'=={"Using multi_gpu": >{width_name}}: {"True": <{width_value}}=='] # multi_gpu + model_summary_text += [f'=={" "*width_total}=='] if self.device_config.cpu_only: - model_summary_text += ["== |== [CPU]"] + model_summary_text += [f'=={"Using device": >{width_name}}: {"CPU": <{width_value}}=='] # cpu_only else: for idx in self.device_config.gpu_idxs: - model_summary_text += ["== |== [%d : %s]" % (idx, nnlib.device.getDeviceName(idx))] - - if not self.device_config.cpu_only and self.device_config.gpu_vram_gb[0] == 2: - model_summary_text += ["=="] - model_summary_text += ["== WARNING: You are using 2GB GPU. 
Result quality may be significantly decreased."] - model_summary_text += ["== If training does not start, close all programs and try again."] - model_summary_text += ["== Also you can disable Windows Aero Desktop to get extra free VRAM."] - model_summary_text += ["=="] - - model_summary_text += ["========================="] - model_summary_text = "\r\n".join (model_summary_text) + model_summary_text += [f'=={"Device index": >{width_name}}: {idx: <{width_value}}=='] # GPU hardware device index + model_summary_text += [f'=={"Name": >{width_name}}: {nnlib.device.getDeviceName(idx): <{width_value}}=='] # GPU name + vram_str = f'{nnlib.device.getDeviceVRAMTotalGb(idx):.2f}GB' # GPU VRAM - Formated as #.## (or ##.##) + model_summary_text += [f'=={"VRAM": >{width_name}}: {vram_str: <{width_value}}=='] + model_summary_text += [f'=={" "*width_total}=='] + model_summary_text += [f'=={"="*width_total}=='] + + if not self.device_config.cpu_only and self.device_config.gpu_vram_gb[0] <= 2: # Low VRAM warning + model_summary_text += ["/!\\"] + model_summary_text += ["/!\\ WARNING:"] + model_summary_text += ["/!\\ You are using a GPU with 2GB or less VRAM. This may significantly reduce the quality of your result!"] + model_summary_text += ["/!\\ If training does not start, close all programs and try again."] + model_summary_text += ["/!\\ Also you can disable Windows Aero Desktop to increase available VRAM."] + model_summary_text += ["/!\\"] + + model_summary_text = "\n".join (model_summary_text) self.model_summary_text = model_summary_text io.log_info(model_summary_text)
Aligns the model summary output using f-string formatting. The logic structure of the base class has not been changed, only the lines put into `model_summary_text`. Output width is calculated from keys & values and will scale to show a clean summary for any model/platform. GPU VRAM has been added as an output. Incorrect detection of VRAM is possible in broken environments and GPUs of different sizes can report the same name. Showing it here adds clarity for the user and for issue tickets. Concatenation changed from "\r\n" to "\n", CRLF end of lines for Windows are handled transparently so using it here caused extra blank lines in the summary txt file. **Examples:** Using CUDA + SAE-LIAE ``` ============= Model Summary ============== == == == Model name: SAE == == == == Current iteration: 16 == == == ==----------- Model Options ------------== == == == batch_size: 4 == == sort_by_yaw: False == == random_flip: True == == resolution: 128 == == face_type: f == == learn_mask: True == == optimizer_mode: 1 == == archi: liae == == ae_dims: 256 == == e_ch_dims: 42 == == d_ch_dims: 21 == == multiscale_decoder: False == == ca_weights: False == == pixel_loss: False == == face_style_power: 0.0 == == bg_style_power: 0.0 == == apply_random_ct: False == == clipgrad: False == == == ==------------- Running On -------------== == == == Device index: 0 == == Name: GeForce GTX 1080 == == VRAM: 8.00GB == == == ========================================== ``` Colab ``` ========== Model Summary ========== == == == Model name: SAE == == == == Current iteration: 39822 == == == ==-------- Model Options --------== == == == batch_size: 24 == == sort_by_yaw: True == == random_flip: False == == resolution: 128 == == face_type: f == == learn_mask: True == == optimizer_mode: 2 == == archi: liae == == ae_dims: 222 == == e_ch_dims: 34 == == d_ch_dims: 16 == == multiscale_decoder: True == == ca_weights: True == == pixel_loss: False == == face_style_power: 2.0 == == bg_style_power: 1.5 == == apply_random_ct: False == == clipgrad: True == == == ==--------- Running On ----------== == == == Device index: 0 == == Name: Tesla K80 == == VRAM: 11.00GB == == == =================================== ``` Using OpenCL + H128 ``` =========================== Model Summary =========================== == == == Model name: H128 == == == == Current iteration: 0 == == == ==------------------------- Model Options -------------------------== == == == batch_size: 4 == == sort_by_yaw: False == == random_flip: True == == lighter_ae: False == == pixel_loss: False == == == ==-------------------------- Running On ---------------------------== == == == Device index: 0 == == Name: Advanced Micro Devices, Inc. gfx900 (OpenCL) == == VRAM: 7.98GB == == == ===================================================================== ``` Using CPU (output trimmed) ``` ==------- Running On --------== == == == Using device: CPU == == == =============================== ``` multi_gpu support is retained (output trimmed) ``` ==------------- Running On -------------== == == == Using multi_gpu: True == == == == Device index: 1 == == Name: Geforce GTX 1080 == == VRAM: 8.00GB == == Device index: 2 == == Name: Geforce GTX 1080 == == VRAM: 8.00GB == == == ========================================== ``` Low VRAM warning (output trimmed) ``` ==------------- Running On -------------== == == == Device index: 0 == == Name: Geforce GTX 1050 == == VRAM: 2.00GB == == == ========================================== /!\ /!\ WARNING: /!\ You are using a GPU with 2GB or less VRAM. 
This may significantly reduce the quality of your result! /!\ If training does not start, close all programs and try again. /!\ Also you can disable Windows Aero Desktop to increase available VRAM. /!\ ```
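For reference, a self-contained sketch of the width-calculation and f-string alignment technique described above (the option values are made up and this is not the actual `ModelBase` code):

```python
# Column widths are derived from the longest key and value, then each row is
# right-aligned on the name and left-aligned on the value.
options = {'batch_size': 4, 'resolution': 128, 'face_type': 'f', 'learn_mask': True}

width_name = max(len(k) for k in options) + 1                 # name column
width_value = max(len(str(v)) for v in options.values()) + 1  # value column
width_total = width_name + width_value + 2                    # plus 2 for ": "

lines = [f'=={" Model Options ":-^{width_total}}==']
lines += [f'=={" " * width_total}==']
for key, value in options.items():
    lines += [f'=={key: >{width_name}}: {str(value): <{width_value}}==']
lines += [f'=={"=" * width_total}==']
print('\n'.join(lines))
```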
https://api.github.com/repos/iperov/DeepFaceLab/pulls/348
2019-08-16T07:28:55Z
2019-08-16T14:35:28Z
2019-08-16T14:35:28Z
2019-08-23T23:38:26Z
1,402
iperov/DeepFaceLab
33,378
Add notification platform for Rocket.Chat.
diff --git a/.coveragerc b/.coveragerc index 60375fbb97ec..34959f53299f 100644 --- a/.coveragerc +++ b/.coveragerc @@ -420,6 +420,7 @@ omit = homeassistant/components/notify/pushover.py homeassistant/components/notify/pushsafer.py homeassistant/components/notify/rest.py + homeassistant/components/notify/rocketchat.py homeassistant/components/notify/sendgrid.py homeassistant/components/notify/simplepush.py homeassistant/components/notify/slack.py diff --git a/homeassistant/components/notify/rocketchat.py b/homeassistant/components/notify/rocketchat.py new file mode 100644 index 000000000000..f2898c8b9989 --- /dev/null +++ b/homeassistant/components/notify/rocketchat.py @@ -0,0 +1,76 @@ +""" +Rocket.Chat notification service. + +For more details about this platform, please refer to the documentation at +https://home-assistant.io/components/notify.rocketchat/ +""" +import logging + +import voluptuous as vol + +from homeassistant.const import ( + CONF_URL, CONF_USERNAME, CONF_PASSWORD) +from homeassistant.components.notify import ( + ATTR_DATA, PLATFORM_SCHEMA, + BaseNotificationService) +import homeassistant.helpers.config_validation as cv + +REQUIREMENTS = ['rocketchat-API==0.6.1'] + +CONF_ROOM = 'room' + +_LOGGER = logging.getLogger(__name__) + +# pylint: disable=no-value-for-parameter +PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({ + vol.Required(CONF_URL): vol.Url(), + vol.Required(CONF_USERNAME): cv.string, + vol.Required(CONF_PASSWORD): cv.string, + vol.Required(CONF_ROOM): cv.string, +}) + + +def get_service(hass, config, discovery_info=None): + """Return the notify service.""" + from rocketchat_API.APIExceptions.RocketExceptions import ( + RocketConnectionException, RocketAuthenticationException) + username = config.get(CONF_USERNAME) + password = config.get(CONF_PASSWORD) + + url = config.get(CONF_URL) + room = config.get(CONF_ROOM) + + try: + return RocketChatNotificationService(url, username, password, room) + except RocketConnectionException: + _LOGGER.warning( + "Unable to connect to Rocket.Chat server at %s.", url) + except RocketAuthenticationException: + _LOGGER.warning( + "Rocket.Chat authentication failed for user %s.", username) + _LOGGER.info("Please check your username/password.") + + return None + + +class RocketChatNotificationService(BaseNotificationService): + """Implement the notification service for Rocket.Chat.""" + + def __init__(self, url, username, password, room): + """Initialize the service.""" + from rocketchat_API.rocketchat import RocketChat + self._room = room + self._server = RocketChat(username, password, server_url=url) + + def send_message(self, message="", **kwargs): + """Send a message to Rocket.Chat.""" + data = kwargs.get(ATTR_DATA) or {} + resp = self._server.chat_post_message(message, channel=self._room, + **data) + if resp.status_code == 200: + success = resp.json()["success"] + if not success: + _LOGGER.error("Unable to post Rocket.Chat message") + else: + _LOGGER.error("Incorrect status code when posting message: %d", + resp.status_code) diff --git a/requirements_all.txt b/requirements_all.txt index 7ce8cc5ae9d9..c8ed0adcdbc1 100644 --- a/requirements_all.txt +++ b/requirements_all.txt @@ -865,6 +865,9 @@ rflink==0.0.34 # homeassistant.components.ring ring_doorbell==0.1.4 +# homeassistant.components.notify.rocketchat +rocketchat-API==0.6.1 + # homeassistant.components.vacuum.roomba roombapy==1.3.1
## Description: **Related issue (if applicable):** N/A **Pull request in [home-assistant.github.io](https://github.com/home-assistant/home-assistant.github.io) with documentation (if applicable):** home-assistant/home-assistant.github.io#3424 ## Example entry for `configuration.yaml` (if applicable): ```yaml notify: - platform: rocketchat name: NOTIFIER_NAME url: https://rocketchat.example.com username: USERNAME password: PASSWORD room: my-awesome-room ``` ## Checklist: If user exposed functionality or configuration variables are added/changed: - [x] Documentation added/updated in [home-assistant.github.io](https://github.com/home-assistant/home-assistant.github.io) If the code communicates with devices, web services, or third-party tools: - [x] Local tests with `tox` run successfully. **Your PR cannot be merged unless tests pass** - [x] New dependencies have been added to the `REQUIREMENTS` variable ([example][ex-requir]). - [x] New dependencies are only imported inside functions that use them ([example][ex-import]). - [x] New dependencies have been added to `requirements_all.txt` by running `script/gen_requirements_all.py`. - [x] New files were added to `.coveragerc`. If the code does not interact with devices: - [ ] Local tests with `tox` run successfully. **Your PR cannot be merged unless tests pass** - [ ] Tests have been added to verify that the new code works. [ex-requir]: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/keyboard.py#L14 [ex-import]: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/keyboard.py#L54
https://api.github.com/repos/home-assistant/core/pulls/9553
2017-09-23T20:42:04Z
2017-10-09T07:38:49Z
2017-10-09T07:38:49Z
2019-03-21T04:50:51Z
934
home-assistant/core
38,879
dns-rfc2136: use TCP to query SOA records
diff --git a/CHANGELOG.md b/CHANGELOG.md index 4a71d24adea..56235e756f7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -32,6 +32,7 @@ Certbot adheres to [Semantic Versioning](https://semver.org/). * acme.standalone.BaseRequestHandlerWithLogging and acme.standalone.simple_tls_sni_01_server have been deprecated and will be removed in a future release of the library. +* certbot-dns-rfc2136 now use TCP to query SOA records. ### Fixed diff --git a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py index 2061374e0e8..ee71c9681d7 100644 --- a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py +++ b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py @@ -206,7 +206,11 @@ def _query_soa(self, domain_name): request.flags ^= dns.flags.RD try: - response = dns.query.udp(request, self.server, port=self.port) + try: + response = dns.query.tcp(request, self.server, port=self.port) + except OSError as e: + logger.debug('TCP query failed, fallback to UDP: %s', e) + response = dns.query.udp(request, self.server, port=self.port) rcode = response.rcode() # Authoritative Answer bit should be set diff --git a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136_test.py b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136_test.py index d800f1ec7c2..1950ee62e71 100644 --- a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136_test.py +++ b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136_test.py @@ -162,7 +162,7 @@ def test_find_domain_wraps_errors(self): self.rfc2136_client._find_domain, 'foo.bar.'+DOMAIN) - @mock.patch("dns.query.udp") + @mock.patch("dns.query.tcp") def test_query_soa_found(self, query_mock): query_mock.return_value = mock.MagicMock(answer=[mock.MagicMock()], flags=dns.flags.AA) query_mock.return_value.rcode.return_value = dns.rcode.NOERROR @@ -173,7 +173,7 @@ def test_query_soa_found(self, query_mock): query_mock.assert_called_with(mock.ANY, SERVER, port=PORT) self.assertTrue(result) - @mock.patch("dns.query.udp") + @mock.patch("dns.query.tcp") def test_query_soa_not_found(self, query_mock): query_mock.return_value.rcode.return_value = dns.rcode.NXDOMAIN @@ -183,7 +183,7 @@ def test_query_soa_not_found(self, query_mock): query_mock.assert_called_with(mock.ANY, SERVER, port=PORT) self.assertFalse(result) - @mock.patch("dns.query.udp") + @mock.patch("dns.query.tcp") def test_query_soa_wraps_errors(self, query_mock): query_mock.side_effect = Exception @@ -193,6 +193,20 @@ def test_query_soa_wraps_errors(self, query_mock): self.rfc2136_client._query_soa, DOMAIN) + @mock.patch("dns.query.udp") + @mock.patch("dns.query.tcp") + def test_query_soa_fallback_to_udp(self, tcp_mock, udp_mock): + tcp_mock.side_effect = OSError + udp_mock.return_value = mock.MagicMock(answer=[mock.MagicMock()], flags=dns.flags.AA) + udp_mock.return_value.rcode.return_value = dns.rcode.NOERROR + + # _query_soa | pylint: disable=protected-access + result = self.rfc2136_client._query_soa(DOMAIN) + + tcp_mock.assert_called_with(mock.ANY, SERVER, port=PORT) + udp_mock.assert_called_with(mock.ANY, SERVER, port=PORT) + self.assertTrue(result) + if __name__ == "__main__": unittest.main() # pragma: no cover
certbot-dns-rfc2136: Use TCP queries to improve network robustness. Fixes #7502.
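For illustration, here is a minimal standalone version of the TCP-first, UDP-fallback SOA query pattern the diff applies, assuming dnspython is installed; the resolver address and domain below are placeholders, not values taken from the plugin.

```python
import dns.message
import dns.query
import dns.rcode
import dns.rdatatype

# Build a SOA query and try TCP first, falling back to UDP on connection errors.
request = dns.message.make_query('example.com', dns.rdatatype.SOA)
try:
    response = dns.query.tcp(request, '8.8.8.8', port=53, timeout=5)
except OSError as exc:
    print('TCP query failed, falling back to UDP: %s' % exc)
    response = dns.query.udp(request, '8.8.8.8', port=53, timeout=5)
print(dns.rcode.to_text(response.rcode()))
```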
https://api.github.com/repos/certbot/certbot/pulls/7503
2019-11-05T16:10:00Z
2019-11-07T17:37:13Z
2019-11-07T17:37:12Z
2019-11-08T09:54:48Z
1,046
certbot/certbot
1,650
Upgrade to chardet 4.x
diff --git a/requests/__init__.py b/requests/__init__.py index c00f556bbc..f8f94295f9 100644 --- a/requests/__init__.py +++ b/requests/__init__.py @@ -65,10 +65,8 @@ def check_compatibility(urllib3_version, chardet_version): # Check chardet for compatibility. major, minor, patch = chardet_version.split('.')[:3] major, minor, patch = int(major), int(minor), int(patch) - # chardet >= 3.0.2, < 3.1.0 - assert major == 3 - assert minor < 1 - assert patch >= 2 + # chardet >= 3.0.2, < 5.0.0 + assert (3, 0, 2) <= (major, minor, patch) < (5, 0, 0) def _check_cryptography(cryptography_version): diff --git a/setup.py b/setup.py index e714bfa441..7ba4b2a25f 100755 --- a/setup.py +++ b/setup.py @@ -42,7 +42,7 @@ def run_tests(self): packages = ['requests'] requires = [ - 'chardet>=3.0.2,<4', + 'chardet>=3.0.2,<5', 'idna>=2.5,<3', 'urllib3>=1.21.1,<1.27', 'certifi>=2017.4.17'
I just released [chardet 4.0.0](https://github.com/chardet/chardet/releases/tag/4.0.0) today, and it's faster and fully backward compatible with chardet 3.x (as long as you aren't mucking around in the models it uses under the hood directly). The next major release will require Python 3.6+, but seeing as it took me three years to put out this one, that's unlikely to be soon.
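As a side note, the relaxed version gate in the diff boils down to an ordinary chained tuple comparison, which can be checked in isolation (the version string below is just an example):

```python
# chardet >= 3.0.2, < 5.0.0 expressed as a chained tuple comparison.
chardet_version = '4.0.0'
major, minor, patch = (int(part) for part in chardet_version.split('.')[:3])
assert (3, 0, 2) <= (major, minor, patch) < (5, 0, 0)
print('chardet %s is within the supported range' % chardet_version)
```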
https://api.github.com/repos/psf/requests/pulls/5688
2020-12-11T01:26:30Z
2020-12-14T17:29:11Z
2020-12-14T17:29:11Z
2021-08-27T00:08:56Z
362
psf/requests
32,161