title (string, 2-169 chars) | diff (string, 235-19.5k chars) | body (string, 0-30.5k chars) | url (string, 48-84 chars) | created_at (string, 20 chars) | closed_at (string, 20 chars) | merged_at (string, 20 chars) | updated_at (string, 20 chars) | diff_len (float64, 101-3.99k) | repo_name (string, 83 distinct values) | __index_level_0__ (int64, 15-52.7k) |
---|---|---|---|---|---|---|---|---|---|---|
DOC add example of DataFrame.index | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index c046d55d80b49..55618590071b5 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -532,7 +532,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas.api.extensions.ExtensionArray.ndim \
pandas.api.extensions.ExtensionArray.shape \
pandas.api.extensions.ExtensionArray.tolist \
- pandas.DataFrame.index \
pandas.DataFrame.columns \
pandas.DataFrame.__iter__ \
pandas.DataFrame.keys \
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index bd298b8d723b8..abe62b475a759 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -11768,7 +11768,50 @@ def isin(self, values: Series | DataFrame | Sequence | Mapping) -> DataFrame:
_info_axis_name: Literal["columns"] = "columns"
index = properties.AxisProperty(
- axis=1, doc="The index (row labels) of the DataFrame."
+ axis=1,
+ doc="""
+ The index (row labels) of the DataFrame.
+
+ The index of a DataFrame is a series of labels that identify each row.
+ The labels can be integers, strings, or any other hashable type. The index
+ is used for label-based access and alignment, and can be accessed or
+ modified using this attribute.
+
+ Returns
+ -------
+ pandas.Index
+ The index labels of the DataFrame.
+
+ See Also
+ --------
+ DataFrame.columns : The column labels of the DataFrame.
+ DataFrame.to_numpy : Convert the DataFrame to a NumPy array.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'],
+ ... 'Age': [25, 30, 35],
+ ... 'Location': ['Seattle', 'New York', 'Kona']},
+ ... index=([10, 20, 30]))
+ >>> df.index
+ Index([10, 20, 30], dtype='int64')
+
+ In this example, we create a DataFrame with 3 rows and 3 columns,
+ including Name, Age, and Location information. We set the index labels to
+ be the integers 10, 20, and 30. We then access the `index` attribute of the
+ DataFrame, which returns an `Index` object containing the index labels.
+
+ >>> df.index = [100, 200, 300]
+ >>> df
+ Name Age Location
+ 100 Alice 25 Seattle
+ 200 Bob 30 New York
+ 300 Aritra 35 Kona
+
+ In this example, we modify the index labels of the DataFrame by assigning
+ a new list of labels to the `index` attribute. The DataFrame is then
+ updated with the new labels, and the output shows the modified DataFrame.
+ """,
)
columns = properties.AxisProperty(axis=0, doc="The column labels of the DataFrame.")
| DOC add example of DataFrame.index
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/52835 | 2023-04-21T18:41:54Z | 2023-04-23T19:13:44Z | 2023-04-23T19:13:44Z | 2023-04-24T13:41:58Z | 721 | pandas-dev/pandas | 45,545 |
Control filter list | diff --git a/tests-ui/tests/widgetInputs.test.js b/tests-ui/tests/widgetInputs.test.js
index 022e549266..e1873105ac 100644
--- a/tests-ui/tests/widgetInputs.test.js
+++ b/tests-ui/tests/widgetInputs.test.js
@@ -14,10 +14,10 @@ const lg = require("../utils/litegraph");
* @param { InstanceType<Ez["EzGraph"]> } graph
* @param { InstanceType<Ez["EzInput"]> } input
* @param { string } widgetType
- * @param { boolean } hasControlWidget
+ * @param { number } controlWidgetCount
* @returns
*/
-async function connectPrimitiveAndReload(ez, graph, input, widgetType, hasControlWidget) {
+async function connectPrimitiveAndReload(ez, graph, input, widgetType, controlWidgetCount = 0) {
// Connect to primitive and ensure its still connected after
let primitive = ez.PrimitiveNode();
primitive.outputs[0].connectTo(input);
@@ -33,13 +33,17 @@ async function connectPrimitiveAndReload(ez, graph, input, widgetType, hasContro
expect(valueWidget.widget.type).toBe(widgetType);
// Check if control_after_generate should be added
- if (hasControlWidget) {
+ if (controlWidgetCount) {
const controlWidget = primitive.widgets.control_after_generate;
expect(controlWidget.widget.type).toBe("combo");
+ if(widgetType === "combo") {
+ const filterWidget = primitive.widgets.control_filter_list;
+ expect(filterWidget.widget.type).toBe("string");
+ }
}
// Ensure we dont have other widgets
- expect(primitive.node.widgets).toHaveLength(1 + +!!hasControlWidget);
+ expect(primitive.node.widgets).toHaveLength(1 + controlWidgetCount);
});
return primitive;
@@ -55,8 +59,8 @@ describe("widget inputs", () => {
});
[
- { name: "int", type: "INT", widget: "number", control: true },
- { name: "float", type: "FLOAT", widget: "number", control: true },
+ { name: "int", type: "INT", widget: "number", control: 1 },
+ { name: "float", type: "FLOAT", widget: "number", control: 1 },
{ name: "text", type: "STRING" },
{
name: "customtext",
@@ -64,7 +68,7 @@ describe("widget inputs", () => {
opt: { multiline: true },
},
{ name: "toggle", type: "BOOLEAN" },
- { name: "combo", type: ["a", "b", "c"], control: true },
+ { name: "combo", type: ["a", "b", "c"], control: 2 },
].forEach((c) => {
test(`widget conversion + primitive works on ${c.name}`, async () => {
const { ez, graph } = await start({
@@ -106,7 +110,7 @@ describe("widget inputs", () => {
n.widgets.ckpt_name.convertToInput();
expect(n.inputs.length).toEqual(inputCount + 1);
- const primitive = await connectPrimitiveAndReload(ez, graph, n.inputs.ckpt_name, "combo", true);
+ const primitive = await connectPrimitiveAndReload(ez, graph, n.inputs.ckpt_name, "combo", 2);
// Disconnect & reconnect
primitive.outputs[0].connections[0].disconnect();
@@ -226,7 +230,7 @@ describe("widget inputs", () => {
// Reload and ensure it still only has 1 converted widget
if (!assertNotNullOrUndefined(input)) return;
- await connectPrimitiveAndReload(ez, graph, input, "number", true);
+ await connectPrimitiveAndReload(ez, graph, input, "number", 1);
n = graph.find(n);
expect(n.widgets).toHaveLength(1);
w = n.widgets.example;
@@ -258,7 +262,7 @@ describe("widget inputs", () => {
// Reload and ensure it still only has 1 converted widget
if (assertNotNullOrUndefined(input)) {
- await connectPrimitiveAndReload(ez, graph, input, "number", true);
+ await connectPrimitiveAndReload(ez, graph, input, "number", 1);
n = graph.find(n);
expect(n.widgets).toHaveLength(1);
expect(n.widgets.example.isConvertedToInput).toBeTruthy();
@@ -316,4 +320,76 @@ describe("widget inputs", () => {
n1.outputs[0].connectTo(n2.inputs[0]);
expect(() => n1.outputs[0].connectTo(n3.inputs[0])).toThrow();
});
+
+ test("combo primitive can filter list when control_after_generate called", async () => {
+ const { ez } = await start({
+ mockNodeDefs: {
+ ...makeNodeDef("TestNode1", { example: [["A", "B", "C", "D", "AA", "BB", "CC", "DD", "AAA", "BBB"], {}] }),
+ },
+ });
+
+ const n1 = ez.TestNode1();
+ n1.widgets.example.convertToInput();
+ const p = ez.PrimitiveNode()
+ p.outputs[0].connectTo(n1.inputs[0]);
+
+ const value = p.widgets.value;
+ const control = p.widgets.control_after_generate.widget;
+ const filter = p.widgets.control_filter_list;
+
+ expect(p.widgets.length).toBe(3);
+ control.value = "increment";
+ expect(value.value).toBe("A");
+
+ // Manually trigger after queue when set to increment
+ control["afterQueued"]();
+ expect(value.value).toBe("B");
+
+ // Filter to items containing D
+ filter.value = "D";
+ control["afterQueued"]();
+ expect(value.value).toBe("D");
+ control["afterQueued"]();
+ expect(value.value).toBe("DD");
+
+ // Check decrement
+ value.value = "BBB";
+ control.value = "decrement";
+ filter.value = "B";
+ control["afterQueued"]();
+ expect(value.value).toBe("BB");
+ control["afterQueued"]();
+ expect(value.value).toBe("B");
+
+ // Check regex works
+ value.value = "BBB";
+ filter.value = "/[AB]|^C$/";
+ control["afterQueued"]();
+ expect(value.value).toBe("AAA");
+ control["afterQueued"]();
+ expect(value.value).toBe("BB");
+ control["afterQueued"]();
+ expect(value.value).toBe("AA");
+ control["afterQueued"]();
+ expect(value.value).toBe("C");
+ control["afterQueued"]();
+ expect(value.value).toBe("B");
+ control["afterQueued"]();
+ expect(value.value).toBe("A");
+
+ // Check random
+ control.value = "randomize";
+ filter.value = "/D/";
+ for(let i = 0; i < 100; i++) {
+ control["afterQueued"]();
+ expect(value.value === "D" || value.value === "DD").toBeTruthy();
+ }
+
+ // Ensure it doesnt apply when fixed
+ control.value = "fixed";
+ value.value = "B";
+ filter.value = "C";
+ control["afterQueued"]();
+ expect(value.value).toBe("B");
+ });
});
diff --git a/web/extensions/core/widgetInputs.js b/web/extensions/core/widgetInputs.js
index bad3ac3a74..5c8fbc9b2d 100644
--- a/web/extensions/core/widgetInputs.js
+++ b/web/extensions/core/widgetInputs.js
@@ -1,4 +1,4 @@
-import { ComfyWidgets, addValueControlWidget } from "../../scripts/widgets.js";
+import { ComfyWidgets, addValueControlWidgets } from "../../scripts/widgets.js";
import { app } from "../../scripts/app.js";
const CONVERTED_TYPE = "converted-widget";
@@ -467,7 +467,11 @@ app.registerExtension({
if (!control_value) {
control_value = "fixed";
}
- addValueControlWidget(this, widget, control_value);
+ addValueControlWidgets(this, widget, control_value);
+ let filter = this.widgets_values?.[2];
+ if(filter && this.widgets.length === 3) {
+ this.widgets[2].value = filter;
+ }
}
// When our value changes, update other widgets to reflect our changes
diff --git a/web/scripts/widgets.js b/web/scripts/widgets.js
index ccddc0bc44..fbc1d0fc32 100644
--- a/web/scripts/widgets.js
+++ b/web/scripts/widgets.js
@@ -24,17 +24,58 @@ function getNumberDefaults(inputData, defaultStep, precision, enable_rounding) {
}
export function addValueControlWidget(node, targetWidget, defaultValue = "randomize", values) {
- const valueControl = node.addWidget("combo", "control_after_generate", defaultValue, function (v) { }, {
+ const widgets = addValueControlWidgets(node, targetWidget, defaultValue, values, {
+ addFilterList: false,
+ });
+ return widgets[0];
+}
+
+export function addValueControlWidgets(node, targetWidget, defaultValue = "randomize", values, options) {
+ if (!options) options = {};
+
+ const widgets = [];
+ const valueControl = node.addWidget("combo", "control_after_generate", defaultValue, function (v) { }, {
values: ["fixed", "increment", "decrement", "randomize"],
serialize: false, // Don't include this in prompt.
});
- valueControl.afterQueued = () => {
+ widgets.push(valueControl);
+
+ const isCombo = targetWidget.type === "combo";
+ let comboFilter;
+ if (isCombo && options.addFilterList !== false) {
+ comboFilter = node.addWidget("string", "control_filter_list", "", function (v) {}, {
+ serialize: false, // Don't include this in prompt.
+ });
+ widgets.push(comboFilter);
+ }
+ valueControl.afterQueued = () => {
var v = valueControl.value;
- if (targetWidget.type == "combo" && v !== "fixed") {
- let current_index = targetWidget.options.values.indexOf(targetWidget.value);
- let current_length = targetWidget.options.values.length;
+ if (isCombo && v !== "fixed") {
+ let values = targetWidget.options.values;
+ const filter = comboFilter?.value;
+ if (filter) {
+ let check;
+ if (filter.startsWith("/") && filter.endsWith("/")) {
+ try {
+ const regex = new RegExp(filter.substring(1, filter.length - 1));
+ check = (item) => regex.test(item);
+ } catch (error) {
+ console.error("Error constructing RegExp filter for node " + node.id, filter, error);
+ }
+ }
+ if (!check) {
+ const lower = filter.toLocaleLowerCase();
+ check = (item) => item.toLocaleLowerCase().includes(lower);
+ }
+ values = values.filter(item => check(item));
+ if (!values.length && targetWidget.options.values.length) {
+ console.warn("Filter for node " + node.id + " has filtered out all items", filter);
+ }
+ }
+ let current_index = values.indexOf(targetWidget.value);
+ let current_length = values.length;
switch (v) {
case "increment":
@@ -51,7 +92,7 @@ export function addValueControlWidget(node, targetWidget, defaultValue = "random
current_index = Math.max(0, current_index);
current_index = Math.min(current_length - 1, current_index);
if (current_index >= 0) {
- let value = targetWidget.options.values[current_index];
+ let value = values[current_index];
targetWidget.value = value;
targetWidget.callback(value);
}
@@ -88,7 +129,8 @@ export function addValueControlWidget(node, targetWidget, defaultValue = "random
targetWidget.callback(targetWidget.value);
}
}
- return valueControl;
+
+ return widgets;
};
function seedWidget(node, inputName, inputData, app) {
| Allows filtering of the items in a COMBO primitive (e.g. LoadImage images list) when using control_after_generate.
You can also use regex to filter the items by wrapping the expression in `/`s e.g. `/(sdxl|sd15)/`
Maintains existing behavior of `addValueControlWidget` for extensions currently using it, adding `addValueControlWidgets` which returns an array of added widgets. | https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/2009 | 2023-11-20T21:48:41Z | 2023-11-22T17:52:20Z | 2023-11-22T17:52:20Z | 2023-11-23T08:37:52Z | 2,804 | comfyanonymous/ComfyUI | 17,901 |
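
The filter semantics the tests above exercise can be restated as a short Python sketch (a simplified, hypothetical port of the JavaScript in `web/scripts/widgets.js`; `make_filter` is an illustrative name): a value wrapped in `/`s is compiled as a regex, anything else falls back to a case-insensitive substring match, and a bad regex falls back to the substring check as the JS does.

```python
import re

def make_filter(spec):
    # Wrapped in slashes -> regex; otherwise case-insensitive substring match.
    if len(spec) > 1 and spec.startswith("/") and spec.endswith("/"):
        try:
            pattern = re.compile(spec[1:-1])
            return lambda item: bool(pattern.search(item))
        except re.error:
            pass  # mirror the JS: fall back to the substring check
    lower = spec.lower()
    return lambda item: lower in item.lower()

values = ["A", "B", "C", "D", "AA", "BB", "CC", "DD", "AAA", "BBB"]
print([v for v in values if make_filter("D")(v)])          # ['D', 'DD']
print([v for v in values if make_filter("/(^C$)|B/")(v)])  # ['B', 'C', 'BB', 'BBB']
```
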
C.40: Fixed a couple of typos. | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 0a4a6c410..bf1bf47b7 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -3905,7 +3905,7 @@ That's what constructors are for.
int d, m, y;
};
-It is often a good idea to express the invariant as an `Ensure` on the constructor.
+It is often a good idea to express the invariant as an `Ensures` on the constructor.
##### Note
@@ -3941,7 +3941,7 @@ Also, the default for `int` would be better done as a [member initializer](#Rc-i
##### Enforcement
-* Flag classes with user-define copy operations but no constructor (a user-defined copy is a good indicator that the class has an invariant)
+* Flag classes with user-defined copy operations but no constructor (a user-defined copy is a good indicator that the class has an invariant)
### <a name="Rc-complete"></a> C.41: A constructor should create a fully initialized object
| https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/471 | 2015-12-18T16:36:52Z | 2015-12-18T16:51:25Z | 2015-12-18T16:51:25Z | 2015-12-19T03:20:50Z | 251 | isocpp/CppCoreGuidelines | 15,485 |
|
[3.8] bpo-41132: Use pymalloc allocator in the f-string parser (GH-21173) | diff --git a/Python/ast.c b/Python/ast.c
index 0a999fcca43a8e..5efb690c299cac 100644
--- a/Python/ast.c
+++ b/Python/ast.c
@@ -4898,7 +4898,7 @@ fstring_compile_expr(const char *expr_start, const char *expr_end,
len = expr_end - expr_start;
/* Allocate 3 extra bytes: open paren, close paren, null byte. */
- str = PyMem_RawMalloc(len + 3);
+ str = PyMem_Malloc(len + 3);
if (str == NULL) {
PyErr_NoMemory();
return NULL;
@@ -4914,7 +4914,7 @@ fstring_compile_expr(const char *expr_start, const char *expr_end,
mod_n = PyParser_SimpleParseStringFlagsFilename(str, "<fstring>",
Py_eval_input, 0);
if (!mod_n) {
- PyMem_RawFree(str);
+ PyMem_Free(str);
return NULL;
}
/* Reuse str to find the correct column offset. */
@@ -4922,7 +4922,7 @@ fstring_compile_expr(const char *expr_start, const char *expr_end,
str[len+1] = '}';
fstring_fix_node_location(n, mod_n, str);
mod = PyAST_FromNode(mod_n, &cf, "<fstring>", c->c_arena);
- PyMem_RawFree(str);
+ PyMem_Free(str);
PyNode_Free(mod_n);
if (!mod)
return NULL;
@@ -5438,7 +5438,7 @@ ExprList_Append(ExprList *l, expr_ty exp)
Py_ssize_t i;
/* We're still using the cached data. Switch to
alloc-ing. */
- l->p = PyMem_RawMalloc(sizeof(expr_ty) * new_size);
+ l->p = PyMem_Malloc(sizeof(expr_ty) * new_size);
if (!l->p)
return -1;
/* Copy the cached data into the new buffer. */
@@ -5446,9 +5446,9 @@ ExprList_Append(ExprList *l, expr_ty exp)
l->p[i] = l->data[i];
} else {
/* Just realloc. */
- expr_ty *tmp = PyMem_RawRealloc(l->p, sizeof(expr_ty) * new_size);
+ expr_ty *tmp = PyMem_Realloc(l->p, sizeof(expr_ty) * new_size);
if (!tmp) {
- PyMem_RawFree(l->p);
+ PyMem_Free(l->p);
l->p = NULL;
return -1;
}
@@ -5476,7 +5476,7 @@ ExprList_Dealloc(ExprList *l)
/* Do nothing. */
} else {
/* We have dynamically allocated. Free the memory. */
- PyMem_RawFree(l->p);
+ PyMem_Free(l->p);
}
l->p = NULL;
l->size = -1;
|
<!--
Thanks for your contribution!
Please read this comment in its entirety. It's quite important.
# Pull Request title
It should be in the following format:
```
bpo-NNNN: Summary of the changes made
```
Where: bpo-NNNN refers to the issue number in the https://bugs.python.org.
Most PRs will require an issue number. Trivial changes, like fixing a typo, do not need an issue.
# Backport Pull Request title
If this is a backport PR (PR made against branches other than `master`),
please ensure that the PR title is in the following format:
```
[X.Y] <title from the original PR> (GH-NNNN)
```
Where: [X.Y] is the branch name, e.g. [3.6].
GH-NNNN refers to the PR number from `master`.
-->
<!-- issue-number: [bpo-41132](https://bugs.python.org/issue41132) -->
https://bugs.python.org/issue41132
<!-- /issue-number -->
Automerge-Triggered-By: @pablogsal | https://api.github.com/repos/python/cpython/pulls/21184 | 2020-06-27T18:26:32Z | 2020-06-27T18:43:42Z | 2020-06-27T18:43:42Z | 2020-06-27T18:43:44Z | 685 | python/cpython | 4,738 |
Update README.md | diff --git a/README.md b/README.md
index c31017e..b1354cc 100644
--- a/README.md
+++ b/README.md
@@ -2336,7 +2336,7 @@ nan
#### 💡 Explanation:
-`'inf'` and `'nan'` are special strings (case-insensitive), which when explicitly typecasted to `float` type, are used to represent mathematical "infinity" and "not a number" respectively.
+`'inf'` and `'nan'` are special strings (case-insensitive), which when explicitly typecast-ed to `float` type, are used to represent mathematical "infinity" and "not a number" respectively.
---
@@ -2382,7 +2382,7 @@ nan
>>> 44
```
**💡 Explanation:**
- This prank comes from [Raymond Hettinger's tweet](https://twitter.com/raymondh/status/1131103570856632321?lang=en). The space invader operator is actually just a malformatted `a -= (-1)`. Which is eqivalent to `a = a - (- 1)`. Similar for the `a += (+ 1)` case.
+ This prank comes from [Raymond Hettinger's tweet](https://twitter.com/raymondh/status/1131103570856632321?lang=en). The space invader operator is actually just a malformatted `a -= (-1)`. Which is equivalent to `a = a - (- 1)`. Similar for the `a += (+ 1)` case.
* Python uses 2 bytes for local variable storage in functions. In theory, this means that only 65536 variables can be defined in a function. However, python has a handy solution built in that can be used to store more than 2^16 variable names. The following code demonstrates what happens in the stack when more than 65536 local variables are defined (Warning: This code prints around 2^18 lines of text, so be prepared!):
```py
@@ -2390,7 +2390,7 @@ nan
exec("""
def f():
""" + """
- """.join(["X"+str(x)+"=" + str(x) for x in range(65539)]))
+ """.join(["X" + str(x) + "=" + str(x) for x in range(65539)]))
f()
| Typo corrected. | https://api.github.com/repos/satwikkansal/wtfpython/pulls/146 | 2019-10-25T08:58:55Z | 2019-10-25T14:22:53Z | 2019-10-25T14:22:53Z | 2019-10-25T14:22:58Z | 524 | satwikkansal/wtfpython | 25,728 |
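
Both behaviors touched by this diff are easy to reproduce in a Python 3 REPL:

```python
print(float("inf"), float("NaN"))  # inf nan -- the strings are case-insensitive

a = 42
a -=- 1    # the "space invader": actually parsed as a -= (-1)
print(a)   # 43
```
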
improve(rules): add mercurial (hg) support | diff --git a/tests/rules/test_mercurial.py b/tests/rules/test_mercurial.py
new file mode 100644
index 000000000..08962f912
--- /dev/null
+++ b/tests/rules/test_mercurial.py
@@ -0,0 +1,134 @@
+import pytest
+
+from tests.utils import Command
+from thefuck.rules.mercurial import (
+ extract_possisiblities, match, get_new_command
+)
+
+
+@pytest.mark.parametrize('command', [
+ Command('hg base', stderr=(
+ "hg: unknown command 'base'"
+ '\n(did you mean one of blame, phase, rebase?)'
+ )),
+ Command('hg branchch', stderr=(
+ "hg: unknown command 'branchch'"
+ '\n(did you mean one of branch, branches?)'
+ )),
+ Command('hg vert', stderr=(
+ "hg: unknown command 'vert'"
+ '\n(did you mean one of revert?)'
+ )),
+ Command('hg lgo -r tip', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n(did you mean one of log?)'
+ )),
+ Command('hg rerere', stderr=(
+ "hg: unknown command 'rerere'"
+ '\n(did you mean one of revert?)'
+ )),
+ Command('hg re', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n rebase recover remove rename resolve revert'
+ )),
+ Command('hg re re', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n rebase recover remove rename resolve revert'
+ )),
+])
+def test_match(command):
+ assert match(command, None)
+
+
+@pytest.mark.parametrize('command', [
+ Command('hg', stderr=(
+ '\nMercurial Distributed SCM\n\nbasic commands:'
+ )),
+ Command('hg asdf', stderr=(
+ "hg: unknown command 'asdf'"
+ '\nMercurial Distributed SCM\n\nbasic commands:'
+ )),
+ Command('hg qwer', stderr=(
+ "hg: unknown command 'qwer'"
+ '\nMercurial Distributed SCM\n\nbasic commands:'
+ )),
+ Command('hg me', stderr=(
+ "\nabort: no repository found in './thefuck' (.hg not found)!"
+ )),
+ Command('hg reb', stderr=(
+ "\nabort: no repository found in './thefuck' (.hg not found)!"
+ )),
+ Command('hg co', stderr=(
+ "\nabort: no repository found in './thefuck' (.hg not found)!"
+ )),
+])
+def test_not_match(command):
+ assert not match(command, None)
+
+
+@pytest.mark.parametrize('command, possibilities', [
+ (Command('hg base', stderr=(
+ "hg: unknown command 'base'"
+ '\n(did you mean one of blame, phase, rebase?)'
+ )), ['blame', 'phase', 'rebase']),
+ (Command('hg branchch', stderr=(
+ "hg: unknown command 'branchch'"
+ '\n(did you mean one of branch, branches?)'
+ )), ['branch', 'branches']),
+ (Command('hg vert', stderr=(
+ "hg: unknown command 'vert'"
+ '\n(did you mean one of revert?)'
+ )), ['revert']),
+ (Command('hg lgo -r tip', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n(did you mean one of log?)'
+ )), ['log']),
+ (Command('hg rerere', stderr=(
+ "hg: unknown command 'rerere'"
+ '\n(did you mean one of revert?)'
+ )), ['revert']),
+ (Command('hg re', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n rebase recover remove rename resolve revert'
+ )), ['rebase', 'recover', 'remove', 'rename', 'resolve', 'revert']),
+ (Command('hg re re', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n rebase recover remove rename resolve revert'
+ )), ['rebase', 'recover', 'remove', 'rename', 'resolve', 'revert']),
+])
+def test_extract_possisiblities(command, possibilities):
+ assert extract_possisiblities(command) == possibilities
+
+
+@pytest.mark.parametrize('command, new_command', [
+ (Command('hg base', stderr=(
+ "hg: unknown command 'base'"
+ '\n(did you mean one of blame, phase, rebase?)'
+ )), 'hg rebase'),
+ (Command('hg branchch', stderr=(
+ "hg: unknown command 'branchch'"
+ '\n(did you mean one of branch, branches?)'
+ )), 'hg branch'),
+ (Command('hg vert', stderr=(
+ "hg: unknown command 'vert'"
+ '\n(did you mean one of revert?)'
+ )), 'hg revert'),
+ (Command('hg lgo -r tip', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n(did you mean one of log?)'
+ )), 'hg log -r tip'),
+ (Command('hg rerere', stderr=(
+ "hg: unknown command 'rerere'"
+ '\n(did you mean one of revert?)'
+ )), 'hg revert'),
+ (Command('hg re', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n rebase recover remove rename resolve revert'
+ )), 'hg rebase'),
+ (Command('hg re re', stderr=(
+ "hg: command 're' is ambiguous:"
+ '\n rebase recover remove rename resolve revert'
+ )), 'hg rebase re'),
+])
+def test_get_new_command(command, new_command):
+ assert get_new_command(command, None) == new_command
diff --git a/thefuck/rules/mercurial.py b/thefuck/rules/mercurial.py
new file mode 100644
index 000000000..934e3f1e8
--- /dev/null
+++ b/thefuck/rules/mercurial.py
@@ -0,0 +1,34 @@
+import re
+
+from difflib import get_close_matches
+
+
+def extract_possisiblities(command):
+ possib = re.findall(r'\n\(did you mean one of ([^\?]+)\?\)', command.stderr)
+ if possib:
+ return possib[0].split(', ')
+ possib = re.findall(r'\n ([^$]+)$', command.stderr)
+ if possib:
+ return possib[0].split(' ')
+ return possib
+
+
+def match(command, settings):
+ return (command.script.startswith('hg ')
+ and ('hg: unknown command' in command.stderr
+ and '(did you mean one of ' in command.stderr
+ or "hg: command '" in command.stderr
+ and "' is ambiguous:" in command.stderr
+ )
+ )
+
+
+def get_new_command(command, settings):
+ script = command.script.split(' ')
+ possisiblities = extract_possisiblities(command)
+ matches = get_close_matches(script[1], possisiblities)
+ if matches:
+ script[1] = matches[0]
+ else:
+ script[1] = possisiblities[0]
+ return ' '.join(script)
| Please review and comment!
| https://api.github.com/repos/nvbn/thefuck/pulls/281 | 2015-07-07T00:40:07Z | 2015-07-07T13:36:06Z | 2015-07-07T13:36:06Z | 2015-07-08T01:25:42Z | 1,738 | nvbn/thefuck | 30,780 |
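
The heart of the rule is `difflib.get_close_matches` over the candidates parsed from hg's stderr; a quick illustration of why `hg base` corrects to `hg rebase` (candidates taken from the first test case):

```python
from difflib import get_close_matches

# Candidates as parsed from hg's stderr in the first test case.
possibilities = ["blame", "phase", "rebase"]
matches = get_close_matches("base", possibilities)
print(matches[0])  # rebase -- so "hg base" becomes "hg rebase"
```
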
Fix mypyc compatibility issue | diff --git a/src/black/parsing.py b/src/black/parsing.py
index 504e20be00..32cfa5239f 100644
--- a/src/black/parsing.py
+++ b/src/black/parsing.py
@@ -169,6 +169,7 @@ def stringify_ast(
yield f"{' ' * depth}{node.__class__.__name__}("
+ type_ignore_classes: Tuple[Type[Any], ...]
for field in sorted(node._fields): # noqa: F402
# TypeIgnore will not be present using pypy < 3.8, so need for this
if not (_IS_PYPY and sys.version_info < (3, 8)):
| ### Description
I can't wait for when we drop Python 2 support FWIW :)
### Checklist - did you ...
- [x] Add a CHANGELOG entry if necessary? -> n/a
- [x] Add / update tests if necessary? -> n/a
- [x] Add new / update outdated documentation? -> n/a
| https://api.github.com/repos/psf/black/pulls/2628 | 2021-11-19T03:07:24Z | 2021-11-19T03:20:45Z | 2021-11-19T03:20:45Z | 2021-11-19T03:20:47Z | 155 | psf/black | 24,458 |
Added a descriptive error if domain list includes a Unicode-encoded IDN | diff --git a/letsencrypt/configuration.py b/letsencrypt/configuration.py
index a2a54d2d062..69778f5f049 100644
--- a/letsencrypt/configuration.py
+++ b/letsencrypt/configuration.py
@@ -144,6 +144,15 @@ def _check_config_domain_sanity(domains):
if any("xn--" in d for d in domains):
raise errors.ConfigurationError(
"Punycode domains are not supported")
+
+ # Unicode
+ try:
+ for domain in domains:
+ domain.encode('ascii')
+ except UnicodeDecodeError:
+ raise errors.ConfigurationError(
+ "Internationalized domain names are not supported")
+
# FQDN checks from
# http://www.mkyong.com/regular-expressions/domain-name-regular-expression-example/
# Characters used, domain parts < 63 chars, tld > 1 < 64 chars
| The current error for IDNs passed in Unicode form is incorrect and does not describe the actual problem:
```
$ letsencrypt --manual -d example.com -d ёжикв.сайт
Requested domain is not a FQDN
```
This change checks for any domain that cannot be encoded as ASCII and, if one is present, reports:
```
$ letsencrypt --manual -d example.com -d ёжикв.сайт
Internationalized domain names are not supported
```
| https://api.github.com/repos/certbot/certbot/pulls/1759 | 2015-12-05T06:26:33Z | 2015-12-05T07:17:21Z | 2015-12-05T07:17:21Z | 2016-05-06T19:22:00Z | 205 | certbot/certbot | 496 |
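
For reference, a standalone Python 3 sketch of the same check (`check_ascii` is a hypothetical helper; note that Python 3 raises `UnicodeEncodeError` here, while the `UnicodeDecodeError` caught in the diff is the Python 2 behavior for byte strings):

```python
def check_ascii(domains):
    """Hypothetical standalone version of the check added in this PR."""
    try:
        for domain in domains:
            domain.encode("ascii")
    except UnicodeEncodeError:
        raise ValueError("Internationalized domain names are not supported")

check_ascii(["example.com"])  # passes silently
try:
    check_ascii(["ёжикв.сайт"])
except ValueError as err:
    print(err)  # Internationalized domain names are not supported
```
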
`chat_loaders` refactoring | diff --git a/libs/langchain/langchain/chat_loaders/imessage.py b/libs/langchain/langchain/chat_loaders/imessage.py
index d6c02f1e5307a2..eed0cfea3795ee 100644
--- a/libs/langchain/langchain/chat_loaders/imessage.py
+++ b/libs/langchain/langchain/chat_loaders/imessage.py
@@ -4,13 +4,13 @@
from typing import TYPE_CHECKING, Iterator, List, Optional, Union
from langchain import schema
-from langchain.chat_loaders import base as chat_loaders
+from langchain.chat_loaders.base import BaseChatLoader, ChatSession
if TYPE_CHECKING:
import sqlite3
-class IMessageChatLoader(chat_loaders.BaseChatLoader):
+class IMessageChatLoader(BaseChatLoader):
"""Load chat sessions from the `iMessage` chat.db SQLite file.
It only works on macOS when you have iMessage enabled and have the chat.db file.
@@ -18,8 +18,8 @@ class IMessageChatLoader(chat_loaders.BaseChatLoader):
The chat.db file is likely located at ~/Library/Messages/chat.db. However, your
terminal may not have permission to access this file. To resolve this, you can
copy the file to a different location, change the permissions of the file, or
- grant full disk access for your terminal emulator in System Settings > Security
- and Privacy > Full Disk Access.
+ grant full disk access for your terminal emulator
+ in System Settings > Security and Privacy > Full Disk Access.
"""
def __init__(self, path: Optional[Union[str, Path]] = None):
@@ -46,7 +46,7 @@ def __init__(self, path: Optional[Union[str, Path]] = None):
def _load_single_chat_session(
self, cursor: "sqlite3.Cursor", chat_id: int
- ) -> chat_loaders.ChatSession:
+ ) -> ChatSession:
"""
Load a single chat session from the iMessage chat.db.
@@ -83,9 +83,9 @@ def _load_single_chat_session(
)
)
- return chat_loaders.ChatSession(messages=results)
+ return ChatSession(messages=results)
- def lazy_load(self) -> Iterator[chat_loaders.ChatSession]:
+ def lazy_load(self) -> Iterator[ChatSession]:
"""
Lazy load the chat sessions from the iMessage chat.db
and yield them in the required format.
diff --git a/libs/langchain/langchain/chat_loaders/slack.py b/libs/langchain/langchain/chat_loaders/slack.py
index 0bbd503979c7c1..7c9f76c9650e83 100644
--- a/libs/langchain/langchain/chat_loaders/slack.py
+++ b/libs/langchain/langchain/chat_loaders/slack.py
@@ -6,12 +6,12 @@
from typing import Dict, Iterator, List, Union
from langchain import schema
-from langchain.chat_loaders import base as chat_loaders
+from langchain.chat_loaders.base import BaseChatLoader, ChatSession
logger = logging.getLogger(__name__)
-class SlackChatLoader(chat_loaders.BaseChatLoader):
+class SlackChatLoader(BaseChatLoader):
"""Load `Slack` conversations from a dump zip file."""
def __init__(
@@ -27,9 +27,7 @@ def __init__(
if not self.zip_path.exists():
raise FileNotFoundError(f"File {self.zip_path} not found")
- def _load_single_chat_session(
- self, messages: List[Dict]
- ) -> chat_loaders.ChatSession:
+ def _load_single_chat_session(self, messages: List[Dict]) -> ChatSession:
results: List[Union[schema.AIMessage, schema.HumanMessage]] = []
previous_sender = None
for message in messages:
@@ -62,7 +60,7 @@ def _load_single_chat_session(
)
)
previous_sender = sender
- return chat_loaders.ChatSession(messages=results)
+ return ChatSession(messages=results)
def _read_json(self, zip_file: zipfile.ZipFile, file_path: str) -> List[dict]:
"""Read JSON data from a zip subfile."""
@@ -72,7 +70,7 @@ def _read_json(self, zip_file: zipfile.ZipFile, file_path: str) -> List[dict]:
raise ValueError(f"Expected list of dictionaries, got {type(data)}")
return data
- def lazy_load(self) -> Iterator[chat_loaders.ChatSession]:
+ def lazy_load(self) -> Iterator[ChatSession]:
"""
Lazy load the chat sessions from the Slack dump file and yield them
in the required format.
diff --git a/libs/langchain/langchain/chat_loaders/telegram.py b/libs/langchain/langchain/chat_loaders/telegram.py
index 5f0bbfa3246d86..12c30014ac1fa2 100644
--- a/libs/langchain/langchain/chat_loaders/telegram.py
+++ b/libs/langchain/langchain/chat_loaders/telegram.py
@@ -7,12 +7,12 @@
from typing import Iterator, List, Union
from langchain import schema
-from langchain.chat_loaders import base as chat_loaders
+from langchain.chat_loaders.base import BaseChatLoader, ChatSession
logger = logging.getLogger(__name__)
-class TelegramChatLoader(chat_loaders.BaseChatLoader):
+class TelegramChatLoader(BaseChatLoader):
"""Load `telegram` conversations to LangChain chat messages.
To export, use the Telegram Desktop app from
@@ -35,16 +35,14 @@ def __init__(
"""
self.path = path if isinstance(path, str) else str(path)
- def _load_single_chat_session_html(
- self, file_path: str
- ) -> chat_loaders.ChatSession:
+ def _load_single_chat_session_html(self, file_path: str) -> ChatSession:
"""Load a single chat session from an HTML file.
Args:
file_path (str): Path to the HTML file.
Returns:
- chat_loaders.ChatSession: The loaded chat session.
+ ChatSession: The loaded chat session.
"""
try:
from bs4 import BeautifulSoup
@@ -81,18 +79,16 @@ def _load_single_chat_session_html(
)
previous_sender = from_name
- return chat_loaders.ChatSession(messages=results)
+ return ChatSession(messages=results)
- def _load_single_chat_session_json(
- self, file_path: str
- ) -> chat_loaders.ChatSession:
+ def _load_single_chat_session_json(self, file_path: str) -> ChatSession:
"""Load a single chat session from a JSON file.
Args:
file_path (str): Path to the JSON file.
Returns:
- chat_loaders.ChatSession: The loaded chat session.
+ ChatSession: The loaded chat session.
"""
with open(file_path, "r", encoding="utf-8") as file:
data = json.load(file)
@@ -114,7 +110,7 @@ def _load_single_chat_session_json(
)
)
- return chat_loaders.ChatSession(messages=results)
+ return ChatSession(messages=results)
def _iterate_files(self, path: str) -> Iterator[str]:
"""Iterate over files in a directory or zip file.
@@ -139,12 +135,12 @@ def _iterate_files(self, path: str) -> Iterator[str]:
with tempfile.TemporaryDirectory() as temp_dir:
yield zip_file.extract(file, path=temp_dir)
- def lazy_load(self) -> Iterator[chat_loaders.ChatSession]:
+ def lazy_load(self) -> Iterator[ChatSession]:
"""Lazy load the messages from the chat file and yield them
in as chat sessions.
Yields:
- chat_loaders.ChatSession: The loaded chat session.
+ ChatSession: The loaded chat session.
"""
for file_path in self._iterate_files(self.path):
if file_path.endswith(".html"):
diff --git a/libs/langchain/langchain/chat_loaders/whatsapp.py b/libs/langchain/langchain/chat_loaders/whatsapp.py
index e2518ab44df660..39266485e23ea3 100644
--- a/libs/langchain/langchain/chat_loaders/whatsapp.py
+++ b/libs/langchain/langchain/chat_loaders/whatsapp.py
@@ -5,13 +5,13 @@
from typing import Iterator, List, Union
from langchain import schema
-from langchain.chat_loaders import base as chat_loaders
+from langchain.chat_loaders.base import BaseChatLoader, ChatSession
from langchain.schema import messages
logger = logging.getLogger(__name__)
-class WhatsAppChatLoader(chat_loaders.BaseChatLoader):
+class WhatsAppChatLoader(BaseChatLoader):
"""Load `WhatsApp` conversations from a dump zip file or directory."""
def __init__(self, path: str):
@@ -42,7 +42,7 @@ def __init__(self, path: str):
flags=re.IGNORECASE,
)
- def _load_single_chat_session(self, file_path: str) -> chat_loaders.ChatSession:
+ def _load_single_chat_session(self, file_path: str) -> ChatSession:
"""Load a single chat session from a file.
Args:
@@ -84,7 +84,7 @@ def _load_single_chat_session(self, file_path: str) -> chat_loaders.ChatSession:
)
else:
logger.debug(f"Could not parse line: {line}")
- return chat_loaders.ChatSession(messages=results)
+ return ChatSession(messages=results)
def _iterate_files(self, path: str) -> Iterator[str]:
"""Iterate over the files in a directory or zip file.
@@ -108,7 +108,7 @@ def _iterate_files(self, path: str) -> Iterator[str]:
if file.endswith(".txt"):
yield zip_file.extract(file)
- def lazy_load(self) -> Iterator[chat_loaders.ChatSession]:
+ def lazy_load(self) -> Iterator[ChatSession]:
"""Lazy load the messages from the chat file and yield
them as chat sessions.
| Replaced unnecessary namespace renaming
`from langchain.chat_loaders import base as chat_loaders`
with
`from langchain.chat_loaders.base import BaseChatLoader, ChatSession`
and simplified the corresponding types.
@eyurtsev | https://api.github.com/repos/langchain-ai/langchain/pulls/10381 | 2023-09-08T17:16:24Z | 2023-09-09T22:22:56Z | 2023-09-09T22:22:56Z | 2023-09-10T18:23:43Z | 2,233 | langchain-ai/langchain | 43,519 |
Added scibert-nli model card | diff --git a/model_cards/gsarti/scibert-nli/README.md b/model_cards/gsarti/scibert-nli/README.md
new file mode 100644
index 0000000000000..1388fea8d557a
--- /dev/null
+++ b/model_cards/gsarti/scibert-nli/README.md
@@ -0,0 +1,32 @@
+# SciBERT-NLI
+
+This is the model [SciBERT](https://github.com/allenai/scibert) [1] fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [2].
+
+The model uses the original `scivocab` wordpiece vocabulary and was trained using the **average pooling strategy** and a **softmax loss**.
+
+**Base model**: `allenai/scibert-scivocab-cased` from HuggingFace AutoModel
+
+**Parameters**:
+
+| Parameter | Value |
+|----------------|-------|
+| Batch size | 64 |
+| Training steps | 20000 |
+| Warmup steps | 1450 |
+
+**Performances**: The performance was evaluated on the test portion of the [STS dataset](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) using Spearman rank correlation and compared to the performances of a general BERT base model obtained with the same procedure to verify their similarity.
+
+
+| Model | Score |
+|-----------------------------|-------------|
+| `scibert-nli` (ours) | 74.50 |
+| `bert-base-nli-mean-tokens` | 77.12 |
+
+
+An example usage for similarity-based scientific paper retrieval is provided in the [Covid Papers Browser](https://github.com/gsarti/covid-papers-browser) repository.
+
+**References:**
+
+[1] I. Beltagy et al, [SciBERT: A Pretrained Language Model for Scientific Text](https://www.aclweb.org/anthology/D19-1371/)
+
+[2] A. Conneau et al., [Supervised Learning of Universal Sentence Representations from Natural Language Inference Data](https://www.aclweb.org/anthology/D17-1070/)
| https://api.github.com/repos/huggingface/transformers/pulls/3376 | 2020-03-22T13:26:17Z | 2020-03-23T15:55:42Z | 2020-03-23T15:55:42Z | 2020-03-23T15:55:43Z | 544 | huggingface/transformers | 12,407 |
|
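
A minimal usage sketch with the `sentence-transformers` library mentioned in the card (the model id `gsarti/scibert-nli` is inferred from the model card path, and loading it this way assumes the library's default mean-pooling wrapper for plain transformer checkpoints):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("gsarti/scibert-nli")
emb = model.encode([
    "SciBERT is a BERT model trained on scientific text.",
    "We pretrain a language model on papers from Semantic Scholar.",
])
# Cosine similarity between the two sentence embeddings.
cos = np.dot(emb[0], emb[1]) / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(cos)
```
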
Set CSR version in make_csr | diff --git a/letsencrypt/crypto_util.py b/letsencrypt/crypto_util.py
index 76265a73914..5fdcba843aa 100644
--- a/letsencrypt/crypto_util.py
+++ b/letsencrypt/crypto_util.py
@@ -118,6 +118,7 @@ def make_csr(key_str, domains):
value=", ".join("DNS:%s" % d for d in domains)
),
])
+ req.set_version(2)
req.set_pubkey(pkey)
req.sign(pkey, "sha256")
return tuple(OpenSSL.crypto.dump_certificate_request(method, req)
| Fixes #2528.
| https://api.github.com/repos/certbot/certbot/pulls/2529 | 2016-02-23T05:45:45Z | 2016-02-23T16:08:23Z | 2016-02-23T16:08:23Z | 2016-05-06T19:22:24Z | 134 | certbot/certbot | 632 |
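
The effect of the added line is easy to check in isolation; a sketch using pyOpenSSL, the same API as the diff:

```python
import OpenSSL

key = OpenSSL.crypto.PKey()
key.generate_key(OpenSSL.crypto.TYPE_RSA, 2048)

req = OpenSSL.crypto.X509Req()
req.set_version(2)        # the line this PR adds before signing
req.set_pubkey(key)
req.sign(key, "sha256")
print(req.get_version())  # 2
```
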
🌐 Update Chinese translation for `docs/zh/docs/tutorial/query-params.md` | diff --git a/docs/zh/docs/tutorial/query-params.md b/docs/zh/docs/tutorial/query-params.md
index a0cc7fea39310..308dd68a486ab 100644
--- a/docs/zh/docs/tutorial/query-params.md
+++ b/docs/zh/docs/tutorial/query-params.md
@@ -1,67 +1,67 @@
# 查询参数
-声明不属于路径参数的其他函数参数时,它们将被自动解释为"查询字符串"参数
+声明的参数不是路径参数时,路径操作函数会把该参数自动解释为**查询**参数。
```Python hl_lines="9"
{!../../../docs_src/query_params/tutorial001.py!}
```
-查询字符串是键值对的集合,这些键值对位于 URL 的 `?` 之后,并以 `&` 符号分隔。
+查询字符串是键值对的集合,这些键值对位于 URL 的 `?` 之后,以 `&` 分隔。
-例如,在以下 url 中:
+例如,以下 URL 中:
```
http://127.0.0.1:8000/items/?skip=0&limit=10
```
-...查询参数为:
+……查询参数为:
-* `skip`:对应的值为 `0`
-* `limit`:对应的值为 `10`
+* `skip`:值为 `0`
+* `limit`:值为 `10`
-由于它们是 URL 的一部分,因此它们的"原始值"是字符串。
+这些值都是 URL 的组成部分,因此,它们的类型**本应**是字符串。
-但是,当你为它们声明了 Python 类型(在上面的示例中为 `int`)时,它们将转换为该类型并针对该类型进行校验。
+但声明 Python 类型(上例中为 `int`)之后,这些值就会转换为声明的类型,并进行类型校验。
-应用于路径参数的所有相同过程也适用于查询参数:
+所有应用于路径参数的流程也适用于查询参数:
-* (很明显的)编辑器支持
-* 数据<abbr title="将来自 HTTP 请求的字符串转换为 Python 数据类型">"解析"</abbr>
+* (显而易见的)编辑器支持
+* 数据<abbr title="将来自 HTTP 请求的字符串转换为 Python 数据类型">**解析**</abbr>
* 数据校验
-* 自动生成文档
+* API 文档
## 默认值
-由于查询参数不是路径的固定部分,因此它们可以是可选的,并且可以有默认值。
+查询参数不是路径的固定内容,它是可选的,还支持默认值。
-在上面的示例中,它们具有 `skip=0` 和 `limit=10` 的默认值。
+上例用 `skip=0` 和 `limit=10` 设定默认值。
-因此,访问 URL:
+访问 URL:
```
http://127.0.0.1:8000/items/
```
-将与访问以下地址相同:
+与访问以下地址相同:
```
http://127.0.0.1:8000/items/?skip=0&limit=10
```
-但是,如果你访问的是:
+但如果访问:
```
http://127.0.0.1:8000/items/?skip=20
```
-函数中的参数值将会是:
+查询参数的值就是:
* `skip=20`:在 URL 中设定的值
* `limit=10`:使用默认值
## 可选参数
-通过同样的方式,你可以将它们的默认值设置为 `None` 来声明可选查询参数:
+同理,把默认值设为 `None` 即可声明**可选的**查询参数:
=== "Python 3.10+"
@@ -76,20 +76,27 @@ http://127.0.0.1:8000/items/?skip=20
```
-在这个例子中,函数参数 `q` 将是可选的,并且默认值为 `None`。
+本例中,查询参数 `q` 是可选的,默认值为 `None`。
-!!! check
- 还要注意的是,**FastAPI** 足够聪明,能够分辨出参数 `item_id` 是路径参数而 `q` 不是,因此 `q` 是一个查询参数。
+!!! check "检查"
+
+ 注意,**FastAPI** 可以识别出 `item_id` 是路径参数,`q` 不是路径参数,而是查询参数。
+
+!!! note "笔记"
+
+ 因为默认值为 `= None`,FastAPI 把 `q` 识别为可选参数。
+
+ FastAPI 不使用 `Optional[str]` 中的 `Optional`(只使用 `str`),但 `Optional[str]` 可以帮助编辑器发现代码中的错误。
## 查询参数类型转换
-你还可以声明 `bool` 类型,它们将被自动转换:
+参数还可以声明为 `bool` 类型,FastAPI 会自动转换参数类型:
-```Python hl_lines="7"
+```Python hl_lines="9"
{!../../../docs_src/query_params/tutorial003.py!}
```
-这个例子中,如果你访问:
+本例中,访问:
```
http://127.0.0.1:8000/items/foo?short=1
@@ -119,42 +126,42 @@ http://127.0.0.1:8000/items/foo?short=on
http://127.0.0.1:8000/items/foo?short=yes
```
-或任何其他的变体形式(大写,首字母大写等等),你的函数接收的 `short` 参数都会是布尔值 `True`。对于值为 `False` 的情况也是一样的。
+或其它任意大小写形式(大写、首字母大写等),函数接收的 `short` 参数都是布尔值 `True`。值为 `False` 时也一样。
## 多个路径和查询参数
-你可以同时声明多个路径参数和查询参数,**FastAPI** 能够识别它们。
+**FastAPI** 可以识别同时声明的多个路径参数和查询参数。
-而且你不需要以任何特定的顺序来声明。
+而且声明查询参数的顺序并不重要。
-它们将通过名称被检测到:
+FastAPI 通过参数名进行检测:
-```Python hl_lines="6 8"
+```Python hl_lines="8 10"
{!../../../docs_src/query_params/tutorial004.py!}
```
-## 必需查询参数
+## 必选查询参数
-当你为非路径参数声明了默认值时(目前而言,我们所知道的仅有查询参数),则该参数不是必需的。
+为不是路径参数的参数声明默认值(至此,仅有查询参数),该参数就**不是必选**的了。
-如果你不想添加一个特定的值,而只是想使该参数成为可选的,则将默认值设置为 `None`。
+如果只想把参数设为**可选**,但又不想指定参数的值,则要把默认值设为 `None`。
-但当你想让一个查询参数成为必需的,不声明任何默认值就可以:
+如果要把查询参数设置为**必选**,就不要声明默认值:
```Python hl_lines="6-7"
{!../../../docs_src/query_params/tutorial005.py!}
```
-这里的查询参数 `needy` 是类型为 `str` 的必需查询参数。
+这里的查询参数 `needy` 是类型为 `str` 的必选查询参数。
-如果你在浏览器中打开一个像下面的 URL:
+在浏览器中打开如下 URL:
```
http://127.0.0.1:8000/items/foo-item
```
-...因为没有添加必需的参数 `needy`,你将看到类似以下的错误:
+……因为路径中没有必选参数 `needy`,返回的响应中会显示如下错误信息:
```JSON
{
@@ -171,13 +178,13 @@ http://127.0.0.1:8000/items/foo-item
}
```
-由于 `needy` 是必需参数,因此你需要在 URL 中设置它的值:
+`needy` 是必选参数,因此要在 URL 中设置值:
```
http://127.0.0.1:8000/items/foo-item?needy=sooooneedy
```
-...这样就正常了:
+……这样就正常了:
```JSON
{
@@ -186,17 +193,18 @@ http://127.0.0.1:8000/items/foo-item?needy=sooooneedy
}
```
-当然,你也可以定义一些参数为必需的,一些具有默认值,而某些则完全是可选的:
+当然,把一些参数定义为必选,为另一些参数设置默认值,再把其它参数定义为可选,这些操作都是可以的:
-```Python hl_lines="7"
+```Python hl_lines="10"
{!../../../docs_src/query_params/tutorial006.py!}
```
-在这个例子中,有3个查询参数:
+本例中有 3 个查询参数:
+
+* `needy`,必选的 `str` 类型参数
+* `skip`,默认值为 `0` 的 `int` 类型参数
+* `limit`,可选的 `int` 类型参数
-* `needy`,一个必需的 `str` 类型参数。
-* `skip`,一个默认值为 `0` 的 `int` 类型参数。
-* `limit`,一个可选的 `int` 类型参数。
+!!! tip "提示"
-!!! tip
- 你还可以像在 [路径参数](path-params.md#predefined-values){.internal-link target=_blank} 中那样使用 `Enum`。
+ 还可以像在[路径参数](path-params.md#predefined-values){.internal-link target=_blank} 中那样使用 `Enum`。
| also fix code highlight for tutorial002.py and tutorial006.py | https://api.github.com/repos/tiangolo/fastapi/pulls/3480 | 2021-07-07T08:05:07Z | 2024-04-01T05:36:48Z | 2024-04-01T05:36:48Z | 2024-04-01T05:36:48Z | 2,290 | tiangolo/fastapi | 23,195 |
C.146 Fix variable name in example | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index ec3200ed6..94876f845 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -8207,8 +8207,8 @@ Consider:
cout << pb2->id(); // "D"
- if (pb1->id() == "D") { // looks innocent
- D* pd = static_cast<D*>(pb1);
+ if (pb2->id() == "D") { // looks innocent
+ D* pd = static_cast<D*>(pb2);
// ...
}
// ...
| https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/2116 | 2023-08-06T04:02:14Z | 2023-08-06T05:25:02Z | 2023-08-06T05:25:02Z | 2023-08-06T05:25:02Z | 151 | isocpp/CppCoreGuidelines | 15,273 |
|
Reset progress bar in between report runs | diff --git a/frontend/src/components/core/Block/Block.tsx b/frontend/src/components/core/Block/Block.tsx
index a403c3d1674f..c6df749bb5fe 100644
--- a/frontend/src/components/core/Block/Block.tsx
+++ b/frontend/src/components/core/Block/Block.tsx
@@ -16,7 +16,6 @@
*/
import React, { PureComponent, ReactNode, Suspense } from "react"
-import { Progress } from "reactstrap"
import { AutoSizer } from "react-virtualized"
import { List, Map as ImmutableMap } from "immutable"
import { dispatchOneOf } from "lib/immutableProto"
@@ -59,6 +58,7 @@ const Button = React.lazy(() => import("components/widgets/Button/"))
const Checkbox = React.lazy(() => import("components/widgets/Checkbox/"))
const DateInput = React.lazy(() => import("components/widgets/DateInput/"))
const Multiselect = React.lazy(() => import("components/widgets/Multiselect/"))
+const Progress = React.lazy(() => import("components/elements/Progress/"))
const Radio = React.lazy(() => import("components/widgets/Radio/"))
const Selectbox = React.lazy(() => import("components/widgets/Selectbox/"))
const Slider = React.lazy(() => import("components/widgets/Slider/"))
@@ -249,13 +249,7 @@ class Block extends PureComponent<Props> {
plotlyChart: (el: SimpleElement) => (
<PlotlyChart element={el} width={width} />
),
- progress: (el: SimpleElement) => (
- <Progress
- value={el.get("value")}
- className="stProgress"
- style={{ width }}
- />
- ),
+ progress: (el: SimpleElement) => <Progress element={el} width={width} />,
table: (el: SimpleElement) => <Table element={el} width={width} />,
text: (el: SimpleElement) => <Text element={el} width={width} />,
vegaLiteChart: (el: SimpleElement) => (
diff --git a/frontend/src/components/elements/Progress/Progress.scss b/frontend/src/components/elements/Progress/Progress.scss
new file mode 100644
index 000000000000..584db1e590f8
--- /dev/null
+++ b/frontend/src/components/elements/Progress/Progress.scss
@@ -0,0 +1,32 @@
+/**
+ * Copyright 2018-2019 Streamlit Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+@import "src/assets/css/variables";
+
+// Reset the progress bar to 0 without animating on report re-run.
+.reportview-container {
+ .without-transition .progress-bar {
+ transition: none;
+ }
+
+ .with-transition .progress-bar {
+ transition: width 0.1s linear;
+ }
+
+ .stale-element .progress-bar {
+ background-color: transparent;
+ }
+}
diff --git a/frontend/src/components/elements/Progress/Progress.tsx b/frontend/src/components/elements/Progress/Progress.tsx
new file mode 100644
index 000000000000..3142ac25a34b
--- /dev/null
+++ b/frontend/src/components/elements/Progress/Progress.tsx
@@ -0,0 +1,63 @@
+/**
+ * @license
+ * Copyright 2018-2019 Streamlit Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import React from "react"
+import { Map as ImmutableMap } from "immutable"
+import { Progress as UIProgress } from "reactstrap"
+
+import "./Progress.scss"
+
+interface Props {
+ width: number
+ element: ImmutableMap<string, any>
+}
+
+const FAST_UPDATE_MS = 50
+
+class Progress extends React.PureComponent<Props> {
+ lastValue = -1
+ lastAnimatedTime = -1
+
+ public render(): React.ReactNode {
+ const { element, width } = this.props
+ const value = element.get("value")
+ const time = new Date().getTime()
+
+ // Make progress bar stop acting weird when moving backwards or quickly.
+ const isMovingBackwards = value < this.lastValue
+ const isMovingSuperFast = time - this.lastAnimatedTime < FAST_UPDATE_MS
+ const className =
+ isMovingBackwards || isMovingSuperFast
+ ? "without-transition"
+ : "with-transition"
+
+ if (className === "with-transition") {
+ this.lastAnimatedTime = time
+ }
+ this.lastValue = value
+
+ return (
+ <UIProgress
+ value={value}
+ className={"stProgress " + className}
+ style={{ width }}
+ />
+ )
+ }
+}
+
+export default Progress
diff --git a/frontend/src/components/elements/Progress/index.tsx b/frontend/src/components/elements/Progress/index.tsx
new file mode 100644
index 000000000000..9bf69066a927
--- /dev/null
+++ b/frontend/src/components/elements/Progress/index.tsx
@@ -0,0 +1,18 @@
+/**
+ * @license
+ * Copyright 2018-2019 Streamlit Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+export { default } from "./Progress"
diff --git a/lib/streamlit/credentials.py b/lib/streamlit/credentials.py
index 7202ba6cf8b2..b32909eca171 100644
--- a/lib/streamlit/credentials.py
+++ b/lib/streamlit/credentials.py
@@ -202,8 +202,10 @@ def activate(self, show_instructions=True):
email = ""
else:
email = click.prompt(
- text=EMAIL_PROMPT, prompt_suffix="", default="",
- show_default=False
+ text=EMAIL_PROMPT,
+ prompt_suffix="",
+ default="",
+ show_default=False,
)
self.activation = _verify_email(email)
diff --git a/lib/streamlit/server/Server.py b/lib/streamlit/server/Server.py
index 5280bb3d386d..723de9c72ea8 100644
--- a/lib/streamlit/server/Server.py
+++ b/lib/streamlit/server/Server.py
@@ -247,10 +247,11 @@ def _create_app(self):
routes.extend(
[
- (r"/(.*)", StaticFileHandler, {
- "path": "%s/" % static_path,
- "default_filename": "index.html",
- }),
+ (
+ r"/(.*)",
+ StaticFileHandler,
+ {"path": "%s/" % static_path, "default_filename": "index.html"},
+ )
]
)
diff --git a/lib/streamlit/util.py b/lib/streamlit/util.py
index bb553c899e60..d9eafdb92cb8 100644
--- a/lib/streamlit/util.py
+++ b/lib/streamlit/util.py
@@ -252,6 +252,7 @@ def open_browser(url):
# browser even though 'start url' works from the command prompt.
# Fun!
import webbrowser
+
webbrowser.open(url)
return
| **Issue:**
https://github.com/streamlit/streamlit-old-private/issues/760
**Description:**
- Unmount progress bar when it's stale so it resets
**Notes:**
- I didn't see any issues with the `update-progress animation` as mentioned in the issue. Perhaps @tconkling can provide reproduction steps for that. | https://api.github.com/repos/streamlit/streamlit/pulls/321 | 2019-10-08T15:01:11Z | 2019-10-14T20:48:35Z | 2019-10-14T20:48:35Z | 2019-10-14T20:48:38Z | 1,968 | streamlit/streamlit | 22,550 |
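
The class-name decision in `Progress.tsx` is small enough to restate as a Python sketch (a hypothetical port; the real implementation is the TypeScript above): animate only when the bar moves forward and updates are not arriving faster than `FAST_UPDATE_MS`.

```python
import time

FAST_UPDATE_MS = 50

class ProgressTransitionPicker:
    """Hypothetical Python port of the class-name logic in Progress.tsx."""

    def __init__(self):
        self.last_value = -1
        self.last_animated_ms = -1.0

    def css_class(self, value):
        now_ms = time.time() * 1000
        moving_backwards = value < self.last_value
        moving_super_fast = now_ms - self.last_animated_ms < FAST_UPDATE_MS
        name = ("without-transition"
                if moving_backwards or moving_super_fast
                else "with-transition")
        if name == "with-transition":
            self.last_animated_ms = now_ms
        self.last_value = value
        return name

picker = ProgressTransitionPicker()
print(picker.css_class(10))  # with-transition
print(picker.css_class(5))   # without-transition (backwards, and too soon)
```
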
Remove redundant SQL index in Pastebin exercise | diff --git a/solutions/system_design/pastebin/README.md b/solutions/system_design/pastebin/README.md
index 756c78c274..2d87ddcc7e 100644
--- a/solutions/system_design/pastebin/README.md
+++ b/solutions/system_design/pastebin/README.md
@@ -116,7 +116,7 @@ paste_path varchar(255) NOT NULL
PRIMARY KEY(shortlink)
```
-We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `shortlink ` and `created_at` to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
+Setting the primary key to be based on the `shortlink` column creates an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) that the database uses to enforce uniqueness. We'll create an additional index on `created_at` to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
To generate the unique url, we could:
| Making a column a primary key enforces uniqueness, and the db usually does so by creating a unique clustered index. So, you don't need a second index. | https://api.github.com/repos/donnemartin/system-design-primer/pulls/405 | 2020-04-20T19:16:06Z | 2020-07-07T01:05:51Z | 2020-07-07T01:05:51Z | 2020-07-07T01:06:04Z | 380 | donnemartin/system-design-primer | 36,749 |
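
The claim is easy to verify with SQLite, whose behavior matches other engines here (column names follow the exercise's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pastes (shortlink TEXT PRIMARY KEY, created_at TEXT)")
conn.execute("CREATE INDEX idx_created_at ON pastes (created_at)")

# The PRIMARY KEY already created a unique index on shortlink, so lookups
# by shortlink use it without any extra CREATE INDEX:
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM pastes WHERE shortlink = ?", ("abc",)
).fetchall()
print(plan)  # ... 'SEARCH pastes USING INDEX sqlite_autoindex_pastes_1 (shortlink=?)'
```
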
Prep for 1.32.1 | diff --git a/.azure-pipelines/release.yml b/.azure-pipelines/release.yml
index 1c983a3b647..2374289e3c9 100644
--- a/.azure-pipelines/release.yml
+++ b/.azure-pipelines/release.yml
@@ -8,7 +8,7 @@ pr: none
variables:
dockerTag: ${{variables['Build.SourceBranchName']}}
- snapBuildTimeout: 5400
+ snapBuildTimeout: 19800
stages:
- template: templates/stages/test-and-package-stage.yml
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md
index 2fcb78c99c2..f3bd9f56507 100644
--- a/certbot/CHANGELOG.md
+++ b/certbot/CHANGELOG.md
@@ -2,6 +2,16 @@
Certbot adheres to [Semantic Versioning](https://semver.org/).
+## 1.32.1 - master
+
+### Fixed
+
+* Our snaps and docker images were rebuilt to include updated versions of our dependencies.
+
+This release was not pushed to PyPI since those packages were unaffected.
+
+More details about these changes can be found on our GitHub repo.
+
## 1.32.0 - 2022-11-08
### Added
| I wanted to do this because we were notified that https://ubuntu.com/security/notices/USN-5638-3/ affects our snaps. This probably doesn't affect us, but rebuilding to be safe seems worth it to me personally.
I started to just trigger a new v1.32.0 release build, but I don't want to overwrite our 2.0 Docker images under the `latest` tag.
Changelog changes here are similar to what has been done for past point releases like https://github.com/certbot/certbot/pull/8501.
I also cherry picked #9474 to this branch to help the release process pass.
| https://api.github.com/repos/certbot/certbot/pulls/9492 | 2022-12-02T15:00:30Z | 2022-12-05T15:00:44Z | 2022-12-05T15:00:44Z | 2022-12-05T15:00:45Z | 299 | certbot/certbot | 1,077 |
Add share.flows command, fix #2779 | diff --git a/mitmproxy/addons/__init__.py b/mitmproxy/addons/__init__.py
index 8f84c20d9f..619211130b 100644
--- a/mitmproxy/addons/__init__.py
+++ b/mitmproxy/addons/__init__.py
@@ -20,6 +20,7 @@
from mitmproxy.addons import streambodies
from mitmproxy.addons import save
from mitmproxy.addons import upstream_auth
+from mitmproxy.addons import share
def default_addons():
@@ -46,4 +47,5 @@ def default_addons():
streambodies.StreamBodies(),
save.Save(),
upstream_auth.UpstreamAuth(),
+ share.Share()
]
diff --git a/mitmproxy/addons/save.py b/mitmproxy/addons/save.py
index 44afef686e..47da29b256 100644
--- a/mitmproxy/addons/save.py
+++ b/mitmproxy/addons/save.py
@@ -61,8 +61,8 @@ def save(self, flows: typing.Sequence[flow.Flow], path: mitmproxy.types.Path) ->
except IOError as v:
raise exceptions.CommandError(v) from v
stream = io.FlowWriter(f)
- for i in flows:
- stream.add(i)
+ for x in flows:
+ stream.add(x)
f.close()
ctx.log.alert("Saved %s flows." % len(flows))
diff --git a/mitmproxy/addons/share.py b/mitmproxy/addons/share.py
new file mode 100644
index 0000000000..1a234cc944
--- /dev/null
+++ b/mitmproxy/addons/share.py
@@ -0,0 +1,79 @@
+import typing
+import random
+import string
+import io
+import http.client
+
+from mitmproxy import command
+import mitmproxy.io
+from mitmproxy import ctx
+from mitmproxy import flow
+from mitmproxy.net.http import status_codes
+
+
+class Share:
+ def encode_multipart_formdata(self, filename: str, content: bytes) -> typing.Tuple[str, bytes]:
+ params = {"key": filename, "acl": "bucket-owner-full-control", "Content-Type": "application/octet-stream"}
+ LIMIT = b'---------------------------198495659117975628761412556003'
+ CRLF = b'\r\n'
+ l = []
+ for (key, value) in params.items():
+ l.append(b'--' + LIMIT)
+ l.append(b'Content-Disposition: form-data; name="%b"' % key.encode("utf-8"))
+ l.append(b'')
+ l.append(value.encode("utf-8"))
+ l.append(b'--' + LIMIT)
+ l.append(b'Content-Disposition: form-data; name="file"; filename="%b"' % filename.encode("utf-8"))
+ l.append(b'Content-Type: application/octet-stream')
+ l.append(b'')
+ l.append(content)
+ l.append(b'--' + LIMIT + b'--')
+ l.append(b'')
+ body = CRLF.join(l)
+ content_type = 'multipart/form-data; boundary=%s' % LIMIT.decode("utf-8")
+ return content_type, body
+
+ def post_multipart(self, host: str, filename: str, content: bytes) -> str:
+ """
+ Upload flows to the specified S3 server.
+
+ Returns:
+ - The share URL, if upload is successful.
+ Raises:
+ - IOError, otherwise.
+ """
+ content_type, body = self.encode_multipart_formdata(filename, content)
+ conn = http.client.HTTPConnection(host) # FIXME: This ultimately needs to be HTTPSConnection
+ headers = {'content-type': content_type}
+ try:
+ conn.request("POST", "", body, headers)
+ resp = conn.getresponse()
+ except Exception as v:
+ raise IOError(v)
+ finally:
+ conn.close()
+ if resp.status != 204:
+ if resp.reason:
+ reason = resp.reason
+ else:
+ reason = status_codes.RESPONSES.get(resp.status, str(resp.status))
+ raise IOError(reason)
+ return "https://share.mitmproxy.org/%s" % filename
+
+ @command.command("share.flows")
+ def share(self, flows: typing.Sequence[flow.Flow]) -> None:
+ u_id = "".join(random.choice(string.ascii_lowercase + string.digits)for _ in range(7))
+ f = io.BytesIO()
+ stream = mitmproxy.io.FlowWriter(f)
+ for x in flows:
+ stream.add(x)
+ f.seek(0)
+ content = f.read()
+ try:
+ res = self.post_multipart('upload.share.mitmproxy.org.s3.amazonaws.com', u_id, content)
+ except IOError as v:
+ ctx.log.warn("%s" % v)
+ else:
+ ctx.log.alert("%s" % res)
+ finally:
+ f.close()
\ No newline at end of file
diff --git a/test/mitmproxy/addons/test_share.py b/test/mitmproxy/addons/test_share.py
new file mode 100644
index 0000000000..6c3d6e28fc
--- /dev/null
+++ b/test/mitmproxy/addons/test_share.py
@@ -0,0 +1,34 @@
+from unittest import mock
+import http.client
+
+from mitmproxy.test import taddons
+from mitmproxy.test import tflow
+
+from mitmproxy.addons import share
+from mitmproxy.addons import view
+
+
+def test_share_command():
+ with mock.patch('mitmproxy.addons.share.http.client.HTTPConnection') as mock_http:
+ sh = share.Share()
+ with taddons.context() as tctx:
+ mock_http.return_value.getresponse.return_value = mock.MagicMock(status=204, reason="No Content")
+ sh.share([tflow.tflow(resp=True)])
+ assert tctx.master.has_log("https://share.mitmproxy.org/")
+
+ mock_http.return_value.getresponse.return_value = mock.MagicMock(status=403, reason="Forbidden")
+ sh.share([tflow.tflow(resp=True)])
+ assert tctx.master.has_log("Forbidden")
+
+ mock_http.return_value.getresponse.return_value = mock.MagicMock(status=404, reason="")
+ sh.share([tflow.tflow(resp=True)])
+ assert tctx.master.has_log("Not Found")
+
+ mock_http.return_value.request.side_effect = http.client.CannotSendRequest("Error in sending req")
+ sh.share([tflow.tflow(resp=True)])
+ assert tctx.master.has_log("Error in sending req")
+
+ v = view.View()
+ tctx.master.addons.add(v)
+ tctx.master.addons.add(sh)
+ tctx.master.commands.call_args("share.flows", ["@shown"])
| Fixes #2779 . Added the command 'upload.file' | https://api.github.com/repos/mitmproxy/mitmproxy/pulls/2802 | 2018-01-18T20:46:01Z | 2018-02-13T19:01:01Z | 2018-02-13T19:01:00Z | 2018-06-15T16:54:24Z | 1,559 | mitmproxy/mitmproxy | 28,324 |
Update README.md | diff --git a/exercises/ansible/README.md b/exercises/ansible/README.md
index 12982c3fd..206c4d84e 100644
--- a/exercises/ansible/README.md
+++ b/exercises/ansible/README.md
@@ -15,6 +15,7 @@
<summary>Describe each of the following components in Ansible, including the relationship between them:
* Task
+ * Inventory
* Module
* Play
* Playbook
@@ -23,6 +24,8 @@
Task – a call to a specific Ansible module
Module – the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and also referred to as task plugins.
+
+Inventory – An inventory file defines hosts and/or groups of hosts on which Ansible tasks executed upon. The inventory file can be in one of many formats, depending on the inventory plugins you have. The most common formats are INI and YAML.
Play – One or more tasks executed on a given host(s)
| Hey repo owner!
I have added a new feature on Ansible topic (inventory component), I think that will increase the understand of all people on this topic about Ansible tool. if i am wrong just ignore
Let me know if you have any questions around this.
| https://api.github.com/repos/bregman-arie/devops-exercises/pulls/194 | 2021-11-29T00:17:04Z | 2021-11-29T06:11:48Z | 2021-11-29T06:11:48Z | 2021-11-29T06:11:48Z | 242 | bregman-arie/devops-exercises | 17,400 |
Better handling of CookieJar Runtime Exception | diff --git a/lib/request/connect.py b/lib/request/connect.py
index 8508dee51ff..84ec25e4d38 100644
--- a/lib/request/connect.py
+++ b/lib/request/connect.py
@@ -587,14 +587,9 @@ class _(dict):
if not getRequestHeader(req, HTTP_HEADER.COOKIE) and conf.cj:
conf.cj._policy._now = conf.cj._now = int(time.time())
- while True:
- try:
- cookies = conf.cj._cookies_for_request(req)
- except RuntimeError: # NOTE: https://github.com/sqlmapproject/sqlmap/issues/5187
- time.sleep(1)
- else:
- requestHeaders += "\r\n%s" % ("Cookie: %s" % ";".join("%s=%s" % (getUnicode(cookie.name), getUnicode(cookie.value)) for cookie in cookies))
- break
+ with conf.cj._cookies_lock:
+ cookies = conf.cj._cookies_for_request(req)
+ requestHeaders += "\r\n%s" % ("Cookie: %s" % ";".join("%s=%s" % (getUnicode(cookie.name), getUnicode(cookie.value)) for cookie in cookies))
if post is not None:
if not getRequestHeader(req, HTTP_HEADER.CONTENT_LENGTH) and not chunked:
| Instead of waiting for the exception and sleeping in between, we can simply use the same lock used by the original cookiejar code (since we are already accessing private member), to prevent the exception from happening in the first place.
Fixes #5187 | https://api.github.com/repos/sqlmapproject/sqlmap/pulls/5206 | 2022-10-21T08:56:40Z | 2022-10-21T17:10:43Z | 2022-10-21T17:10:43Z | 2022-10-21T17:10:52Z | 294 | sqlmapproject/sqlmap | 15,062 |
W&B log epoch | diff --git a/test.py b/test.py
index 891f6bef41c..db344e72204 100644
--- a/test.py
+++ b/test.py
@@ -239,8 +239,8 @@ def test(data,
if plots:
confusion_matrix.plot(save_dir=save_dir, names=list(names.values()))
if wandb and wandb.run:
- wandb.log({"Images": wandb_images})
- wandb.log({"Validation": [wandb.Image(str(f), caption=f.name) for f in sorted(save_dir.glob('test*.jpg'))]})
+ val_batches = [wandb.Image(str(f), caption=f.name) for f in sorted(save_dir.glob('test*.jpg'))]
+ wandb.log({"Images": wandb_images, "Validation": val_batches}, commit=False)
# Save JSON
if save_json and len(jdict):
diff --git a/train.py b/train.py
index 9e6bd867372..5eff4bbac17 100644
--- a/train.py
+++ b/train.py
@@ -321,7 +321,7 @@ def train(hyp, opt, device, tb_writer=None, wandb=None):
# tb_writer.add_graph(model, imgs) # add model to tensorboard
elif plots and ni == 10 and wandb:
wandb.log({"Mosaics": [wandb.Image(str(x), caption=x.name) for x in save_dir.glob('train*.jpg')
- if x.exists()]})
+ if x.exists()]}, commit=False)
# end batch ------------------------------------------------------------------------------------------------
# end epoch ----------------------------------------------------------------------------------------------------
diff --git a/utils/plots.py b/utils/plots.py
index 4765069e037..67f11bfd201 100644
--- a/utils/plots.py
+++ b/utils/plots.py
@@ -295,7 +295,7 @@ def plot_labels(labels, save_dir=Path(''), loggers=None):
# loggers
for k, v in loggers.items() or {}:
if k == 'wandb' and v:
- v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]})
+ v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False)
def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution()
| May allow W&B results logging with epoch x axis.
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Improvement in Weight & Biases logging within YOLOv5's testing and training routines.
### 📊 Key Changes
- Modified `wandb.log` calls in `test.py` and `train.py` to use `commit=False` option.
- Updated `wandb.log` in `utils/plots.py` with `commit=False`.
### 🎯 Purpose & Impact
- 👍 **Purpose**: These changes allow for more efficient logging to Weight & Biases by grouping multiple logs into a single network call.
- 🚀 **Impact**: Reduces the number of calls to Weights & Biases API during training and testing, potentially speeding up these processes and improving resource usage. Users of the YOLOv5 framework with integrated Weights & Biases logging will benefit from these optimizations. | https://api.github.com/repos/ultralytics/yolov5/pulls/1946 | 2021-01-15T04:44:40Z | 2021-01-27T05:16:02Z | 2021-01-27T05:16:02Z | 2024-01-19T19:49:02Z | 538 | ultralytics/yolov5 | 24,979 |
Fix super tiny type error | diff --git a/timm/scheduler/cosine_lr.py b/timm/scheduler/cosine_lr.py
index e2c975fb79..4eaaa86a81 100644
--- a/timm/scheduler/cosine_lr.py
+++ b/timm/scheduler/cosine_lr.py
@@ -8,6 +8,7 @@
import math
import numpy as np
import torch
+from typing import List
from .scheduler import Scheduler
@@ -77,7 +78,7 @@ def __init__(
else:
self.warmup_steps = [1 for _ in self.base_values]
- def _get_lr(self, t):
+ def _get_lr(self, t: int) -> List[float]:
if t < self.warmup_t:
lrs = [self.warmup_lr_init + t * s for s in self.warmup_steps]
else:
diff --git a/timm/scheduler/multistep_lr.py b/timm/scheduler/multistep_lr.py
index 10f2fb5044..e5db556d43 100644
--- a/timm/scheduler/multistep_lr.py
+++ b/timm/scheduler/multistep_lr.py
@@ -53,7 +53,7 @@ def get_curr_decay_steps(self, t):
# assumes self.decay_t is sorted
return bisect.bisect_right(self.decay_t, t + 1)
- def _get_lr(self, t):
+ def _get_lr(self, t: int) -> List[float]:
if t < self.warmup_t:
lrs = [self.warmup_lr_init + t * s for s in self.warmup_steps]
else:
diff --git a/timm/scheduler/plateau_lr.py b/timm/scheduler/plateau_lr.py
index 9f8271579b..e868bd5e58 100644
--- a/timm/scheduler/plateau_lr.py
+++ b/timm/scheduler/plateau_lr.py
@@ -5,6 +5,7 @@
Hacked together by / Copyright 2020 Ross Wightman
"""
import torch
+from typing import List
from .scheduler import Scheduler
@@ -106,5 +107,5 @@ def _apply_noise(self, epoch):
param_group['lr'] = new_lr
self.restore_lr = restore_lr
- def _get_lr(self, t: int) -> float:
+ def _get_lr(self, t: int) -> List[float]:
assert False, 'should not be called as step is overridden'
diff --git a/timm/scheduler/poly_lr.py b/timm/scheduler/poly_lr.py
index 906f6acf82..8875e15bfe 100644
--- a/timm/scheduler/poly_lr.py
+++ b/timm/scheduler/poly_lr.py
@@ -6,6 +6,7 @@
"""
import math
import logging
+from typing import List
import torch
@@ -73,7 +74,7 @@ def __init__(
else:
self.warmup_steps = [1 for _ in self.base_values]
- def _get_lr(self, t):
+ def _get_lr(self, t: int) -> List[float]:
if t < self.warmup_t:
lrs = [self.warmup_lr_init + t * s for s in self.warmup_steps]
else:
diff --git a/timm/scheduler/scheduler.py b/timm/scheduler/scheduler.py
index 4ae2e2aeb6..583357f7c5 100644
--- a/timm/scheduler/scheduler.py
+++ b/timm/scheduler/scheduler.py
@@ -1,6 +1,6 @@
import abc
from abc import ABC
-from typing import Any, Dict, Optional
+from typing import Any, Dict, List, Optional
import torch
@@ -65,10 +65,10 @@ def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
self.__dict__.update(state_dict)
@abc.abstractmethod
- def _get_lr(self, t: int) -> float:
+ def _get_lr(self, t: int) -> List[float]:
pass
- def _get_values(self, t: int, on_epoch: bool = True) -> Optional[float]:
+ def _get_values(self, t: int, on_epoch: bool = True) -> Optional[List[float]]:
proceed = (on_epoch and self.t_in_epochs) or (not on_epoch and not self.t_in_epochs)
if not proceed:
return None
diff --git a/timm/scheduler/step_lr.py b/timm/scheduler/step_lr.py
index 70a45a70d4..c205d43715 100644
--- a/timm/scheduler/step_lr.py
+++ b/timm/scheduler/step_lr.py
@@ -6,6 +6,8 @@
"""
import math
import torch
+from typing import List
+
from .scheduler import Scheduler
@@ -51,7 +53,7 @@ def __init__(
else:
self.warmup_steps = [1 for _ in self.base_values]
- def _get_lr(self, t):
+ def _get_lr(self, t: int) -> List[float]:
if t < self.warmup_t:
lrs = [self.warmup_lr_init + t * s for s in self.warmup_steps]
else:
diff --git a/timm/scheduler/tanh_lr.py b/timm/scheduler/tanh_lr.py
index 48acc61b03..94455302c6 100644
--- a/timm/scheduler/tanh_lr.py
+++ b/timm/scheduler/tanh_lr.py
@@ -8,6 +8,7 @@
import math
import numpy as np
import torch
+from typing import List
from .scheduler import Scheduler
@@ -75,7 +76,7 @@ def __init__(
else:
self.warmup_steps = [1 for _ in self.base_values]
- def _get_lr(self, t):
+ def _get_lr(self, t: int) -> List[float]:
if t < self.warmup_t:
lrs = [self.warmup_lr_init + t * s for s in self.warmup_steps]
else:
| IMHO, for example, CosineLRScheduler returns list of floats, instead of a single float. Therefore, the type signature may need to be updated. Please correct me if I am wrong! | https://api.github.com/repos/huggingface/pytorch-image-models/pulls/2124 | 2024-03-23T03:27:44Z | 2024-04-02T21:31:38Z | 2024-04-02T21:31:38Z | 2024-04-03T00:39:19Z | 1,394 | huggingface/pytorch-image-models | 16,230 |
gh-72073: Add Windows case in pathlib.rename | diff --git a/Doc/library/pathlib.rst b/Doc/library/pathlib.rst
index d45e7aa84b28ce..cda2cc83225e07 100644
--- a/Doc/library/pathlib.rst
+++ b/Doc/library/pathlib.rst
@@ -1021,8 +1021,9 @@ call fails (for example because the path doesn't exist).
Rename this file or directory to the given *target*, and return a new Path
instance pointing to *target*. On Unix, if *target* exists and is a file,
- it will be replaced silently if the user has permission. *target* can be
- either a string or another path object::
+ it will be replaced silently if the user has permission.
+ On Windows, if *target* exists, :data:`FileExistsError` will be raised.
+ *target* can be either a string or another path object::
>>> p = Path('foo')
>>> p.open('w').write('some text')
| #72073
https://docs.python.org/3.12/library/pathlib.html#pathlib.Path.rename
Automerge-Triggered-By: GH:brettcannon | https://api.github.com/repos/python/cpython/pulls/93002 | 2022-05-20T08:05:26Z | 2022-05-20T22:25:40Z | 2022-05-20T22:25:39Z | 2022-05-21T04:31:52Z | 229 | python/cpython | 4,622 |
Use long classes names for enabled middlewares in startup logs | diff --git a/scrapy/middleware.py b/scrapy/middleware.py
index 6120488e22f..be36f977e41 100644
--- a/scrapy/middleware.py
+++ b/scrapy/middleware.py
@@ -28,6 +28,7 @@ def _get_mwlist_from_settings(cls, settings):
def from_settings(cls, settings, crawler=None):
mwlist = cls._get_mwlist_from_settings(settings)
middlewares = []
+ enabled = []
for clspath in mwlist:
try:
mwcls = load_object(clspath)
@@ -38,6 +39,7 @@ def from_settings(cls, settings, crawler=None):
else:
mw = mwcls()
middlewares.append(mw)
+ enabled.append(clspath)
except NotConfigured as e:
if e.args:
clsname = clspath.split('.')[-1]
@@ -45,7 +47,6 @@ def from_settings(cls, settings, crawler=None):
{'clsname': clsname, 'eargs': e.args[0]},
extra={'crawler': crawler})
- enabled = [x.__class__.__name__ for x in middlewares]
logger.info("Enabled %(componentname)ss:\n%(enabledlist)s",
{'componentname': cls.component_name,
'enabledlist': pprint.pformat(enabled)},
| Continuation of https://github.com/scrapy/scrapy/pull/1722
| https://api.github.com/repos/scrapy/scrapy/pulls/1726 | 2016-01-26T15:44:27Z | 2016-01-26T16:29:44Z | 2016-01-26T16:29:44Z | 2016-02-01T14:53:45Z | 294 | scrapy/scrapy | 35,115 |
dns-cloudflare: update URL for obtaining API keys | diff --git a/certbot-dns-cloudflare/certbot_dns_cloudflare/dns_cloudflare.py b/certbot-dns-cloudflare/certbot_dns_cloudflare/dns_cloudflare.py
index e3d0d42e047..0bbdf703ae8 100644
--- a/certbot-dns-cloudflare/certbot_dns_cloudflare/dns_cloudflare.py
+++ b/certbot-dns-cloudflare/certbot_dns_cloudflare/dns_cloudflare.py
@@ -10,7 +10,7 @@
logger = logging.getLogger(__name__)
-ACCOUNT_URL = 'https://www.cloudflare.com/a/account/my-account'
+ACCOUNT_URL = 'https://dash.cloudflare.com/profile/api-tokens'
@zope.interface.implementer(interfaces.IAuthenticator)
| Updated the ACCOUNT_URL in the Cloudflare-DNS plugin.
This uses the new "dash.cloudflare.com" scheme and future-proofs this URL for an upcoming change to Cloudflare API keys (this is not public yet, so no other changes related to this).
| https://api.github.com/repos/certbot/certbot/pulls/7052 | 2019-05-11T22:00:21Z | 2019-06-26T00:53:32Z | 2019-06-26T00:53:32Z | 2019-06-26T14:11:20Z | 168 | certbot/certbot | 2,376 |
cookies.txt extension doesn't exist anymore on the Chrome Web Store | diff --git a/README.md b/README.md
index 34c6c677d8f..35ae364213e 100644
--- a/README.md
+++ b/README.md
@@ -879,7 +879,7 @@ Either prepend `https://www.youtube.com/watch?v=` or separate the ID from the op
Use the `--cookies` option, for example `--cookies /path/to/cookies/file.txt`.
-In order to extract cookies from browser use any conforming browser extension for exporting cookies. For example, [cookies.txt](https://chrome.google.com/webstore/detail/cookiestxt/njabckikapfpffapmjgojcnbfjonfjfg) (for Chrome) or [cookies.txt](https://addons.mozilla.org/en-US/firefox/addon/cookies-txt/) (for Firefox).
+In order to extract cookies from browser use any conforming browser extension for exporting cookies. For example, [Get cookies.txt](https://chrome.google.com/webstore/detail/get-cookiestxt/bgaddhkoddajcdgocldbbfleckgcbcid/) (for Chrome) or [cookies.txt](https://addons.mozilla.org/en-US/firefox/addon/cookies-txt/) (for Firefox).
Note that the cookies file must be in Mozilla/Netscape format and the first line of the cookies file must be either `# HTTP Cookie File` or `# Netscape HTTP Cookie File`. Make sure you have correct [newline format](https://en.wikipedia.org/wiki/Newline) in the cookies file and convert newlines if necessary to correspond with your OS, namely `CRLF` (`\r\n`) for Windows and `LF` (`\n`) for Unix and Unix-like systems (Linux, macOS, etc.). `HTTP Error 400: Bad Request` when using `--cookies` is a good sign of invalid newline format.
| ### Before submitting a *pull request* make sure you have:
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Read [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site)
- [x] Read [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) and adjusted the code to meet them
- [x] Covered the code with tests (note that PRs without tests will be REJECTED)
- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
The [cookies.txt extension](https://chrome.google.com/webstore/detail/cookiestxt/njabckikapfpffapmjgojcnbfjonfjfg) doesn't exist anymore on the Chrome Web Store, so I propose to change the link in the _README.md_ to another similar extension called [Get cookies.txt](https://chrome.google.com/webstore/detail/get-cookiestxt/bgaddhkoddajcdgocldbbfleckgcbcid/) with the same functions and utility of the old one. I tested the extension personally on my machine and it exports cookies in the Netscape cookie file format which is compatible with youtube-dl, just like the cookies.txt extension did before.
This commit does **NOT change any code**, is just a fix for a link on the _README.md_
This PR close #26885
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/27433 | 2020-12-14T14:15:10Z | 2020-12-26T13:50:40Z | 2020-12-26T13:50:40Z | 2020-12-26T13:52:26Z | 393 | ytdl-org/youtube-dl | 50,343 |
Add --new-key | diff --git a/certbot-ci/certbot_integration_tests/certbot_tests/assertions.py b/certbot-ci/certbot_integration_tests/certbot_tests/assertions.py
index 2720842171e..3650f64f043 100644
--- a/certbot-ci/certbot_integration_tests/certbot_tests/assertions.py
+++ b/certbot-ci/certbot_integration_tests/certbot_tests/assertions.py
@@ -37,16 +37,19 @@ def assert_elliptic_key(key: str, curve: Type[EllipticCurve]) -> None:
assert isinstance(key.curve, curve)
-def assert_rsa_key(key: str) -> None:
+def assert_rsa_key(key: str, key_size: Optional[int] = None) -> None:
"""
Asserts that the key at the given path is an RSA key.
:param str key: path to key
+ :param int key_size: if provided, assert that the RSA key is of this size
"""
with open(key, 'rb') as file:
privkey1 = file.read()
key = load_pem_private_key(data=privkey1, password=None, backend=default_backend())
assert isinstance(key, RSAPrivateKey)
+ if key_size:
+ assert key_size == key.key_size
def assert_hook_execution(probe_path: str, probe_content: str) -> None:
diff --git a/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py b/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
index 4a33952174f..2827ae939e2 100644
--- a/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
+++ b/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
@@ -8,6 +8,7 @@
import time
from typing import Iterable
from typing import Generator
+from typing import Tuple
from typing import Type
from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurve
@@ -463,6 +464,42 @@ def test_reuse_key(context: IntegrationTestsContext) -> None:
assert len({cert1, cert2, cert3}) == 3
+def test_new_key(context: IntegrationTestsContext) -> None:
+ """Tests --new-key and its interactions with --reuse-key"""
+ def private_key(generation: int) -> Tuple[str, str]:
+ pk_path = join(context.config_dir, f'archive/{certname}/privkey{generation}.pem')
+ with open(pk_path, 'r') as file:
+ return file.read(), pk_path
+
+ certname = context.get_domain('newkey')
+
+ context.certbot(['--domains', certname, '--reuse-key',
+ '--key-type', 'rsa', '--rsa-key-size', '4096'])
+ privkey1, _ = private_key(1)
+
+ # renew: --new-key should replace the key, but keep reuse_key and the key type + params
+ context.certbot(['renew', '--cert-name', certname, '--new-key'])
+ privkey2, privkey2_path = private_key(2)
+ assert privkey1 != privkey2
+ assert_saved_lineage_option(context.config_dir, certname, 'reuse_key', 'True')
+ assert_rsa_key(privkey2_path, 4096)
+
+ # certonly: it should replace the key but the key size will change
+ context.certbot(['certonly', '-d', certname, '--reuse-key', '--new-key'])
+ privkey3, privkey3_path = private_key(3)
+ assert privkey2 != privkey3
+ assert_saved_lineage_option(context.config_dir, certname, 'reuse_key', 'True')
+ assert_rsa_key(privkey3_path, 2048)
+
+ # certonly: it should be possible to change the key type and keep reuse_key
+ context.certbot(['certonly', '-d', certname, '--reuse-key', '--new-key', '--key-type', 'ecdsa',
+ '--cert-name', certname])
+ privkey4, privkey4_path = private_key(4)
+ assert privkey3 != privkey4
+ assert_saved_lineage_option(context.config_dir, certname, 'reuse_key', 'True')
+ assert_elliptic_key(privkey4_path, SECP256R1)
+
+
def test_incorrect_key_type(context: IntegrationTestsContext) -> None:
with pytest.raises(subprocess.CalledProcessError):
context.certbot(['--key-type="failwhale"'])
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md
index 3dd1a9e2651..9814e67eafe 100644
--- a/certbot/CHANGELOG.md
+++ b/certbot/CHANGELOG.md
@@ -6,7 +6,10 @@ Certbot adheres to [Semantic Versioning](https://semver.org/).
### Added
-*
+* Added `--new-key`. When renewing or replacing a certificate that has `--reuse-key`
+ set, it will force a new private key to be generated.
+ Combining `--reuse-key` and `--new-key` will replace the certificate's private key
+ and then reuse it for future renewals.
### Changed
diff --git a/certbot/certbot/_internal/cli/__init__.py b/certbot/certbot/_internal/cli/__init__.py
index d11a454b146..30b3fab1d35 100644
--- a/certbot/certbot/_internal/cli/__init__.py
+++ b/certbot/certbot/_internal/cli/__init__.py
@@ -223,6 +223,13 @@ def prepare_and_parse_args(plugins: plugins_disco.PluginsRegistry, args: List[st
"certificate. Not reusing private keys is the default behavior of "
"Certbot. This option may be used to unset --reuse-key on an "
"existing certificate.")
+ helpful.add(
+ "automation", "--new-key",
+ dest="new_key", action="store_true", default=flag_default("new_key"),
+ help="When renewing or replacing a certificate, generate a new private key, "
+ "even if --reuse-key is set on the existing certificate. Combining "
+ "--new-key and --reuse-key will result in the private key being replaced and "
+ "then reused in future renewals.")
helpful.add(
["automation", "renew", "certonly"],
diff --git a/certbot/certbot/_internal/constants.py b/certbot/certbot/_internal/constants.py
index 3867d777c1e..5a9d97d835d 100644
--- a/certbot/certbot/_internal/constants.py
+++ b/certbot/certbot/_internal/constants.py
@@ -74,6 +74,7 @@
validate_hooks=True,
directory_hooks=True,
reuse_key=False,
+ new_key=False,
disable_renew_updates=False,
random_sleep_on_renew=True,
eab_hmac_key=None,
diff --git a/certbot/certbot/_internal/renewal.py b/certbot/certbot/_internal/renewal.py
index 4fb2ca00aa5..0ba2e810802 100644
--- a/certbot/certbot/_internal/renewal.py
+++ b/certbot/certbot/_internal/renewal.py
@@ -336,7 +336,7 @@ def renew_cert(config: configuration.NamespaceConfig, domains: Optional[List[str
domains = lineage.names()
# The private key is the existing lineage private key if reuse_key is set.
# Otherwise, generate a fresh private key by passing None.
- if config.reuse_key:
+ if config.reuse_key and not config.new_key:
new_key = os.path.normpath(lineage.privkey)
_update_renewal_params_from_key(new_key, config)
else:
diff --git a/certbot/certbot/configuration.py b/certbot/certbot/configuration.py
index ebeb8e98c65..d5ad8759957 100644
--- a/certbot/certbot/configuration.py
+++ b/certbot/certbot/configuration.py
@@ -300,6 +300,13 @@ def issuance_timeout(self) -> int:
"""
return self.namespace.issuance_timeout
+ @property
+ def new_key(self) -> bool:
+ """This option specifies whether Certbot should generate a new private
+ key when replacing a certificate, even if reuse_key is set.
+ """
+ return self.namespace.new_key
+
# Magic methods
def __deepcopy__(self, _memo: Any) -> 'NamespaceConfig':
diff --git a/certbot/tests/main_test.py b/certbot/tests/main_test.py
index c29f4d758e1..09a069c6121 100644
--- a/certbot/tests/main_test.py
+++ b/certbot/tests/main_test.py
@@ -1115,7 +1115,7 @@ def test_certonly_new_request_failure(self, mock_subscription):
def _test_renewal_common(self, due_for_renewal, extra_args, log_out=None,
args=None, should_renew=True, error_expected=False,
quiet_mode=False, expiry_date=datetime.datetime.now(),
- reuse_key=False):
+ reuse_key=False, new_key=False):
cert_path = test_util.vector_path('cert_512.pem')
chain_path = os.path.normpath(os.path.join(self.config.config_dir,
'live/foo.bar/fullchain.pem'))
@@ -1165,7 +1165,7 @@ def write_msg(message, *args, **kwargs): # pylint: disable=unused-argument
traceback.format_exc())
if should_renew:
- if reuse_key:
+ if reuse_key and not new_key:
# The location of the previous live privkey.pem is passed
# to obtain_certificate
mock_client.obtain_certificate.assert_called_once_with(['isnot.org'],
@@ -1236,6 +1236,13 @@ def test_reuse_key_no_dry_run(self, unused_save_successor):
args = ["renew", "--reuse-key"]
self._test_renewal_common(True, [], args=args, should_renew=True, reuse_key=True)
+ @mock.patch('certbot._internal.storage.RenewableCert.save_successor')
+ def test_new_key(self, unused_save_successor):
+ test_util.make_lineage(self.config.config_dir, 'sample-renewal.conf')
+ args = ["renew", "--reuse-key", "--new-key"]
+ self._test_renewal_common(True, [], args=args, should_renew=True, reuse_key=True,
+ new_key=True)
+
@mock.patch('sys.stdin')
def test_noninteractive_renewal_delay(self, stdin):
stdin.isatty.return_value = False
diff --git a/certbot/tests/renewal_test.py b/certbot/tests/renewal_test.py
index 110c0d7bd91..d6e2866dc56 100644
--- a/certbot/tests/renewal_test.py
+++ b/certbot/tests/renewal_test.py
@@ -99,6 +99,32 @@ def test_reuse_ec_key_renewal_params(self):
assert self.config.elliptic_curve == 'secp256r1'
+ def test_new_key(self):
+ # When renewing with both reuse_key and new_key, the key should be regenerated,
+ # the key type, key parameters and reuse_key should be kept.
+ self.config.reuse_key = True
+ self.config.new_key = True
+ self.config.dry_run = True
+ config = configuration.NamespaceConfig(self.config)
+
+ rc_path = test_util.make_lineage(
+ self.config.config_dir, 'sample-renewal.conf')
+ lineage = storage.RenewableCert(rc_path, config)
+
+ le_client = mock.MagicMock()
+ le_client.obtain_certificate.return_value = (None, None, None, None)
+
+ from certbot._internal import renewal
+
+ with mock.patch('certbot._internal.renewal.hooks.renew_hook'):
+ renewal.renew_cert(self.config, None, le_client, lineage)
+
+ self.assertEqual(self.config.rsa_key_size, 2048)
+ self.assertEqual(self.config.key_type, 'rsa')
+ self.assertTrue(self.config.reuse_key)
+ # None is passed as the existing key, i.e. the key is not actually being reused.
+ le_client.obtain_certificate.assert_called_with(mock.ANY, None)
+
@test_util.patch_display_util()
@mock.patch('certbot._internal.renewal.cli.set_by_cli')
def test_remove_deprecated_config_elements(self, mock_set_by_cli, unused_mock_get_utility):
| https://api.github.com/repos/certbot/certbot/pulls/9252 | 2022-03-29T21:50:13Z | 2022-03-31T18:40:22Z | 2022-03-31T18:40:22Z | 2022-03-31T18:40:22Z | 2,862 | certbot/certbot | 116 |
|
added possibility to freeze layers during training | diff --git a/examples/mnist_transfer_cnn.py b/examples/mnist_transfer_cnn.py
new file mode 100644
index 00000000000..579df76d08f
--- /dev/null
+++ b/examples/mnist_transfer_cnn.py
@@ -0,0 +1,114 @@
+from __future__ import absolute_import
+from __future__ import print_function
+import numpy as np
+import datetime
+
+np.random.seed(1337) # for reproducibility
+
+from keras.datasets import mnist
+from keras.models import Sequential
+from keras.layers.core import Dense, Dropout, Activation, Flatten
+from keras.layers.convolutional import Convolution2D, MaxPooling2D
+from keras.utils import np_utils
+
+'''
+ Transfer learning toy example:
+ 1- Train a simple convnet on the MNIST dataset the first 5 digits [0..4].
+ 2- Freeze convolutional layers and fine-tune dense layers for the classification of digits [5..9].
+
+ Run on GPU: THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_cnn.py
+
+ Get to 99.8% test accuracy after 5 epochs for the first five digits classifier
+ and 99.2% for the last five digits after transfer + fine-tuning.
+'''
+
+now = datetime.datetime.now
+
+batch_size = 128
+nb_classes = 5
+nb_epoch = 5
+
+# input image dimensions
+img_rows, img_cols = 28, 28
+# number of convolutional filters to use
+nb_filters = 32
+# size of pooling area for max pooling
+nb_pool = 2
+# convolution kernel size
+nb_conv = 3
+
+
+def train_model(model, train, test, nb_classes):
+ X_train = train[0].reshape(train[0].shape[0], 1, img_rows, img_cols)
+ X_test = test[0].reshape(test[0].shape[0], 1, img_rows, img_cols)
+ X_train = X_train.astype("float32")
+ X_test = X_test.astype("float32")
+ X_train /= 255
+ X_test /= 255
+ print('X_train shape:', X_train.shape)
+ print(X_train.shape[0], 'train samples')
+ print(X_test.shape[0], 'test samples')
+
+ # convert class vectors to binary class matrices
+ Y_train = np_utils.to_categorical(train[1], nb_classes)
+ Y_test = np_utils.to_categorical(test[1], nb_classes)
+
+ model.compile(loss='categorical_crossentropy', optimizer='adadelta')
+
+ t = now()
+ model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, show_accuracy=True, verbose=1,
+ validation_data=(X_test, Y_test))
+ print('Training time: %s' % (now() - t))
+ score = model.evaluate(X_test, Y_test, show_accuracy=True, verbose=0)
+ print('Test score:', score[0])
+ print('Test accuracy:', score[1])
+
+
+# the data, shuffled and split between train and test sets
+(X_train, y_train), (X_test, y_test) = mnist.load_data()
+
+# create two datasets one with digits below 5 and one with 5 and above
+X_train_lt5 = X_train[y_train < 5]
+y_train_lt5 = y_train[y_train < 5]
+X_test_lt5 = X_test[y_test < 5]
+y_test_lt5 = y_test[y_test < 5]
+
+X_train_gte5 = X_train[y_train >= 5]
+y_train_gte5 = y_train[y_train >= 5] - 5 # make classes start at 0 for
+X_test_gte5 = X_test[y_test >= 5] # np_utils.to_categorical
+y_test_gte5 = y_test[y_test >= 5] - 5
+
+# define two groups of layers: feature (convolutions) and classification (dense)
+feature_layers = [
+ Convolution2D(nb_filters, nb_conv, nb_conv,
+ border_mode='full',
+ input_shape=(1, img_rows, img_cols)),
+ Activation('relu'),
+ Convolution2D(nb_filters, nb_conv, nb_conv),
+ Activation('relu'),
+ MaxPooling2D(pool_size=(nb_pool, nb_pool)),
+ Dropout(0.25),
+ Flatten(),
+]
+classification_layers = [
+ Dense(128),
+ Activation('relu'),
+ Dropout(0.5),
+ Dense(nb_classes),
+ Activation('softmax')
+]
+
+# create complete model
+model = Sequential()
+for l in feature_layers + classification_layers:
+ model.add(l)
+
+# train model for 5-digit classification [0..4]
+train_model(model, (X_train_lt5, y_train_lt5), (X_test_lt5, y_test_lt5), nb_classes)
+
+# freeze feature layers and rebuild model
+for l in feature_layers:
+ l.trainable = False
+
+# transfer: train dense layers for new classification task [5..9]
+train_model(model, (X_train_gte5, y_train_gte5), (X_test_gte5, y_test_gte5), nb_classes)
diff --git a/keras/layers/containers.py b/keras/layers/containers.py
index 8c4ea8604c3..422359a9037 100644
--- a/keras/layers/containers.py
+++ b/keras/layers/containers.py
@@ -2,6 +2,7 @@
from __future__ import absolute_import
from __future__ import print_function
+from collections import OrderedDict
import theano.tensor as T
from ..layers.core import Layer, Merge
from ..utils.theano_utils import ndim_tensor
@@ -20,11 +21,6 @@ class Sequential(Layer):
def __init__(self, layers=[]):
self.layers = []
- self.params = []
- self.regularizers = []
- self.constraints = []
- self.updates = []
-
for layer in layers:
self.add(layer)
@@ -38,11 +34,37 @@ def add(self, layer):
if not hasattr(self.layers[0], 'input'):
self.set_input()
- params, regularizers, constraints, updates = layer.get_params()
- self.params += params
- self.regularizers += regularizers
- self.constraints += constraints
- self.updates += updates
+ @property
+ def params(self):
+ params = []
+ for l in self.layers:
+ if l.trainable:
+ params += l.get_params()[0]
+ return params
+
+ @property
+ def regularizers(self):
+ regularizers = []
+ for l in self.layers:
+ if l.trainable:
+ regularizers += l.get_params()[1]
+ return regularizers
+
+ @property
+ def constraints(self):
+ constraints = []
+ for l in self.layers:
+ if l.trainable:
+ constraints += l.get_params()[2]
+ return constraints
+
+ @property
+ def updates(self):
+ updates = []
+ for l in self.layers:
+ if l.trainable:
+ updates += l.get_params()[3]
+ return updates
@property
def output_shape(self):
@@ -97,7 +119,6 @@ class Graph(Layer):
when it has exactly one input and one output.
inherited from Layer:
- - get_params
- get_output_mask
- supports_masked_input
- get_weights
@@ -105,7 +126,7 @@ class Graph(Layer):
'''
def __init__(self):
self.namespace = set() # strings
- self.nodes = {} # layer-like
+ self.nodes = OrderedDict() # layer-like
self.inputs = {} # layer-like
self.input_order = [] # strings
self.outputs = {} # layer-like
@@ -114,11 +135,6 @@ def __init__(self):
self.output_config = [] # dicts
self.node_config = [] # dicts
- self.params = []
- self.regularizers = []
- self.constraints = []
- self.updates = []
-
@property
def nb_input(self):
return len(self.inputs)
@@ -127,6 +143,38 @@ def nb_input(self):
def nb_output(self):
return len(self.outputs)
+ @property
+ def params(self):
+ params = []
+ for l in self.nodes.values():
+ if l.trainable:
+ params += l.get_params()[0]
+ return params
+
+ @property
+ def regularizers(self):
+ regularizers = []
+ for l in self.nodes.values():
+ if l.trainable:
+ regularizers += l.get_params()[1]
+ return regularizers
+
+ @property
+ def constraints(self):
+ constraints = []
+ for l in self.nodes.values():
+ if l.trainable:
+ constraints += l.get_params()[2]
+ return constraints
+
+ @property
+ def updates(self):
+ updates = []
+ for l in self.nodes.values():
+ if l.trainable:
+ updates += l.get_params()[3]
+ return updates
+
def set_previous(self, layer, connection_map={}):
if self.nb_input != layer.nb_output:
raise Exception('Cannot connect layers: input count does not match output count.')
@@ -220,11 +268,6 @@ def add_node(self, layer, name, input=None, inputs=[],
'merge_mode': merge_mode,
'concat_axis': concat_axis,
'create_output': create_output})
- params, regularizers, constraints, updates = layer.get_params()
- self.params += params
- self.regularizers += regularizers
- self.constraints += constraints
- self.updates += updates
if create_output:
self.add_output(name, input=name)
diff --git a/keras/layers/core.py b/keras/layers/core.py
index 5c216b0d91c..f21a3cbefae 100644
--- a/keras/layers/core.py
+++ b/keras/layers/core.py
@@ -20,9 +20,11 @@
class Layer(object):
def __init__(self, **kwargs):
for kwarg in kwargs:
- assert kwarg in {'input_shape'}, "Keyword argument not understood: " + kwarg
+ assert kwarg in {'input_shape', 'trainable'}, "Keyword argument not understood: " + kwarg
if 'input_shape' in kwargs:
self.set_input_shape(kwargs['input_shape'])
+ if 'trainable' in kwargs:
+ self._trainable = kwargs['trainable']
if not hasattr(self, 'params'):
self.params = []
@@ -45,6 +47,17 @@ def build(self):
'''
pass
+ @property
+ def trainable(self):
+ if hasattr(self, '_trainable'):
+ return self._trainable
+ else:
+ return True
+
+ @trainable.setter
+ def trainable(self, value):
+ self._trainable = value
+
@property
def nb_input(self):
return 1
@@ -133,6 +146,8 @@ def get_config(self):
config = {"name": self.__class__.__name__}
if hasattr(self, '_input_shape'):
config['input_shape'] = self._input_shape[1:]
+ if hasattr(self, '_trainable'):
+ config['trainable'] = self._trainable
return config
def get_params(self):
| Following the discussion in #622, here is a simple strategy to "freeze" specific layers during training. IT works for `Sequential` and `Graph` models.
It adds a `trainable` attribute to the core `Layer` class and check its value when creating the model. If a layer's `trainable` flag is modified after addition to a model, the model needs to be rebuilt and recompiled. For this, a `rebuild` function has been added to `Sequential` and `Graph` classes.
| https://api.github.com/repos/keras-team/keras/pulls/839 | 2015-10-15T11:41:58Z | 2015-10-23T23:40:38Z | 2015-10-23T23:40:38Z | 2015-12-31T15:30:46Z | 2,642 | keras-team/keras | 47,887 |
exmo(fix): ArgumentsRequired import missing | diff --git a/ts/src/exmo.ts b/ts/src/exmo.ts
index 180b9a2a1232..6411a68950ef 100644
--- a/ts/src/exmo.ts
+++ b/ts/src/exmo.ts
@@ -2,7 +2,7 @@
// ---------------------------------------------------------------------------
import Exchange from './abstract/exmo.js';
-import { ExchangeError, OrderNotFound, AuthenticationError, InsufficientFunds, InvalidOrder, InvalidNonce, OnMaintenance, RateLimitExceeded, BadRequest, PermissionDenied } from './base/errors.js';
+import { ArgumentsRequired, ExchangeError, OrderNotFound, AuthenticationError, InsufficientFunds, InvalidOrder, InvalidNonce, OnMaintenance, RateLimitExceeded, BadRequest, PermissionDenied } from './base/errors.js';
import { Precise } from './base/Precise.js';
import { TICK_SIZE } from './base/functions/number.js';
import { sha512 } from './static_dependencies/noble-hashes/sha512.js';
| https://api.github.com/repos/ccxt/ccxt/pulls/19307 | 2023-09-17T00:45:16Z | 2023-09-17T07:16:27Z | 2023-09-17T07:16:27Z | 2023-09-17T07:16:28Z | 207 | ccxt/ccxt | 13,407 |
|
Safety pipeline prototype | diff --git a/safety/README.md b/safety/README.md
new file mode 100644
index 0000000000..2b6f23ff0f
--- /dev/null
+++ b/safety/README.md
@@ -0,0 +1,47 @@
+# Train & Evaluate Safety models
+
+This is the Open Assistant Safety Folder and contains the following:
+
+- Model training scripts
+- Model infrence scripts
+- Data processing scripts
+
+## Mission Statment
+
+Our mission at LAION-AI OpenAssistant safety team is to create a safety pipeline
+that is not only compatible with the OpenAssistant model and project but can
+also integrate with other systems outside of it. We are dedicated to making this
+pipeline modifiable and robust to accommodate the diverse preferences of our
+users.
+
+We understand that our users come from different backgrounds and use various
+types of hardware. Therefore, we strive to make our safety pipeline accessible
+and able to run on consumer hardware, so everyone can benefit from its
+protective features.
+
+Through our commitment to innovation and collaboration, we will continue to
+provide safety solutions that ensure the well-being of our users and the wider
+community.
+
+## Why create a safety pipeline?
+
+Open source and extendable safety pipelines unfortunately do not exist on the
+same on the same scale as those in ChatGPT and other commerical systems. To
+further research in implementable, accurate, and extendable safety pipelines,
+Open Assistant Safety Team will continue to push models and code to the public.
+Much research has been done in things like toxicity detection and bias
+mitigation in LLMs, however the implementation of such research in systems that
+use language models as conversational agents in production settings has largely
+gone undocumented. Furthermore, safety systems that interact with diverse
+communities of users must be able accommodate user prefrences. This is paramount
+in introducing LLM based systems all over the world. We hope that our work will
+generate more research in this field, and allow others to create safe LLM based
+systems.
+
+## Training
+
+- Set training configuration using `config.yaml`
+
+```python
+python model_training/t5_trainer.py
+```
diff --git a/safety/model_training/__init__.py b/safety/model_training/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/safety/model_training/config/config.yaml b/safety/model_training/config/config.yaml
new file mode 100644
index 0000000000..35cace8444
--- /dev/null
+++ b/safety/model_training/config/config.yaml
@@ -0,0 +1,18 @@
+defaults:
+ - trainer: default
+padding_side: "right"
+truncation_side: "right"
+model: "t5-base"
+epochs: 1
+batch_size: 8
+save_folder: "safetyfiles"
+max_length: 256
+special_tokens:
+ context_token: "<ctx>"
+ sep_token: "<sep>"
+ label_token: "<cls>"
+ rot_token: "<rot>"
+dataset:
+ name: "allenai/prosocial-dialog"
+ train: ["train", "validation"]
+ test: "test"
diff --git a/safety/model_training/config/trainer/default.yaml b/safety/model_training/config/trainer/default.yaml
new file mode 100644
index 0000000000..b13dc0b7e7
--- /dev/null
+++ b/safety/model_training/config/trainer/default.yaml
@@ -0,0 +1,4 @@
+_target_: transformers.TrainingArguments
+output_dir: "."
+per_device_train_batch_size: 5
+fp16: False
diff --git a/safety/model_training/custom_datasets/__init__.py b/safety/model_training/custom_datasets/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/safety/model_training/custom_datasets/rot_dataset.py b/safety/model_training/custom_datasets/rot_dataset.py
new file mode 100644
index 0000000000..171451a595
--- /dev/null
+++ b/safety/model_training/custom_datasets/rot_dataset.py
@@ -0,0 +1,103 @@
+from dataclasses import dataclass
+from typing import Dict, List
+
+import torch
+from datasets import concatenate_datasets
+from torch.utils.data import Dataset
+
+LABEL2ID = {
+ "__casual__": "__casual__",
+ "__needs_caution__": "__needs_caution__",
+ "__needs_intervention__": "__needs_intervention__",
+ "__probably_needs_caution__": "__probably_needs_caution__",
+ "__possibly_needs_caution__": "__possibly_needs_caution__",
+}
+
+
+class SafetyDataset(Dataset):
+
+ """
+ Dataset to train safety model with context and ROT from prosocial-dialog
+ input format : input<ctx>context</s>
+ output format : <cls>safety_label<rot>ROTs</s>
+
+ """
+
+ def __init__(self, dataset, split, tokenizer, max_len=512):
+ super().__init__()
+
+ if isinstance(split, List):
+ self.split = "-".join(split)
+ self.dataset = concatenate_datasets([dataset[sp] for sp in split])
+ else:
+ self.split = split
+ self.dataset = dataset[split]
+
+ self.max_len = max_len
+ self.tokenizer = tokenizer
+ self.label2id = LABEL2ID
+
+ def __len__(self):
+ return len(self.dataset)
+
+ def __getitem__(self, idx):
+ idx_start = idx
+ end = self.dataset[max(0, idx_start - 1)]["episode_done"]
+ while (not end) and (idx_start > 0):
+ end = self.dataset[max(0, idx_start - 2)]["episode_done"]
+ idx_start -= 1
+ idx_start = max(0, idx_start)
+ context = [
+ f'\nUser: {self.dataset[i]["context"]}\n bot:{self.dataset[i]["response"]}' for i in range(idx_start, idx)
+ ]
+ context = self.tokenizer.sep_token.join(context)
+ rots = self.dataset[idx]["rots"]
+ label = self.label2id[self.dataset[idx]["safety_label"]]
+ input_tokens = self.tokenizer.encode(self.dataset[idx]["context"], add_special_tokens=False)
+ max_len = self.max_len - (len(input_tokens) + 2)
+ context = self.tokenizer.encode(
+ context,
+ add_special_tokens=False,
+ max_length=max_len,
+ )
+ rots = self.tokenizer.sep_token.join(rots)
+ input_ids = input_tokens + [self.tokenizer.context_token_id] + context + [self.tokenizer.eos_token_id]
+ input_ids = input_ids + [self.tokenizer.pad_token_id] * max(0, (self.max_len - len(input_ids)))
+ mask = [1] * len(input_ids) + [self.tokenizer.pad_token_id] * (self.max_len - len(input_ids))
+ target_text = self.tokenizer.label_token + label + self.tokenizer.context_token + rots
+ decoder_ids = self.tokenizer(
+ target_text,
+ add_special_tokens=True,
+ max_length=self.max_len,
+ padding="max_length",
+ )
+
+ return {
+ "input_ids": torch.LongTensor(input_ids),
+ "attention_mask": torch.LongTensor(mask),
+ "decoder_input_ids": torch.LongTensor(decoder_ids["input_ids"]),
+ "decoder_attention_mask": torch.LongTensor(decoder_ids["attention_mask"]),
+ }
+
+
+@dataclass
+class SafetyDataCollator:
+ def __call__(self, batch: List) -> Dict[str, torch.Tensor]:
+ """
+ Take a list of samples from a Dataset and collate them into a batch.
+ Returns:
+ A dictionary of tensors
+ """
+
+ input_ids = torch.stack([example["input_ids"] for example in batch])
+ lm_labels = torch.stack([example["decoder_input_ids"] for example in batch])
+ lm_labels[lm_labels[:, :] == 0] = -100
+ attention_mask = torch.stack([example["attention_mask"] for example in batch])
+ decoder_attention_mask = torch.stack([example["decoder_attention_mask"] for example in batch])
+
+ return {
+ "input_ids": input_ids,
+ "attention_mask": attention_mask,
+ "labels": lm_labels,
+ "decoder_attention_mask": decoder_attention_mask,
+ }
diff --git a/safety/model_training/t5_trainer.py b/safety/model_training/t5_trainer.py
new file mode 100644
index 0000000000..2776803b61
--- /dev/null
+++ b/safety/model_training/t5_trainer.py
@@ -0,0 +1,50 @@
+import os
+
+import hydra
+from custom_datasets.rot_dataset import SafetyDataCollator, SafetyDataset
+from datasets import load_dataset
+from hydra.utils import instantiate
+from omegaconf import DictConfig, OmegaConf
+from transformers import T5ForConditionalGeneration, T5Tokenizer, Trainer
+from utils import add_special_tokens
+
+
+@hydra.main(version_base=None, config_path="config", config_name="config")
+def train(cfg: DictConfig) -> None:
+ if not os.path.exists(cfg.save_folder):
+ os.mkdir(cfg.save_folder)
+
+ model = T5ForConditionalGeneration.from_pretrained(cfg.model)
+ tokenizer = T5Tokenizer.from_pretrained(
+ cfg.model,
+ padding_side=cfg.padding_side,
+ truncation_side=cfg.truncation_side,
+ model_max_length=model.config.n_positions,
+ )
+ add_special_tokens(cfg.special_tokens, tokenizer, model)
+ training_args = instantiate(cfg.trainer)
+
+ dataset = load_dataset(cfg.dataset.name)
+ train_dataset = SafetyDataset(
+ dataset, split=OmegaConf.to_object(cfg.dataset.train), tokenizer=tokenizer, max_len=cfg.max_length
+ )
+ valid_dataset = SafetyDataset(dataset, split=cfg.dataset.test, tokenizer=tokenizer, max_len=cfg.max_length)
+
+ # Initialize our Trainer
+ trainer = Trainer(
+ model=model,
+ args=training_args,
+ train_dataset=train_dataset,
+ eval_dataset=valid_dataset,
+ data_collator=SafetyDataCollator(),
+ )
+
+ # Training
+ trainer.train()
+
+ trainer.save_model(os.path.join(cfg.save_folder, f"{cfg.model_name}-model"))
+ tokenizer.save_vocabulary(os.path.join(cfg.save_folder, f"{cfg.model_name}-tokenizer"))
+
+
+if __name__ == "__main__":
+ train()
diff --git a/safety/model_training/utils.py b/safety/model_training/utils.py
new file mode 100644
index 0000000000..8102ea1056
--- /dev/null
+++ b/safety/model_training/utils.py
@@ -0,0 +1,7 @@
+def add_special_tokens(special_tokens, tokenizer, model):
+ for key, value in special_tokens.items():
+ setattr(tokenizer, key, value)
+ tokenizer.add_tokens([value])
+ setattr(tokenizer, key + "_id", tokenizer.encode(value)[0])
+
+ model.resize_token_embeddings(len(tokenizer))
diff --git a/safety/requirements.txt b/safety/requirements.txt
new file mode 100644
index 0000000000..dffd40c36a
--- /dev/null
+++ b/safety/requirements.txt
@@ -0,0 +1,2 @@
+hydra-core==1.3.2
+omegaconf==2.3.0
| The safety team has decided to maintain a `safety` folder. This folder will contain
1. Documentation related to safety models.
2. Reproducible training and evaluation code for safety models.
3. Inference pipelines for safety models
@ontocord
| https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/1972 | 2023-03-05T11:08:36Z | 2023-03-08T16:33:32Z | 2023-03-08T16:33:32Z | 2023-03-08T16:33:32Z | 2,637 | LAION-AI/Open-Assistant | 37,546 |
Fix issue with scrolling when package not found | diff --git a/thefuck/specific/archlinux.py b/thefuck/specific/archlinux.py
index 5816c50f0..5c95aa5bb 100644
--- a/thefuck/specific/archlinux.py
+++ b/thefuck/specific/archlinux.py
@@ -24,8 +24,11 @@ def get_pkgfile(command):
).splitlines()
return [package.split()[0] for package in packages]
- except subprocess.CalledProcessError:
- return None
+ except subprocess.CalledProcessError as err:
+ if err.returncode == 1 and err.output == "":
+ return []
+ else:
+ raise err
def archlinux_env():
| Fix issue with attempting to scroll through possible corrections when not-found package has no packages with matching names causing crash.
This behavior originates with commit 6624ecb3b85e82e9f1a08823f6e41ee805d35a9e, to my knowledge, where pacman rules were created. In both uses of `get_pkgfile` (in pacman.py and in pacman_not_found.py), it would be appropriate for `get_pkgfile` to return an empty list when the `pkgfile` does not find any packages by the correct name. However, the `pkgfile` command returns 1 in these circumstances, and as such `subprocess` raises an exception. Now, when this exception is caused by `pkgfile` returning 1 with no output (when it finds no packages), `get_pkgfile` will not cause a crash.
To reproduce bug:
```
yaourt -S e
fuck
```
then press ↑ or ↓ | https://api.github.com/repos/nvbn/thefuck/pulls/573 | 2016-11-03T07:50:06Z | 2017-03-13T12:47:17Z | 2017-03-13T12:47:17Z | 2017-03-13T12:47:24Z | 157 | nvbn/thefuck | 30,457 |
Fix typo | diff --git a/README.md b/README.md
index 0b576befe1..c11c8f3032 100644
--- a/README.md
+++ b/README.md
@@ -196,7 +196,7 @@ API | Description | Auth | HTTPS | CORS |
| [Open Library](https://openlibrary.org/developers/api) | Books, book covers and related data | No | Yes | No |
| [Penguin Publishing](http://www.penguinrandomhouse.biz/webservices/rest/) | Books, book covers and related data | No | Yes | Yes |
| [Quran](https://quran.api-docs.io/) | RESTful Quran API with multiple languages | No | Yes | Yes |
-| [Quran Cloud](https://alquran.cloud/api) | A RESTful Quran API to retrieve an Ayah, Surah, Juz or the enitre Holy Quran | No | Yes | Yes |
+| [Quran Cloud](https://alquran.cloud/api) | A RESTful Quran API to retrieve an Ayah, Surah, Juz or the entire Holy Quran | No | Yes | Yes |
| [Quran-api](https://github.com/fawazahmed0/quran-api#readme) | Free Quran API Service with 90+ different languages and 400+ translations | No | Yes | Yes |
| [Rig Veda](https://aninditabasu.github.io/indica/html/rv.html) | Gods and poets, their categories, and the verse meters, with the mandal and sukta number | No | Yes | Unknown |
| [The Bible](https://docs.api.bible) | Everything you need from the Bible in one discoverable place | `apiKey` | Yes | Unknown |
| <!-- Thank you for taking the time to work on a Pull Request for this project! -->
<!-- To ensure your PR is dealt with swiftly please check the following: -->
- [X ] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md)
- [X ] My addition is ordered alphabetically
- [X ] My submission has a useful description
- [X ] The description does not end with punctuation
- [X ] Each table column is padded with one space on either side
- [X ] I have searched the repository for any relevant issues or pull requests
- [X ] Any category I am creating has the minimum requirement of 3 items
- [X ] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/2062 | 2021-09-21T00:44:38Z | 2021-09-22T11:31:59Z | 2021-09-22T11:31:59Z | 2021-09-22T18:06:23Z | 376 | public-apis/public-apis | 35,235 |
Inline `_make_grid()` meshgrid | diff --git a/models/yolo.py b/models/yolo.py
index 7a7308312a1..fa05fcf9a8d 100644
--- a/models/yolo.py
+++ b/models/yolo.py
@@ -81,10 +81,7 @@ def _make_grid(self, nx=20, ny=20, i=0, torch_1_10=check_version(torch.__version
t = self.anchors[i].dtype
shape = 1, self.na, ny, nx, 2 # grid shape
y, x = torch.arange(ny, device=d, dtype=t), torch.arange(nx, device=d, dtype=t)
- if torch_1_10: # torch>=1.10.0 meshgrid workaround for torch>=0.7 compatibility
- yv, xv = torch.meshgrid(y, x, indexing='ij')
- else:
- yv, xv = torch.meshgrid(y, x)
+ yv, xv = torch.meshgrid(y, x, indexing='ij') if torch_1_10 else torch.meshgrid(y, x) # torch>=0.7 compatibility
grid = torch.stack((xv, yv), 2).expand(shape) - 0.5 # add grid offset, i.e. y = 2.0 * x - 0.5
anchor_grid = (self.anchors[i] * self.stride[i]).view((1, self.na, 1, 1, 2)).expand(shape)
return grid, anchor_grid
| Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
<!--
Thank you for submitting a YOLOv5 🚀 Pull Request! We want to make contributing to YOLOv5 as easy and transparent as possible. A few tips to get you started:
- Search existing YOLOv5 [PRs](https://github.com/ultralytics/yolov5/pull) to see if a similar PR already exists.
- Link this PR to a YOLOv5 [issue](https://github.com/ultralytics/yolov5/issues) to help us understand what bug fix or feature is being implemented.
- Provide before and after profiling/inference/training results to help us quantify the improvement your PR provides (if applicable).
Please see our ✅ [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) for more details.
-->
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>
### 🌟 Summary
Refines YOLOv5 grid generation while preserving PyTorch version compatibility.
### 📊 Key Changes
- Consolidated conditional code for grid generation into a single line.
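
A hedged, standalone illustration of the consolidated branch (this sketch is ours, not part of the PR; it only requires `torch`):

```python
# Illustrative only: the version-dependent meshgrid call and grid stacking
# from the diff, run on toy tensor sizes.
import torch

y, x = torch.arange(3), torch.arange(4)
yv, xv = torch.meshgrid(y, x, indexing="ij")   # torch >= 1.10
# on older torch: yv, xv = torch.meshgrid(y, x)  # same 'ij' layout by default
grid = torch.stack((xv, yv), 2)
print(grid.shape)  # torch.Size([3, 4, 2])
```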
### 🎯 Purpose & Impact
- 🧹 Cleans up the code by removing unnecessary conditionals.
- 🚀 Simplifies maintenance by reducing code complexity.
- ✨ Ensures compatibility with various versions of PyTorch (>=0.7) without altering functionality. | https://api.github.com/repos/ultralytics/yolov5/pulls/9170 | 2022-08-26T13:07:52Z | 2022-08-26T13:29:31Z | 2022-08-26T13:29:31Z | 2024-01-19T06:39:33Z | 342 | ultralytics/yolov5 | 24,732 |
gh-108765: Python.h no longer includes <unistd.h> | diff --git a/Doc/whatsnew/3.13.rst b/Doc/whatsnew/3.13.rst
index e7b60ddbdbbfda..c4fb328db9cfa0 100644
--- a/Doc/whatsnew/3.13.rst
+++ b/Doc/whatsnew/3.13.rst
@@ -921,6 +921,11 @@ Porting to Python 3.13
also the ``HAVE_IEEEFP_H`` macro.
(Contributed by Victor Stinner in :gh:`108765`.)
+* ``Python.h`` no longer includes the ``<unistd.h>`` standard header file. If
+ needed, it should now be included explicitly. For example, it provides the
+ functions: ``close()``, ``getpagesize()``, ``getpid()`` and ``sysconf()``.
+ (Contributed by Victor Stinner in :gh:`108765`.)
+
Deprecated
----------
diff --git a/Include/Python.h b/Include/Python.h
index 002a79dbdc9362..4cc72bb23ce7a3 100644
--- a/Include/Python.h
+++ b/Include/Python.h
@@ -26,14 +26,13 @@
#ifdef HAVE_STDDEF_H
# include <stddef.h> // size_t
#endif
-#ifndef MS_WINDOWS
-# include <unistd.h> // sysconf()
+#ifdef HAVE_SYS_TYPES_H
+# include <sys/types.h> // ssize_t
#endif
-// errno.h, stdio.h, stdlib.h and string.h headers are no longer used by Python
-// headers, but kept for backward compatibility (no introduce new compiler
-// warnings). They are not included by the limited C API version 3.11 and
-// above.
+// errno.h, stdio.h, stdlib.h and string.h headers are no longer used by
+// Python, but kept for backward compatibility (avoid compiler warnings).
+// They are no longer included by limited C API version 3.11 and newer.
#if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 < 0x030b0000
# include <errno.h> // errno
# include <stdio.h> // FILE*
diff --git a/Misc/NEWS.d/next/C API/2023-09-01-21-10-29.gh-issue-108765.eeXtYF.rst b/Misc/NEWS.d/next/C API/2023-09-01-21-10-29.gh-issue-108765.eeXtYF.rst
new file mode 100644
index 00000000000000..ff8f79998fa968
--- /dev/null
+++ b/Misc/NEWS.d/next/C API/2023-09-01-21-10-29.gh-issue-108765.eeXtYF.rst
@@ -0,0 +1,4 @@
+``Python.h`` no longer includes the ``<unistd.h>`` standard header file. If
+needed, it should now be included explicitly. For example, it provides the
+functions: ``close()``, ``getpagesize()``, ``getpid()`` and ``sysconf()``.
+Patch by Victor Stinner.
diff --git a/Modules/_ctypes/malloc_closure.c b/Modules/_ctypes/malloc_closure.c
index 3a859322772ba7..bb4f8f21bd3f77 100644
--- a/Modules/_ctypes/malloc_closure.c
+++ b/Modules/_ctypes/malloc_closure.c
@@ -1,16 +1,17 @@
#ifndef Py_BUILD_CORE_BUILTIN
# define Py_BUILD_CORE_MODULE 1
#endif
+
#include <Python.h>
#include <ffi.h>
#ifdef MS_WIN32
-#include <windows.h>
+# include <windows.h>
#else
-#include <sys/mman.h>
-#include <unistd.h>
-# if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
-# define MAP_ANONYMOUS MAP_ANON
-# endif
+# include <sys/mman.h>
+# include <unistd.h> // sysconf()
+# if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
+# define MAP_ANONYMOUS MAP_ANON
+# endif
#endif
#include "ctypes.h"
diff --git a/Modules/_posixsubprocess.c b/Modules/_posixsubprocess.c
index ac2b0d4f55468c..ef76d26282e1b3 100644
--- a/Modules/_posixsubprocess.c
+++ b/Modules/_posixsubprocess.c
@@ -8,28 +8,28 @@
#include "pycore_pystate.h"
#include "pycore_signal.h" // _Py_RestoreSignals()
#if defined(HAVE_PIPE2) && !defined(_GNU_SOURCE)
-# define _GNU_SOURCE
+# define _GNU_SOURCE
#endif
-#include <unistd.h>
-#include <fcntl.h>
+#include <unistd.h> // close()
+#include <fcntl.h> // fcntl()
#ifdef HAVE_SYS_TYPES_H
-#include <sys/types.h>
+# include <sys/types.h>
#endif
#if defined(HAVE_SYS_STAT_H)
-#include <sys/stat.h>
+# include <sys/stat.h> // stat()
#endif
#ifdef HAVE_SYS_SYSCALL_H
-#include <sys/syscall.h>
+# include <sys/syscall.h>
#endif
#if defined(HAVE_SYS_RESOURCE_H)
-#include <sys/resource.h>
+# include <sys/resource.h>
#endif
#ifdef HAVE_DIRENT_H
-#include <dirent.h>
+# include <dirent.h> // opendir()
+#endif
+#if defined(HAVE_SETGROUPS)
+# include <grp.h> // setgroups()
#endif
-#ifdef HAVE_GRP_H
-#include <grp.h>
-#endif /* HAVE_GRP_H */
#include "posixmodule.h"
diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c
index 4fc354ae79bfed..ab33702cdfd872 100644
--- a/Modules/_testcapimodule.c
+++ b/Modules/_testcapimodule.c
@@ -24,9 +24,6 @@
#include <float.h> // FLT_MAX
#include <signal.h>
#include <stddef.h> // offsetof()
-#ifndef MS_WINDOWS
-# include <unistd.h>
-#endif
#ifdef HAVE_SYS_WAIT_H
# include <sys/wait.h> // W_STOPCODE
diff --git a/Modules/grpmodule.c b/Modules/grpmodule.c
index f5709296334a8f..20e83de84e8340 100644
--- a/Modules/grpmodule.c
+++ b/Modules/grpmodule.c
@@ -4,7 +4,8 @@
#include "Python.h"
#include "posixmodule.h"
-#include <grp.h>
+#include <grp.h> // getgrgid_r()
+#include <unistd.h> // sysconf()
#include "clinic/grpmodule.c.h"
/*[clinic input]
diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c
index c8cd7e59dbab50..d11200a4042551 100644
--- a/Modules/mmapmodule.c
+++ b/Modules/mmapmodule.c
@@ -28,6 +28,9 @@
#include "pycore_fileutils.h" // _Py_stat_struct
#include <stddef.h> // offsetof()
+#ifndef MS_WINDOWS
+# include <unistd.h> // close()
+#endif
// to support MS_WINDOWS_SYSTEM OpenFileMappingA / CreateFileMappingA
// need to be replaced with OpenFileMappingW / CreateFileMappingW
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
index 761542866d8f96..6e829b200fa46d 100644
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -286,7 +286,7 @@ corresponding Unix manual entries for more information on calls.");
#endif
#ifdef HAVE_COPY_FILE_RANGE
-# include <unistd.h>
+# include <unistd.h> // copy_file_range()
#endif
#if !defined(CPU_ALLOC) && defined(HAVE_SCHED_SETAFFINITY)
diff --git a/Modules/pwdmodule.c b/Modules/pwdmodule.c
index cc2e2a43893971..b7034369c4731e 100644
--- a/Modules/pwdmodule.c
+++ b/Modules/pwdmodule.c
@@ -4,7 +4,8 @@
#include "Python.h"
#include "posixmodule.h"
-#include <pwd.h>
+#include <pwd.h> // getpwuid()
+#include <unistd.h> // sysconf()
#include "clinic/pwdmodule.c.h"
/*[clinic input]
diff --git a/Modules/resource.c b/Modules/resource.c
index 4614f5e98cc888..f5d9972d9a8ff7 100644
--- a/Modules/resource.c
+++ b/Modules/resource.c
@@ -1,13 +1,12 @@
-
#include "Python.h"
-#include <sys/resource.h>
+#include <errno.h> // errno
+#include <string.h>
+#include <sys/resource.h> // getrusage()
#ifdef HAVE_SYS_TIME_H
-#include <sys/time.h>
+# include <sys/time.h>
#endif
#include <time.h>
-#include <string.h>
-#include <errno.h>
-#include <unistd.h>
+#include <unistd.h> // getpagesize()
/* On some systems, these aren't in any header file.
On others they are, with inconsistent prototypes.
diff --git a/Modules/selectmodule.c b/Modules/selectmodule.c
index 4987cf0f2065c2..c56e682b21e2a1 100644
--- a/Modules/selectmodule.c
+++ b/Modules/selectmodule.c
@@ -17,6 +17,9 @@
#include "pycore_time.h" // _PyTime_t
#include <stddef.h> // offsetof()
+#ifndef MS_WINDOWS
+# include <unistd.h> // close()
+#endif
#ifdef HAVE_SYS_DEVPOLL_H
#include <sys/resource.h>
diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c
index 2f12c9cedbd8a6..74b1c1c661604f 100644
--- a/Modules/socketmodule.c
+++ b/Modules/socketmodule.c
@@ -269,7 +269,7 @@ shutdown(how) -- shut down traffic in one or both directions\n\
#ifdef HAVE_NETDB_H
# include <netdb.h>
#endif
-# include <unistd.h>
+#include <unistd.h> // close()
/* Headers needed for inet_ntoa() and inet_addr() */
# include <arpa/inet.h>
diff --git a/Programs/_freeze_module.c b/Programs/_freeze_module.c
index e55f1d56745c4d..f6c46fa629efba 100644
--- a/Programs/_freeze_module.c
+++ b/Programs/_freeze_module.c
@@ -19,7 +19,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#ifndef MS_WINDOWS
-#include <unistd.h>
+# include <unistd.h>
#endif
uint32_t _Py_next_func_version = 1;
diff --git a/Python/dup2.c b/Python/dup2.c
index a1df0492099163..936211f27ec737 100644
--- a/Python/dup2.c
+++ b/Python/dup2.c
@@ -11,9 +11,9 @@
* Return fd2 if all went well; return BADEXIT otherwise.
*/
-#include <errno.h>
-#include <fcntl.h>
-#include <unistd.h>
+#include <errno.h> // errno
+#include <fcntl.h> // fcntl()
+#include <unistd.h> // close()
#define BADEXIT -1
diff --git a/Python/perf_trampoline.c b/Python/perf_trampoline.c
index b8885a303977d0..10675bf9f8292a 100644
--- a/Python/perf_trampoline.c
+++ b/Python/perf_trampoline.c
@@ -140,9 +140,9 @@ any DWARF information available for them).
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
-#include <sys/mman.h>
+#include <sys/mman.h> // mmap()
#include <sys/types.h>
-#include <unistd.h>
+#include <unistd.h> // sysconf()
#if defined(__arm__) || defined(__arm64__) || defined(__aarch64__)
#define PY_HAVE_INVALIDATE_ICACHE
| * Issue: gh-108765
----
:books: Documentation preview :books:: https://cpython-previews--108783.org.readthedocs.build/ | https://api.github.com/repos/python/cpython/pulls/108783 | 2023-09-01T19:11:49Z | 2023-09-02T14:50:18Z | 2023-09-02T14:50:18Z | 2023-10-27T20:28:40Z | 2,936 | python/cpython | 4,344
Simplify PreparedRequest.prepare API | diff --git a/requests/models.py b/requests/models.py
index 752c58c153..45b3ea9680 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -523,6 +523,10 @@ def prepare_cookies(self, cookies):
def prepare_hooks(self, hooks):
"""Prepares the given hooks."""
+ # hooks can be passed as None to the prepare method and to this
+ # method. To prevent iterating over None, simply use an empty list
+ # if hooks is False-y
+ hooks = hooks or []
for event in hooks:
self.register_hook(event, hooks[event])
diff --git a/test_requests.py b/test_requests.py
index 15406a22fc..cad8c055c8 100755
--- a/test_requests.py
+++ b/test_requests.py
@@ -1613,7 +1613,6 @@ def test_prepare_unicode_url():
p.prepare(
method='GET',
url=u('http://www.example.com/üniçø∂é'),
- hooks=[]
)
assert_copy(p, p.copy())
| Do not require that hooks be passed as an empty list to
PreparedRequest.prepare. In the event hooks is None in prepare or
prepare_hooks, use an empty list as a default.
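
A minimal sketch of the resulting ergonomics (our example, not from the PR; the URL is a placeholder):

```python
# After this change, prepare() can be called without an explicit hooks=[].
from requests.models import PreparedRequest

p = PreparedRequest()
p.prepare(method="GET", url="https://example.com")  # hooks defaults to None
print(p.method, p.url)  # GET https://example.com/
```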
Related to #2552
| https://api.github.com/repos/psf/requests/pulls/2553 | 2015-04-21T01:14:53Z | 2015-04-21T05:59:55Z | 2015-04-21T05:59:55Z | 2021-09-08T08:00:49Z | 250 | psf/requests | 33,005 |
Remove elements that don't add value in ES.84 | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 7db626d82..1d437b249 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -9884,7 +9884,7 @@ Statement rules:
* [ES.77: Minimize the use of `break` and `continue` in loops](#Res-continue)
* [ES.78: Always end a non-empty `case` with a `break`](#Res-break)
* [ES.79: Use `default` to handle common cases (only)](#Res-default)
-* [ES.84: Don't (try to) declare a local variable with no name](#Res-noname)
+* [ES.84: Don't try to declare a local variable with no name](#Res-noname)
* [ES.85: Make empty statements visible](#Res-empty)
* [ES.86: Avoid modifying loop control variables inside the body of raw for-loops](#Res-loop-counter)
* [ES.87: Don't add redundant `==` or `!=` to conditions](#Res-if)
@@ -12789,13 +12789,12 @@ Flag `switch`-statements over an enumeration that don't handle all enumerators a
This may yield too many false positives in some code bases; if so, flag only `switch`es that handle most but not all cases
(that was the strategy of the very first C++ compiler).
-### <a name="Res-noname"></a>ES.84: Don't (try to) declare a local variable with no name
+### <a name="Res-noname"></a>ES.84: Don't try to declare a local variable with no name
##### Reason
There is no such thing.
What looks to a human like a variable without a name is to the compiler a statement consisting of a temporary that immediately goes out of scope.
-To avoid unpleasant surprises.
##### Example, bad
@@ -12808,7 +12807,6 @@ To avoid unpleasant surprises.
This declares an unnamed `lock` object that immediately goes out of scope at the point of the semicolon.
This is not an uncommon mistake.
In particular, this particular example can lead to hard-to find race conditions.
-There are exceedingly clever uses of this "idiom", but they are far rarer than the mistakes.
##### Note
@@ -12816,7 +12814,7 @@ Unnamed function arguments are fine.
##### Enforcement
-Flag statements that are just a temporary
+Flag statements that are just a temporary.
### <a name="Res-empty"></a>ES.85: Make empty statements visible
| - Parentheses around "(try to)" in rule title add no meaning.
- The sentence fragment "To avoid unpleasant surprises." in Reason adds no info that hasn't already been stated.
- "There are exceedingly clever uses of this 'idiom', but..." seems like a distraction instead of a tight conclusion to the example section. We're not seeking "exceedingly clever" in this guide. Per Bjarne's own words, we're seeking the "smaller, simpler, safer language struggling to get out."
- Added a period to the end of the Enforcement sentence.
| https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/1390 | 2019-03-17T13:15:39Z | 2019-03-21T18:04:50Z | 2019-03-21T18:04:50Z | 2019-04-02T00:44:27Z | 579 | isocpp/CppCoreGuidelines | 15,926 |
fix --certs argument | diff --git a/docs/src/content/concepts-certificates.md b/docs/src/content/concepts-certificates.md
index ebc5ede990..0fc32c696c 100644
--- a/docs/src/content/concepts-certificates.md
+++ b/docs/src/content/concepts-certificates.md
@@ -93,7 +93,7 @@ The files created by mitmproxy in the .mitmproxy directory are as follows:
## Using a custom server certificate
-You can use your own (leaf) certificate by passing the `--cert
+You can use your own (leaf) certificate by passing the `--certs
[domain=]path_to_certificate` option to mitmproxy. Mitmproxy then uses the
provided certificate for interception of the specified domain instead of
generating a certificate signed by its own CA.
@@ -127,13 +127,13 @@ Now, you can run mitmproxy with the generated certificate:
**For all domain names**
```bash
-mitmproxy --cert *=cert.pem
+mitmproxy --certs *=cert.pem
```
**For specific domain names**
```bash
-mitmproxy --cert *.example.com=cert.pem
+mitmproxy --certs *.example.com=cert.pem
```
**Note:** `*.example.com` is for all the subdomains. You can also use
| The help output claims that `--certs` is correct. | https://api.github.com/repos/mitmproxy/mitmproxy/pulls/4412 | 2021-01-24T21:05:38Z | 2021-01-24T21:12:17Z | 2021-01-24T21:12:17Z | 2021-01-31T09:05:58Z | 287 | mitmproxy/mitmproxy | 27,421
readme.md dataset table formatting
index 914c82eff7..1f5dbcf9ed 100644
--- a/model/model_training/README.md
+++ b/model/model_training/README.md
@@ -212,10 +212,9 @@ deepspeed trainer_sft.py --configs defaults your-model-name --deepspeed
Here is an uncomplete overview of datasets for sft:
<!-- prettier-ignore -->
+<!-- prettier-ignore-start -->
dataset_name | train_counts | eval_counts | total_counts
-----------------------------------------------------------------
-
-<!-- prettier-ignore -->
+--|--|--|--
joke | 301 | 76 | 377
webgpt | 14251 | 3563 | 17814
gpt4all | 313552 | 78388 | 391940
@@ -233,6 +232,7 @@ prosocial_dialogue | 157160 | 26983 | 184143
explain_prosocial | 360708 | 61248 | 421956
soda | 924102 | 231026 | 1155128
oa_leet10k | 18728 | 4683 | 23411
+<!-- prettier-ignore-end -->
This list can be generated with the following command, but beware that this
downloads all available datasets (>100GB):
| Fix the markdown table formatting in model_training/README.md; the previous layout was broken and did not render as a table.
| https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/3219 | 2023-05-23T23:43:15Z | 2023-05-26T21:18:13Z | 2023-05-26T21:18:13Z | 2023-05-26T21:18:14Z | 329 | LAION-AI/Open-Assistant | 37,250 |
Enhancement Adding not existent lambda name in response headers | diff --git a/localstack/services/awslambda/lambda_api.py b/localstack/services/awslambda/lambda_api.py
index 36e4345824805..c4c74d2d2a3e7 100644
--- a/localstack/services/awslambda/lambda_api.py
+++ b/localstack/services/awslambda/lambda_api.py
@@ -793,19 +793,23 @@ def forward_to_fallback_url(func_arn, data):
Lambda to the configured URL. """
if not config.LAMBDA_FALLBACK_URL:
return None
+
+ lambda_name = aws_stack.lambda_function_name(func_arn)
if config.LAMBDA_FALLBACK_URL.startswith('dynamodb://'):
table_name = urlparse(config.LAMBDA_FALLBACK_URL.replace('dynamodb://', 'http://')).netloc
dynamodb = aws_stack.connect_to_service('dynamodb')
item = {
'id': {'S': short_uid()},
'timestamp': {'N': str(now_utc())},
- 'payload': {'S': str(data)}
+ 'payload': {'S': str(data)},
+ 'function_name': {'S': lambda_name}
}
aws_stack.create_dynamodb_table(table_name, partition_key='id')
dynamodb.put_item(TableName=table_name, Item=item)
return ''
if re.match(r'^https?://.+', config.LAMBDA_FALLBACK_URL):
- response = safe_requests.post(config.LAMBDA_FALLBACK_URL, data)
+ headers = {'lambda-function-name': lambda_name}
+ response = safe_requests.post(config.LAMBDA_FALLBACK_URL, data, headers=headers)
return response.content
raise ClientError('Unexpected value for LAMBDA_FALLBACK_URL: %s' % config.LAMBDA_FALLBACK_URL)
@@ -1144,14 +1148,14 @@ def invoke_function(function):
# Default invocation type is RequestResponse
invocation_type = request.environ.get('HTTP_X_AMZ_INVOCATION_TYPE', 'RequestResponse')
- def _create_response(result, status_code=200):
+ def _create_response(result, status_code=200, headers={}):
""" Create the final response for the given invocation result """
if isinstance(result, Response):
return result
details = {
'StatusCode': status_code,
'Payload': result,
- 'Headers': {}
+ 'Headers': headers
}
if isinstance(result, dict):
for key in ('StatusCode', 'Payload', 'FunctionError'):
@@ -1187,7 +1191,7 @@ def _create_response(result, status_code=200):
not_found = not_found_error('{0}:{1}'.format(arn, qualifier))
if not_found:
- forward_result = forward_to_fallback_url(func_arn, data)
+ forward_result = forward_to_fallback_url(arn, data)
if forward_result is not None:
return _create_response(forward_result)
return not_found
diff --git a/tests/integration/test_lambda.py b/tests/integration/test_lambda.py
index 9089e6ef6fd70..f61952e453769 100644
--- a/tests/integration/test_lambda.py
+++ b/tests/integration/test_lambda.py
@@ -120,7 +120,7 @@ def num_items():
def test_forward_to_fallback_url_http(self):
class MyUpdateListener(ProxyListener):
def forward_request(self, method, path, data, headers):
- records.append(data)
+ records.append({'data': data, 'headers': headers})
return 200
records = []
@@ -130,9 +130,26 @@ def forward_request(self, method, path, data, headers):
items_before = len(records)
_run_forward_to_fallback_url('%s://localhost:%s' % (get_service_protocol(), local_port))
items_after = len(records)
+ for record in records:
+ self.assertIn('non-existing-lambda', record['headers']['lambda-function-name'])
+
self.assertEqual(items_after, items_before + 3)
proxy.stop()
+ def test_adding_fallback_function_name_in_headers(self):
+
+ lambda_client = aws_stack.connect_to_service('lambda')
+ ddb_client = aws_stack.connect_to_service('dynamodb')
+
+ db_table = 'lambda-records'
+ config.LAMBDA_FALLBACK_URL = 'dynamodb://%s' % db_table
+
+ lambda_client.invoke(FunctionName='non-existing-lambda',
+ Payload=b'{}', InvocationType='RequestResponse')
+
+ result = run_safe(ddb_client.scan, TableName=db_table)
+ self.assertEqual(result['Items'][0]['function_name']['S'], 'non-existing-lambda')
+
def test_dead_letter_queue(self):
sqs_client = aws_stack.connect_to_service('sqs')
lambda_client = aws_stack.connect_to_service('lambda')
| Enhancement: add the name of the non-existent Lambda function to the fallback response headers. Fixes #1971 | https://api.github.com/repos/localstack/localstack/pulls/2397 | 2020-05-05T21:09:13Z | 2020-05-07T21:44:17Z | 2020-05-07T21:44:17Z | 2020-05-07T21:44:17Z | 1,054 | localstack/localstack | 28,759
Remove extra spider parameter in item pipeline docs | diff --git a/docs/topics/item-pipeline.rst b/docs/topics/item-pipeline.rst
index bc26bbebe55..a5f6e07b89d 100644
--- a/docs/topics/item-pipeline.rst
+++ b/docs/topics/item-pipeline.rst
@@ -215,7 +215,7 @@ item.
screenshot_url = self.SPLASH_URL.format(encoded_item_url)
request = scrapy.Request(screenshot_url, callback=NO_CALLBACK)
response = await maybe_deferred_to_future(
- spider.crawler.engine.download(request, spider)
+ spider.crawler.engine.download(request)
)
if response.status != 200:
| Fixes #6008
I looked in other files and this is the only place where we still have an example of passing an extra spider parameter. | https://api.github.com/repos/scrapy/scrapy/pulls/6009 | 2023-08-10T11:46:17Z | 2023-08-10T11:48:44Z | 2023-08-10T11:48:44Z | 2023-08-10T11:48:44Z | 143 | scrapy/scrapy | 34,279 |
[events] add tests for ErrorEvent event types | diff --git a/tests/sentry/eventtypes/__init__.py b/tests/sentry/eventtypes/__init__.py
new file mode 100644
index 0000000000000..c3961685ab8de
--- /dev/null
+++ b/tests/sentry/eventtypes/__init__.py
@@ -0,0 +1 @@
+from __future__ import absolute_import
diff --git a/tests/sentry/eventtypes/test_error.py b/tests/sentry/eventtypes/test_error.py
new file mode 100644
index 0000000000000..b4d8108df37f6
--- /dev/null
+++ b/tests/sentry/eventtypes/test_error.py
@@ -0,0 +1,21 @@
+from __future__ import absolute_import
+
+from sentry.eventtypes import ErrorEvent
+from sentry.testutils import TestCase
+
+
+class ErrorEventTest(TestCase):
+ def test_to_string_none_value(self):
+ inst = ErrorEvent({})
+ result = inst.to_string({'type': 'Error', 'value': None})
+ assert result == 'Error'
+
+ def test_to_string_eliminates_values_with_newline(self):
+ inst = ErrorEvent({})
+ result = inst.to_string({'type': 'Error', 'value': 'foo\nbar'})
+ assert result == 'Error: foo'
+
+ def test_to_string_handles_empty_value(self):
+ inst = ErrorEvent({})
+ result = inst.to_string({'type': 'Error', 'value': ''})
+ assert result == 'Error'
| https://api.github.com/repos/getsentry/sentry/pulls/4261 | 2016-10-03T23:26:17Z | 2016-10-03T23:36:52Z | 2016-10-03T23:36:52Z | 2020-12-23T10:01:48Z | 335 | getsentry/sentry | 44,182 |
|
R.3_609: changed owner<T> to owner<T*> in R.3 per issue #609 | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 082bbf524..26c3de932 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -7232,7 +7232,7 @@ We can fix that problem by making ownership explicit:
class X2 {
// ...
public:
- owner<T> p; // OK: p is owning
+ owner<T*> p; // OK: p is owning
T* q; // OK: q is not owning
};
@@ -7256,9 +7256,9 @@ Some interfaces cannot be simply annotated with `owner` because they need to rem
##### Note
-`owner<T>` has no default semantics beyond `T*`. It can be used without changing any code using it and without affecting ABIs.
+`owner<T*>` has no default semantics beyond `T*`. It can be used without changing any code using it and without affecting ABIs.
It is simply a indicator to programmers and analysis tools.
-For example, if an `owner<T>` is a member of a class, that class better have a destructor that `delete`s it.
+For example, if an `owner<T*>` is a member of a class, that class better have a destructor that `delete`s it.
##### Example, bad
| In guideline R.3: changed `owner<T>` references to `owner<T*>`, as described in issue #609.
| https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/612 | 2016-05-19T01:46:53Z | 2016-05-19T15:18:30Z | 2016-05-19T15:18:30Z | 2016-05-19T17:07:01Z | 301 | isocpp/CppCoreGuidelines | 15,869 |
Fortinet's FortiOS user adgrp | diff --git a/lib/ansible/modules/network/fortios/fortios_user_adgrp.py b/lib/ansible/modules/network/fortios/fortios_user_adgrp.py
new file mode 100644
index 00000000000000..7cc8a1c8378594
--- /dev/null
+++ b/lib/ansible/modules/network/fortios/fortios_user_adgrp.py
@@ -0,0 +1,260 @@
+#!/usr/bin/python
+from __future__ import (absolute_import, division, print_function)
+# Copyright 2019 Fortinet, Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+__metaclass__ = type
+
+ANSIBLE_METADATA = {'status': ['preview'],
+ 'supported_by': 'community',
+ 'metadata_version': '1.1'}
+
+DOCUMENTATION = '''
+---
+module: fortios_user_adgrp
+short_description: Configure FSSO groups in Fortinet's FortiOS and FortiGate.
+description:
+ - This module is able to configure a FortiGate or FortiOS by allowing the
+ user to set and modify user feature and adgrp category.
+ Examples include all parameters and values need to be adjusted to datasources before usage.
+ Tested with FOS v6.0.2
+version_added: "2.8"
+author:
+ - Miguel Angel Munoz (@mamunozgonzalez)
+ - Nicolas Thomas (@thomnico)
+notes:
+ - Requires fortiosapi library developed by Fortinet
+ - Run as a local_action in your playbook
+requirements:
+ - fortiosapi>=0.9.8
+options:
+ host:
+ description:
+ - FortiOS or FortiGate ip address.
+ required: true
+ username:
+ description:
+ - FortiOS or FortiGate username.
+ required: true
+ password:
+ description:
+ - FortiOS or FortiGate password.
+ default: ""
+ vdom:
+ description:
+ - Virtual domain, among those defined previously. A vdom is a
+ virtual instance of the FortiGate that can be configured and
+ used as a different unit.
+ default: root
+ https:
+ description:
+ - Indicates if the requests towards FortiGate must use HTTPS
+ protocol
+ type: bool
+ default: true
+ user_adgrp:
+ description:
+ - Configure FSSO groups.
+ default: null
+ suboptions:
+ state:
+ description:
+ - Indicates whether to create or remove the object
+ choices:
+ - present
+ - absent
+ name:
+ description:
+ - Name.
+ required: true
+ server-name:
+ description:
+ - FSSO agent name. Source user.fsso.name.
+'''
+
+EXAMPLES = '''
+- hosts: localhost
+ vars:
+ host: "192.168.122.40"
+ username: "admin"
+ password: ""
+ vdom: "root"
+ tasks:
+ - name: Configure FSSO groups.
+ fortios_user_adgrp:
+ host: "{{ host }}"
+ username: "{{ username }}"
+ password: "{{ password }}"
+ vdom: "{{ vdom }}"
+ https: "False"
+ user_adgrp:
+ state: "present"
+ name: "default_name_3"
+ server-name: "<your_own_value> (source user.fsso.name)"
+'''
+
+RETURN = '''
+build:
+ description: Build number of the fortigate image
+ returned: always
+ type: str
+ sample: '1547'
+http_method:
+ description: Last method used to provision the content into FortiGate
+ returned: always
+ type: str
+ sample: 'PUT'
+http_status:
+ description: Last result given by FortiGate on last operation applied
+ returned: always
+ type: str
+ sample: "200"
+mkey:
+ description: Master key (id) used in the last call to FortiGate
+ returned: success
+ type: str
+ sample: "id"
+name:
+ description: Name of the table used to fulfill the request
+ returned: always
+ type: str
+ sample: "urlfilter"
+path:
+ description: Path of the table used to fulfill the request
+ returned: always
+ type: str
+ sample: "webfilter"
+revision:
+ description: Internal revision number
+ returned: always
+ type: str
+ sample: "17.0.2.10658"
+serial:
+ description: Serial number of the unit
+ returned: always
+ type: str
+ sample: "FGVMEVYYQT3AB5352"
+status:
+ description: Indication of the operation's result
+ returned: always
+ type: str
+ sample: "success"
+vdom:
+ description: Virtual domain used
+ returned: always
+ type: str
+ sample: "root"
+version:
+ description: Version of the FortiGate
+ returned: always
+ type: str
+ sample: "v5.6.3"
+
+'''
+
+from ansible.module_utils.basic import AnsibleModule
+
+
+def login(data, fos):
+ host = data['host']
+ username = data['username']
+ password = data['password']
+
+ fos.debug('on')
+ if 'https' in data and not data['https']:
+ fos.https('off')
+ else:
+ fos.https('on')
+
+ fos.login(host, username, password)
+
+
+def filter_user_adgrp_data(json):
+ option_list = ['name', 'server-name']
+ dictionary = {}
+
+ for attribute in option_list:
+ if attribute in json and json[attribute] is not None:
+ dictionary[attribute] = json[attribute]
+
+ return dictionary
+
+
+def user_adgrp(data, fos):
+ vdom = data['vdom']
+ user_adgrp_data = data['user_adgrp']
+ filtered_data = filter_user_adgrp_data(user_adgrp_data)
+
+ if user_adgrp_data['state'] == "present":
+ return fos.set('user',
+ 'adgrp',
+ data=filtered_data,
+ vdom=vdom)
+
+ elif user_adgrp_data['state'] == "absent":
+ return fos.delete('user',
+ 'adgrp',
+ mkey=filtered_data['name'],
+ vdom=vdom)
+
+
+def fortios_user(data, fos):
+ login(data, fos)
+
+ if data['user_adgrp']:
+ resp = user_adgrp(data, fos)
+
+ fos.logout()
+ return not resp['status'] == "success", resp['status'] == "success", resp
+
+
+def main():
+ fields = {
+ "host": {"required": True, "type": "str"},
+ "username": {"required": True, "type": "str"},
+ "password": {"required": False, "type": "str", "no_log": True},
+ "vdom": {"required": False, "type": "str", "default": "root"},
+ "https": {"required": False, "type": "bool", "default": True},
+ "user_adgrp": {
+ "required": False, "type": "dict",
+ "options": {
+ "state": {"required": True, "type": "str",
+ "choices": ["present", "absent"]},
+ "name": {"required": True, "type": "str"},
+ "server-name": {"required": False, "type": "str"}
+
+ }
+ }
+ }
+
+ module = AnsibleModule(argument_spec=fields,
+ supports_check_mode=False)
+ try:
+ from fortiosapi import FortiOSAPI
+ except ImportError:
+ module.fail_json(msg="fortiosapi module is required")
+
+ fos = FortiOSAPI()
+
+ is_error, has_changed, result = fortios_user(module.params, fos)
+
+ if not is_error:
+ module.exit_json(changed=has_changed, meta=result)
+ else:
+ module.fail_json(msg="Error in repo", meta=result)
+
+
+if __name__ == '__main__':
+ main()
| ##### SUMMARY
Fortinet is adding Ansible support for FortiOS and FortiGate products. This module follows the same structure, guidelines and ideas given in a previously approved module for a parallel FortiGate feature (web filtering): https://github.com/ansible/ansible/pull/37196
In this case we are providing different functionality: "User Adgrp".
Please note that this will be part of other modules to come for FortiGate, covering further functionality: system, wireless-controller, firewall, webfilter, ips, web-proxy, wanopt, application, dlp, spamfilter, log, vpn, certificate, user, dnsfilter, antivirus, report, waf, authentication, switch controller, endpoint-control and router. We plan to follow the same style, structure and usage as in the previous module in order to make it easier to comply with Ansible guidelines.
##### ISSUE TYPE
- New Module Pull Request
##### COMPONENT NAME
fortios_user_adgrp
##### ANSIBLE VERSION
```
ansible 2.8.0.dev0 (new_module ddbbe5dfa5) last updated 2018/09/24 14:54:57 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/magonzalez/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/magonzalez/ansible/lib/ansible
executable location = /home/magonzalez/ansible/bin/ansible
python version = 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0]
``` | https://api.github.com/repos/ansible/ansible/pulls/52831 | 2019-02-22T16:48:48Z | 2019-03-05T11:18:48Z | 2019-03-05T11:18:48Z | 2019-07-25T16:57:44Z | 2,065 | ansible/ansible | 48,937 |
Touch up venv docs | diff --git a/Doc/library/venv.rst b/Doc/library/venv.rst
index d3d5ae2b007d5f..62732d22438672 100644
--- a/Doc/library/venv.rst
+++ b/Doc/library/venv.rst
@@ -47,7 +47,7 @@ Creating virtual environments
A virtual environment is a directory tree which contains Python executable
files and other files which indicate that it is a virtual environment.
- Common installation tools such as ``Setuptools`` and ``pip`` work as
+ Common installation tools such as setuptools_ and pip_ work as
expected with virtual environments. In other words, when a virtual
environment is active, they install Python packages into the virtual
environment without needing to be told to do so explicitly.
@@ -64,24 +64,25 @@ Creating virtual environments
Python installation).
When a virtual environment is active, any options that change the
- installation path will be ignored from all distutils configuration files to
- prevent projects being inadvertently installed outside of the virtual
- environment.
+ installation path will be ignored from all :mod:`distutils` configuration
+ files to prevent projects being inadvertently installed outside of the
+ virtual environment.
When working in a command shell, users can make a virtual environment active
by running an ``activate`` script in the virtual environment's executables
- directory (the precise filename is shell-dependent), which prepends the
- virtual environment's directory for executables to the ``PATH`` environment
- variable for the running shell. There should be no need in other
- circumstances to activate a virtual environment—scripts installed into
- virtual environments have a "shebang" line which points to the virtual
- environment's Python interpreter. This means that the script will run with
- that interpreter regardless of the value of ``PATH``. On Windows, "shebang"
- line processing is supported if you have the Python Launcher for Windows
- installed (this was added to Python in 3.3 - see :pep:`397` for more
- details). Thus, double-clicking an installed script in a Windows Explorer
- window should run the script with the correct interpreter without there
- needing to be any reference to its virtual environment in ``PATH``.
+ directory (the precise filename and command to use the file is
+ shell-dependent), which prepends the virtual environment's directory for
+ executables to the ``PATH`` environment variable for the running shell. There
+ should be no need in other circumstances to activate a virtual
+ environment; scripts installed into virtual environments have a "shebang"
+ line which points to the virtual environment's Python interpreter. This means
+ that the script will run with that interpreter regardless of the value of
+ ``PATH``. On Windows, "shebang" line processing is supported if you have the
+ Python Launcher for Windows installed (this was added to Python in 3.3 - see
+ :pep:`397` for more details). Thus, double-clicking an installed script in a
+ Windows Explorer window should run the script with the correct interpreter
+ without there needing to be any reference to its virtual environment in
+ ``PATH``.
.. _venv-api:
@@ -135,20 +136,20 @@ creation according to their needs, the :class:`EnvBuilder` class.
Added the ``upgrade_deps`` parameter
Creators of third-party virtual environment tools will be free to use the
- provided ``EnvBuilder`` class as a base class.
+ provided :class:`EnvBuilder` class as a base class.
The returned env-builder is an object which has a method, ``create``:
.. method:: create(env_dir)
- This method takes as required argument the path (absolute or relative to
- the current directory) of the target directory which is to contain the
+ Create a virtual environment by specifying the target directory
+ (absolute or relative to the current directory) which is to contain the
virtual environment. The ``create`` method will either create the
environment in the specified directory, or raise an appropriate
exception.
- The ``create`` method of the ``EnvBuilder`` class illustrates the hooks
- available for subclass customization::
+ The ``create`` method of the :class:`EnvBuilder` class illustrates the
+ hooks available for subclass customization::
def create(self, env_dir):
"""
@@ -476,3 +477,7 @@ subclass which installs setuptools and pip into a created virtual environment::
This script is also available for download `online
<https://gist.github.com/vsajip/4673395>`_.
+
+
+.. _setuptools: https://pypi.org/project/setuptools/
+.. _pip: https://pypi.org/project/pip/
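| The patched docs revolve around the `EnvBuilder` API and its `create` method; a hedged usage sketch (ours, not from the docs; the target directory name is a placeholder):

```python
# Creates a virtual environment with pip available, per the documented API.
import venv

builder = venv.EnvBuilder(with_pip=True)
builder.create("demo-env")  # creates the environment in ./demo-env or raises
```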
| https://api.github.com/repos/python/cpython/pulls/14922 | 2019-07-23T20:51:49Z | 2019-07-23T21:34:33Z | 2019-07-23T21:34:33Z | 2019-07-23T21:34:53Z | 1,074 | python/cpython | 4,321 |
|
add: test for proxy.py | diff --git a/test_proxy.py b/test_proxy.py
new file mode 100644
index 00000000..fdf9188a
--- /dev/null
+++ b/test_proxy.py
@@ -0,0 +1,105 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from proxy import Proxy, NoTalkProxy
+from io import StringIO
+import sys
+from time import time
+
+if sys.version_info < (2, 7):
+ import unittest2 as unittest
+else:
+ import unittest
+
+
+class ProxyTest(unittest.TestCase):
+
+ @classmethod
+ def setUpClass(cls):
+ """ Class scope setup. """
+ cls.p = Proxy()
+
+ def setUp(cls):
+ """ Function/test case scope setup. """
+ cls.output = StringIO()
+ cls.saved_stdout = sys.stdout
+ sys.stdout = cls.output
+
+ def tearDown(cls):
+ """ Function/test case scope teardown. """
+ cls.output.close()
+ sys.stdout = cls.saved_stdout
+
+ def test_sales_manager_shall_talk_through_proxy_with_delay(cls):
+ cls.p.busy = 'No'
+ start_time = time()
+ cls.p.talk()
+ end_time = time()
+ execution_time = end_time - start_time
+ print_output = cls.output.getvalue()
+ expected_print_output = 'Proxy checking for Sales Manager availability\n\
+Sales Manager ready to talk\n'
+ cls.assertEqual(print_output, expected_print_output)
+ expected_execution_time = 2
+ cls.assertEqual(int(execution_time), expected_execution_time)
+
+ def test_sales_manager_shall_respond_through_proxy_with_delay(cls):
+ cls.p.busy = 'Yes'
+ start_time = time()
+ cls.p.talk()
+ end_time = time()
+ execution_time = end_time - start_time
+ print_output = cls.output.getvalue()
+ expected_print_output = 'Proxy checking for Sales Manager availability\n\
+Sales Manager is busy\n'
+ cls.assertEqual(print_output, expected_print_output)
+ expected_execution_time = 2
+ cls.assertEqual(int(execution_time), expected_execution_time)
+
+
+class NoTalkProxyTest(unittest.TestCase):
+
+ @classmethod
+ def setUpClass(cls):
+ """ Class scope setup. """
+ cls.ntp = NoTalkProxy()
+
+ def setUp(cls):
+ """ Function/test case scope setup. """
+ cls.output = StringIO()
+ cls.saved_stdout = sys.stdout
+ sys.stdout = cls.output
+
+ def tearDown(cls):
+ """ Function/test case scope teardown. """
+ cls.output.close()
+ sys.stdout = cls.saved_stdout
+
+ def test_sales_manager_shall_not_talk_through_proxy_with_delay(cls):
+ cls.ntp.busy = 'No'
+ start_time = time()
+ cls.ntp.talk()
+ end_time = time()
+ execution_time = end_time - start_time
+ print_output = cls.output.getvalue()
+ expected_print_output = 'Proxy checking for Sales Manager availability\n\
+This Sales Manager will not talk to you whether he/she is busy or not\n'
+ cls.assertEqual(print_output, expected_print_output)
+ expected_execution_time = 2
+ cls.assertEqual(int(execution_time), expected_execution_time)
+
+ def test_sales_manager_shall_not_respond_through_proxy_with_delay(cls):
+ cls.ntp.busy = 'Yes'
+ start_time = time()
+ cls.ntp.talk()
+ end_time = time()
+ execution_time = end_time - start_time
+ print_output = cls.output.getvalue()
+ expected_print_output = 'Proxy checking for Sales Manager availability\n\
+This Sales Manager will not talk to you whether he/she is busy or not\n'
+ cls.assertEqual(print_output, expected_print_output)
+ expected_execution_time = 2
+ cls.assertEqual(int(execution_time), expected_execution_time)
+
+if __name__ == "__main__":
+ unittest.main()
| #138 reminded me of some tests for the current proxy.py that I had on my local machine. They run with Python 3.4.1 on Linux.
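A quick note on the capture technique used throughout the added tests: stdout is swapped for a `StringIO` in `setUp` and restored in `tearDown`. A minimal standalone sketch (ours, not from the PR):

```python
import sys
from io import StringIO

saved, sys.stdout = sys.stdout, StringIO()   # redirect stdout
print("hello")
captured, sys.stdout = sys.stdout.getvalue(), saved  # read and restore
assert captured == "hello\n"
```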
| https://api.github.com/repos/faif/python-patterns/pulls/139 | 2016-04-25T18:33:21Z | 2016-05-21T18:37:31Z | 2016-05-21T18:37:31Z | 2016-05-21T18:37:31Z | 884 | faif/python-patterns | 33,696 |
fix json tool | diff --git a/libs/langchain/langchain/tools/json/tool.py b/libs/langchain/langchain/tools/json/tool.py
index 6f6473d51e6b47..6c75de20ce5cb2 100644
--- a/libs/langchain/langchain/tools/json/tool.py
+++ b/libs/langchain/langchain/tools/json/tool.py
@@ -20,7 +20,7 @@ def _parse_input(text: str) -> List[Union[str, int]]:
"""Parse input of the form data["key1"][0]["key2"] into a list of keys."""
_res = re.findall(r"\[.*?]", text)
# strip the brackets and quotes, convert to int if possible
- res = [i[1:-1].replace('"', "") for i in _res]
+ res = [i[1:-1].replace('"', "").replace("'", "") for i in _res]
res = [int(i) if i.isdigit() else i for i in res]
return res
diff --git a/libs/langchain/tests/unit_tests/tools/test_json.py b/libs/langchain/tests/unit_tests/tools/test_json.py
index 36a96595e03d36..b677b1577d3933 100644
--- a/libs/langchain/tests/unit_tests/tools/test_json.py
+++ b/libs/langchain/tests/unit_tests/tools/test_json.py
@@ -30,6 +30,10 @@ def test_json_spec_value() -> None:
assert spec.value('data["baz"]') == "{'test': {'foo': [1, 2, 3]}}"
assert spec.value('data["baz"]["test"]') == "{'foo': [1, 2, 3]}"
assert spec.value('data["baz"]["test"]["foo"]') == "[1, 2, 3]"
+ assert spec.value("data['foo']") == "bar"
+ assert spec.value("data['baz']") == "{'test': {'foo': [1, 2, 3]}}"
+ assert spec.value("data['baz']['test']") == "{'foo': [1, 2, 3]}"
+ assert spec.value("data['baz']['test']['foo']") == "[1, 2, 3]"
def test_json_spec_value_max_length() -> None:
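| A hedged recreation of the parser this patch fixes (the logic mirrors the diff; the helper name `parse_input` is ours): single-quoted keys like `data['foo']` are now stripped the same as double-quoted ones.

```python
import re

def parse_input(text):
    # Find bracketed segments, strip brackets and both quote styles,
    # and convert purely numeric keys to ints.
    parts = re.findall(r"\[.*?]", text)
    keys = [p[1:-1].replace('"', "").replace("'", "") for p in parts]
    return [int(k) if k.isdigit() else k for k in keys]

print(parse_input("data['baz']['test']['foo'][0]"))  # ['baz', 'test', 'foo', 0]
```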
| https://api.github.com/repos/langchain-ai/langchain/pulls/9096 | 2023-08-11T05:50:06Z | 2023-08-11T06:39:26Z | 2023-08-11T06:39:26Z | 2023-08-11T06:39:26Z | 499 | langchain-ai/langchain | 42,962 |
|
[Bilibili] fix bilibili 4k | diff --git a/src/you_get/extractors/bilibili.py b/src/you_get/extractors/bilibili.py
index 94e5479f65..7ea626f89d 100644
--- a/src/you_get/extractors/bilibili.py
+++ b/src/you_get/extractors/bilibili.py
@@ -62,7 +62,7 @@ def bilibili_headers(referer=None, cookie=None):
@staticmethod
def bilibili_api(avid, cid, qn=0):
- return 'https://api.bilibili.com/x/player/playurl?avid=%s&cid=%s&qn=%s&type=&otype=json&fnver=0&fnval=16' % (avid, cid, qn)
+ return 'https://api.bilibili.com/x/player/playurl?avid=%s&cid=%s&qn=%s&type=&otype=json&fnver=0&fnval=16&fourk=1' % (avid, cid, qn)
@staticmethod
def bilibili_audio_api(sid):
| Bilibili's playurl API has changed: the 4K playback URL is only returned when the parameter fourk=1 is added; otherwise it is omitted.
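A hedged sketch of the changed URL construction (mirrors the diff; the avid/cid values below are placeholders, and qn=120 is commonly the 4K quality code):

```python
def bilibili_api(avid, cid, qn=0):
    # fourk=1 is the new parameter; without it the API omits the 4K URL.
    return ('https://api.bilibili.com/x/player/playurl'
            '?avid=%s&cid=%s&qn=%s&type=&otype=json&fnver=0&fnval=16&fourk=1'
            % (avid, cid, qn))

print(bilibili_api(170001, 279786, 120))
```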
We should add fourk=1 to the URL params or the extractor will not get the 4K URL. | https://api.github.com/repos/soimort/you-get/pulls/2827 | 2020-09-07T06:34:16Z | 2020-10-06T13:22:24Z | 2020-10-06T13:22:24Z | 2020-10-06T13:22:29Z | 238 | soimort/you-get | 21,402
Fix `test_load_img_url_timeout` | diff --git a/tests/utils/test_image_utils.py b/tests/utils/test_image_utils.py
index 1813c2a21f254..5d899c2f1ddf7 100644
--- a/tests/utils/test_image_utils.py
+++ b/tests/utils/test_image_utils.py
@@ -21,7 +21,7 @@
import numpy as np
import pytest
from huggingface_hub.file_download import http_get
-from requests import ReadTimeout
+from requests import ConnectTimeout, ReadTimeout
from tests.pipelines.test_pipelines_document_question_answering import INVOICE_URL
from transformers import is_torch_available, is_vision_available
@@ -491,7 +491,7 @@ def test_load_img_url(self):
@is_flaky()
def test_load_img_url_timeout(self):
- with self.assertRaises(ReadTimeout):
+ with self.assertRaises((ReadTimeout, ConnectTimeout)):
load_image(INVOICE_URL, timeout=0.001)
def test_load_img_local(self):
| # What does this PR do?
#25184 added a timeout parameter to some functions, along with a test. However, the exception raised in that test is `ConnectTimeout` on the daily CI, while it is `ReadTimeout` on CircleCI.
I haven't looked into why there is such a difference. But this PR widens the expected value to `(ReadTimeout, ConnectTimeout)` so the test added in #25184 won't fail.
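A minimal sketch of the mechanism (assumed test shape, not the actual suite): `assertRaises` accepts a tuple of exception types, so either timeout satisfies the test.

```python
import unittest
from requests import ConnectTimeout, ReadTimeout

class TimeoutTest(unittest.TestCase):
    def test_either_timeout(self):
        with self.assertRaises((ReadTimeout, ConnectTimeout)):
            raise ConnectTimeout("simulated")  # stand-in for load_image(url, timeout=0.001)

if __name__ == "__main__":
    unittest.main()
```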
(let me know if you think we should dive into this) | https://api.github.com/repos/huggingface/transformers/pulls/25976 | 2023-09-05T07:08:01Z | 2023-09-05T09:34:28Z | 2023-09-05T09:34:28Z | 2023-09-05T09:35:12Z | 212 | huggingface/transformers | 12,793 |
Fix Amazon Linux bootstrapping error. | diff --git a/letsencrypt-auto b/letsencrypt-auto
index b3e380f9d6c..a3009fe52a2 100755
--- a/letsencrypt-auto
+++ b/letsencrypt-auto
@@ -85,6 +85,8 @@ ExperimentalBootstrap() {
DeterminePythonVersion() {
if command -v python2.7 > /dev/null ; then
export LE_PYTHON=${LE_PYTHON:-python2.7}
+ elif command -v python27 > /dev/null ; then
+ export LE_PYTHON=${LE_PYTHON:-python27}
elif command -v python2 > /dev/null ; then
export LE_PYTHON=${LE_PYTHON:-python2}
elif command -v python > /dev/null ; then
@@ -135,7 +137,7 @@ then
elif uname | grep -iq Darwin ; then
ExperimentalBootstrap "Mac OS X" mac.sh
elif grep -iq "Amazon Linux" /etc/issue ; then
- ExperimentalBootstrap "Amazon Linux" amazon_linux.sh
+ ExperimentalBootstrap "Amazon Linux" _rpm_common.sh
else
echo "Sorry, I don't know how to bootstrap Let's Encrypt on your operating system!"
echo
| https://api.github.com/repos/certbot/certbot/pulls/1516 | 2015-11-16T08:26:44Z | 2015-11-16T20:19:55Z | 2015-11-16T20:19:55Z | 2016-05-06T19:22:06Z | 271 | certbot/certbot | 565 |
|
[workflow] changed to doc build to be on schedule and release | diff --git a/.github/workflows/doc_build_after_merge.yml b/.github/workflows/doc_build_on_schedule_after_release.yml
similarity index 69%
rename from .github/workflows/doc_build_after_merge.yml
rename to .github/workflows/doc_build_on_schedule_after_release.yml
index b6fd57b8d2b4..62dfdc67257c 100644
--- a/.github/workflows/doc_build_after_merge.yml
+++ b/.github/workflows/doc_build_on_schedule_after_release.yml
@@ -1,18 +1,16 @@
-name: Build Documentation After Merge
+name: Build Documentation On Schedule & After Release
on:
workflow_dispatch:
- push:
- paths:
- - "version.txt"
- - "docs/**"
- branches:
- - "main"
+ schedule:
+ - cron: "0 12 * * *" # build doc every day at 8pm Singapore time (12pm UTC time)
+ release:
+ types: [published]
jobs:
build-doc:
name: Trigger Documentation Build Workflow
- if: ( github.event_name == 'workflow_dispatch' || github.event.pull_request.merged == true ) && github.repository == 'hpcaitech/ColossalAI'
+ if: github.repository == 'hpcaitech/ColossalAI'
runs-on: ubuntu-latest
steps:
- name: trigger workflow in ColossalAI-Documentation
| ## 📌 Checklist before creating the PR
- [x] I have created an issue for this PR for traceability
- [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [x] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
Fixed #3814
## 📝 What does this PR do?
This PR is a continuation of #3815; it changes the workflow to be triggered on a schedule (8pm Beijing time every day) and upon release. This is because secrets are not passed to pull-request workflows from a forked repository, and there is currently no way to get around that.
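A quick sanity check of the schedule (our arithmetic, not part of the PR; the date is arbitrary):

```python
from datetime import datetime, timedelta, timezone

utc_run = datetime(2023, 5, 24, 12, 0, tzinfo=timezone.utc)  # cron "0 12 * * *"
local = utc_run.astimezone(timezone(timedelta(hours=8)))     # UTC+8
print(local.strftime("%H:%M"))  # 20:00, i.e. 8pm Singapore/Beijing time
```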
## 💥 Checklist before requesting a review
- [x] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [x] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [x] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [x] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/3825 | 2023-05-24T02:11:52Z | 2023-05-24T02:50:19Z | 2023-05-24T02:50:19Z | 2023-05-24T02:50:19Z | 308 | hpcaitech/ColossalAI | 11,136 |
scrapy parse: fix the signature of callbacks from the CLI | diff --git a/scrapy/commands/parse.py b/scrapy/commands/parse.py
index ac937e46495..c9f8586d3d8 100644
--- a/scrapy/commands/parse.py
+++ b/scrapy/commands/parse.py
@@ -1,3 +1,4 @@
+import functools
import inspect
import json
import logging
@@ -251,39 +252,40 @@ def scraped_data(self, args):
return scraped_data
+ def _get_callback(self, *, spider, opts, response=None):
+ cb = None
+ if response:
+ cb = response.meta["_callback"]
+ if not cb:
+ if opts.callback:
+ cb = opts.callback
+ elif response and opts.rules and self.first_response == response:
+ cb = self.get_callback_from_rules(spider, response)
+ if not cb:
+ raise ValueError(
+ f"Cannot find a rule that matches {response.url!r} in spider: "
+ f"{spider.name}"
+ )
+ else:
+ cb = "parse"
+
+ if not callable(cb):
+ cb_method = getattr(spider, cb, None)
+ if callable(cb_method):
+ cb = cb_method
+ else:
+ raise ValueError(
+ f"Cannot find callback {cb!r} in spider: {spider.name}"
+ )
+ return cb
+
def prepare_request(self, spider, request, opts):
def callback(response, **cb_kwargs):
# memorize first request
if not self.first_response:
self.first_response = response
- # determine real callback
- cb = response.meta["_callback"]
- if not cb:
- if opts.callback:
- cb = opts.callback
- elif opts.rules and self.first_response == response:
- cb = self.get_callback_from_rules(spider, response)
-
- if not cb:
- logger.error(
- "Cannot find a rule that matches %(url)r in spider: %(spider)s",
- {"url": response.url, "spider": spider.name},
- )
- return
- else:
- cb = "parse"
-
- if not callable(cb):
- cb_method = getattr(spider, cb, None)
- if callable(cb_method):
- cb = cb_method
- else:
- logger.error(
- "Cannot find callback %(callback)r in spider: %(spider)s",
- {"callback": cb, "spider": spider.name},
- )
- return
+ cb = self._get_callback(spider=spider, opts=opts, response=response)
# parse items and requests
depth = response.meta["_depth"]
@@ -303,6 +305,9 @@ def callback(response, **cb_kwargs):
request.meta["_depth"] = 1
request.meta["_callback"] = request.callback
+ if not request.callback and not opts.rules:
+ cb = self._get_callback(spider=spider, opts=opts)
+ functools.update_wrapper(callback, cb)
request.callback = callback
return request
diff --git a/tests/test_command_check.py b/tests/test_command_check.py
index 129ef01215a..592494aba6e 100644
--- a/tests/test_command_check.py
+++ b/tests/test_command_check.py
@@ -16,11 +16,11 @@ def _write_contract(self, contracts, parse_def):
class CheckSpider(scrapy.Spider):
name = '{self.spider_name}'
- start_urls = ['http://toscrape.com']
+ start_urls = ['data:,']
def parse(self, response, **cb_kwargs):
\"\"\"
- @url http://toscrape.com
+ @url data:,
{contracts}
\"\"\"
{parse_def}
diff --git a/tests/test_command_parse.py b/tests/test_command_parse.py
index 037333c03af..9356d6b79b0 100644
--- a/tests/test_command_parse.py
+++ b/tests/test_command_parse.py
@@ -78,9 +78,21 @@ async def parse(self, response):
if i > 5:
raise ValueError("Stopping the processing")
+class CallbackSignatureDownloaderMiddleware:
+ def process_request(self, request, spider):
+ from inspect import signature
+ spider.logger.debug(f"request.callback signature: {{signature(request.callback)}}")
+
+
class MySpider(scrapy.Spider):
name = '{self.spider_name}'
+ custom_settings = {{
+ "DOWNLOADER_MIDDLEWARES": {{
+ CallbackSignatureDownloaderMiddleware: 0,
+ }}
+ }}
+
def parse(self, response):
if getattr(self, 'test_arg', None):
self.logger.debug('It Works!')
@@ -220,7 +232,11 @@ def test_request_with_cb_kwargs(self):
self.url("/html"),
]
)
- self.assertIn("DEBUG: It Works!", _textmode(stderr))
+ log = _textmode(stderr)
+ self.assertIn("DEBUG: It Works!", log)
+ self.assertIn(
+ "DEBUG: request.callback signature: (response, foo=None, key=None)", log
+ )
@defer.inlineCallbacks
def test_request_without_meta(self):
| Fixes https://github.com/scrapinghub/scrapy-poet/issues/39.
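The core of the fix is `functools.update_wrapper`, which copies the real callback's metadata (including `__wrapped__`) onto the internal wrapper, so `inspect.signature(request.callback)` reports the original signature instead of `(response, **cb_kwargs)`. A hedged illustration (ours, not from the PR):

```python
import functools
import inspect

def parse(response, foo=None, key=None):  # the spider's real callback
    ...

def callback(response, **cb_kwargs):      # the CLI's internal wrapper
    ...

functools.update_wrapper(callback, parse)
print(inspect.signature(callback))        # (response, foo=None, key=None)
```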
It feels kind of hacky, especially since it will not work for "rules", but I see no way around that given rules depend on the response. | https://api.github.com/repos/scrapy/scrapy/pulls/6182 | 2023-12-20T11:02:38Z | 2024-01-15T12:37:04Z | 2024-01-15T12:37:04Z | 2024-01-15T12:37:04Z | 1,161 | scrapy/scrapy | 34,350
replay: fix hanging on shutdown while downloading.
index a7aa4a28e63185..6d2eff18ae5b64 100644
--- a/selfdrive/ui/replay/route.cc
+++ b/selfdrive/ui/replay/route.cc
@@ -4,7 +4,6 @@
#include <QJsonArray>
#include <QJsonDocument>
#include <QRegExp>
-#include <QtConcurrent>
#include "selfdrive/hardware/hw.h"
#include "selfdrive/ui/qt/api.h"
@@ -108,15 +107,17 @@ Segment::Segment(int n, const SegmentFile &files, bool load_dcam, bool load_ecam
for (int i = 0; i < std::size(file_list); i++) {
if (!file_list[i].isEmpty()) {
loading_++;
- synchronizer_.addFuture(QtConcurrent::run(this, &Segment::loadFile, i, file_list[i].toStdString()));
+ loading_threads_.emplace_back(QThread::create(&Segment::loadFile, this, i, file_list[i].toStdString()))->start();
}
}
}
Segment::~Segment() {
aborting_ = true;
- synchronizer_.setCancelOnWait(true);
- synchronizer_.waitForFinished();
+ for (QThread *t : loading_threads_) {
+ if (t->isRunning()) t->wait();
+ delete t;
+ }
}
void Segment::loadFile(int id, const std::string file) {
diff --git a/selfdrive/ui/replay/route.h b/selfdrive/ui/replay/route.h
index c4ab0cd2a37b02..80d275f6d3570f 100644
--- a/selfdrive/ui/replay/route.h
+++ b/selfdrive/ui/replay/route.h
@@ -1,7 +1,7 @@
#pragma once
#include <QDir>
-#include <QFutureSynchronizer>
+#include <QThread>
#include "selfdrive/common/util.h"
#include "selfdrive/ui/replay/framereader.h"
@@ -57,5 +57,5 @@ class Segment : public QObject {
std::atomic<bool> success_ = true, aborting_ = false;
std::atomic<int> loading_ = 0;
- QFutureSynchronizer<void> synchronizer_;
+ std::list<QThread*> loading_threads_;
};
| QFuture/QtConcurrent will block the qApp->exit() until it finishes executing. `aborting_ = true` in` Segment::~Segment() `will not be called before downloading finished. | https://api.github.com/repos/commaai/openpilot/pulls/22592 | 2021-10-17T19:09:26Z | 2021-10-18T09:03:31Z | 2021-10-18T09:03:31Z | 2021-10-18T09:49:22Z | 542 | commaai/openpilot | 9,498 |
[screenwavemedia] remove | diff --git a/youtube_dl/extractor/extractors.py b/youtube_dl/extractor/extractors.py
index 578359a5e2b..5723ace8ecf 100644
--- a/youtube_dl/extractor/extractors.py
+++ b/youtube_dl/extractor/extractors.py
@@ -804,7 +804,6 @@
from .screencast import ScreencastIE
from .screencastomatic import ScreencastOMaticIE
from .screenjunkies import ScreenJunkiesIE
-from .screenwavemedia import ScreenwaveMediaIE, TeamFourIE
from .seeker import SeekerIE
from .senateisvp import SenateISVPIE
from .sendtonews import SendtoNewsIE
@@ -897,6 +896,7 @@
)
from .teachingchannel import TeachingChannelIE
from .teamcoco import TeamcocoIE
+from .teamfourstar import TeamFourStarIE
from .techtalks import TechTalksIE
from .ted import TEDIE
from .tele13 import Tele13IE
diff --git a/youtube_dl/extractor/generic.py b/youtube_dl/extractor/generic.py
index bde65fa270f..63e1962841d 100644
--- a/youtube_dl/extractor/generic.py
+++ b/youtube_dl/extractor/generic.py
@@ -56,7 +56,6 @@
)
from .onionstudios import OnionStudiosIE
from .viewlift import ViewLiftEmbedIE
-from .screenwavemedia import ScreenwaveMediaIE
from .mtv import MTVServicesEmbeddedIE
from .pladform import PladformIE
from .videomore import VideomoreIE
@@ -1189,16 +1188,6 @@ class GenericIE(InfoExtractor):
'duration': 248.667,
},
},
- # ScreenwaveMedia embed
- {
- 'url': 'http://www.thecinemasnob.com/the-cinema-snob/a-nightmare-on-elm-street-2-freddys-revenge1',
- 'md5': '24ace5baba0d35d55c6810b51f34e9e0',
- 'info_dict': {
- 'id': 'cinemasnob-55d26273809dd',
- 'ext': 'mp4',
- 'title': 'cinemasnob',
- },
- },
# BrightcoveInPageEmbed embed
{
'url': 'http://www.geekandsundry.com/tabletop-bonus-wils-final-thoughts-on-dread/',
@@ -2206,11 +2195,6 @@ def _playlist_from_matches(matches, getter=None, ie=None):
if jwplatform_url:
return self.url_result(jwplatform_url, 'JWPlatform')
- # Look for ScreenwaveMedia embeds
- mobj = re.search(ScreenwaveMediaIE.EMBED_PATTERN, webpage)
- if mobj is not None:
- return self.url_result(unescapeHTML(mobj.group('url')), 'ScreenwaveMedia')
-
# Look for Digiteka embeds
digiteka_url = DigitekaIE._extract_url(webpage)
if digiteka_url:
diff --git a/youtube_dl/extractor/normalboots.py b/youtube_dl/extractor/normalboots.py
index 6aa0895b82e..61fe571dfea 100644
--- a/youtube_dl/extractor/normalboots.py
+++ b/youtube_dl/extractor/normalboots.py
@@ -2,7 +2,7 @@
from __future__ import unicode_literals
from .common import InfoExtractor
-from .screenwavemedia import ScreenwaveMediaIE
+from .jwplatform import JWPlatformIE
from ..utils import (
unified_strdate,
@@ -25,7 +25,7 @@ class NormalbootsIE(InfoExtractor):
# m3u8 download
'skip_download': True,
},
- 'add_ie': ['ScreenwaveMedia'],
+ 'add_ie': ['JWPlatform'],
}
def _real_extract(self, url):
@@ -39,15 +39,13 @@ def _real_extract(self, url):
r'<span style="text-transform:uppercase; font-size:inherit;">[A-Za-z]+, (?P<date>.*)</span>',
webpage, 'date', fatal=False))
- screenwavemedia_url = self._html_search_regex(
- ScreenwaveMediaIE.EMBED_PATTERN, webpage, 'screenwave URL',
- group='url')
+ jwplatform_url = JWPlatformIE._extract_url(webpage)
return {
'_type': 'url_transparent',
'id': video_id,
- 'url': screenwavemedia_url,
- 'ie_key': ScreenwaveMediaIE.ie_key(),
+ 'url': jwplatform_url,
+ 'ie_key': JWPlatformIE.ie_key(),
'title': self._og_search_title(webpage),
'description': self._og_search_description(webpage),
'thumbnail': self._og_search_thumbnail(webpage),
diff --git a/youtube_dl/extractor/screenwavemedia.py b/youtube_dl/extractor/screenwavemedia.py
deleted file mode 100644
index 7d77e8825d7..00000000000
--- a/youtube_dl/extractor/screenwavemedia.py
+++ /dev/null
@@ -1,146 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import re
-
-from .common import InfoExtractor
-from ..utils import (
- int_or_none,
- unified_strdate,
- js_to_json,
-)
-
-
-class ScreenwaveMediaIE(InfoExtractor):
- _VALID_URL = r'(?:https?:)?//player\d?\.screenwavemedia\.com/(?:play/)?[a-zA-Z]+\.php\?.*\bid=(?P<id>[A-Za-z0-9-]+)'
- EMBED_PATTERN = r'src=(["\'])(?P<url>(?:https?:)?//player\d?\.screenwavemedia\.com/(?:play/)?[a-zA-Z]+\.php\?.*\bid=.+?)\1'
- _TESTS = [{
- 'url': 'http://player.screenwavemedia.com/play/play.php?playerdiv=videoarea&companiondiv=squareAd&id=Cinemassacre-19911',
- 'only_matching': True,
- }]
-
- def _real_extract(self, url):
- video_id = self._match_id(url)
-
- playerdata = self._download_webpage(
- 'http://player.screenwavemedia.com/player.php?id=%s' % video_id,
- video_id, 'Downloading player webpage')
-
- vidtitle = self._search_regex(
- r'\'vidtitle\'\s*:\s*"([^"]+)"', playerdata, 'vidtitle').replace('\\/', '/')
-
- playerconfig = self._download_webpage(
- 'http://player.screenwavemedia.com/player.js',
- video_id, 'Downloading playerconfig webpage')
-
- videoserver = self._search_regex(r'SWMServer\s*=\s*"([\d\.]+)"', playerdata, 'videoserver')
-
- sources = self._parse_json(
- js_to_json(
- re.sub(
- r'(?s)/\*.*?\*/', '',
- self._search_regex(
- r'sources\s*:\s*(\[[^\]]+?\])', playerconfig,
- 'sources',
- ).replace(
- "' + thisObj.options.videoserver + '",
- videoserver
- ).replace(
- "' + playerVidId + '",
- video_id
- )
- )
- ),
- video_id, fatal=False
- )
-
- # Fallback to hardcoded sources if JS changes again
- if not sources:
- self.report_warning('Falling back to a hardcoded list of streams')
- sources = [{
- 'file': 'http://%s/vod/%s_%s.mp4' % (videoserver, video_id, format_id),
- 'type': 'mp4',
- 'label': format_label,
- } for format_id, format_label in (
- ('low', '144p Low'), ('med', '160p Med'), ('high', '360p High'), ('hd1', '720p HD1'))]
- sources.append({
- 'file': 'http://%s/vod/smil:%s.smil/playlist.m3u8' % (videoserver, video_id),
- 'type': 'hls',
- })
-
- formats = []
- for source in sources:
- file_ = source.get('file')
- if not file_:
- continue
- if source.get('type') == 'hls':
- formats.extend(self._extract_m3u8_formats(file_, video_id, ext='mp4'))
- else:
- format_id = self._search_regex(
- r'_(.+?)\.[^.]+$', file_, 'format id', default=None)
- if not self._is_valid_url(file_, video_id, format_id or 'video'):
- continue
- format_label = source.get('label')
- height = int_or_none(self._search_regex(
- r'^(\d+)[pP]', format_label, 'height', default=None))
- formats.append({
- 'url': file_,
- 'format_id': format_id,
- 'format': format_label,
- 'ext': source.get('type'),
- 'height': height,
- })
- self._sort_formats(formats, field_preference=('height', 'width', 'tbr', 'format_id'))
-
- return {
- 'id': video_id,
- 'title': vidtitle,
- 'formats': formats,
- }
-
-
-class TeamFourIE(InfoExtractor):
- _VALID_URL = r'https?://(?:www\.)?teamfourstar\.com/video/(?P<id>[a-z0-9\-]+)/?'
- _TEST = {
- 'url': 'http://teamfourstar.com/video/a-moment-with-tfs-episode-4/',
- 'info_dict': {
- 'id': 'TeamFourStar-5292a02f20bfa',
- 'ext': 'mp4',
- 'upload_date': '20130401',
- 'description': 'Check out this and more on our website: http://teamfourstar.com\nTFS Store: http://sharkrobot.com/team-four-star\nFollow on Twitter: http://twitter.com/teamfourstar\nLike on FB: http://facebook.com/teamfourstar',
- 'title': 'A Moment With TFS Episode 4',
- },
- 'params': {
- # m3u8 download
- 'skip_download': True,
- },
- }
-
- def _real_extract(self, url):
- display_id = self._match_id(url)
- webpage = self._download_webpage(url, display_id)
-
- playerdata_url = self._search_regex(
- r'src="(http://player\d?\.screenwavemedia\.com/(?:play/)?[a-zA-Z]+\.php\?[^"]*\bid=.+?)"',
- webpage, 'player data URL')
-
- video_title = self._html_search_regex(
- r'<div class="heroheadingtitle">(?P<title>.+?)</div>',
- webpage, 'title')
- video_date = unified_strdate(self._html_search_regex(
- r'<div class="heroheadingdate">(?P<date>.+?)</div>',
- webpage, 'date', fatal=False))
- video_description = self._html_search_regex(
- r'(?s)<div class="postcontent">(?P<description>.+?)</div>',
- webpage, 'description', fatal=False)
- video_thumbnail = self._og_search_thumbnail(webpage)
-
- return {
- '_type': 'url_transparent',
- 'display_id': display_id,
- 'title': video_title,
- 'description': video_description,
- 'upload_date': video_date,
- 'thumbnail': video_thumbnail,
- 'url': playerdata_url,
- }
diff --git a/youtube_dl/extractor/teamfourstar.py b/youtube_dl/extractor/teamfourstar.py
new file mode 100644
index 00000000000..a4db2ca9823
--- /dev/null
+++ b/youtube_dl/extractor/teamfourstar.py
@@ -0,0 +1,48 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from .jwplatform import JWPlatformIE
+from ..utils import unified_strdate
+
+
+class TeamFourStarIE(InfoExtractor):
+ _VALID_URL = r'https?://(?:www\.)?teamfourstar\.com/(?P<id>[a-z0-9\-]+)/?'
+ _TEST = {
+ 'url': 'http://teamfourstar.com/tfs-abridged-parody-episode-1-2/',
+ 'info_dict': {
+ 'id': '0WdZO31W',
+ 'title': 'TFS Abridged Parody Episode 1',
+ 'description': 'Episode 1: The Return of Raditz! … Wait…\nCast\nMasakoX – Goku, Roshi\nLanipator – Piccolo, Radditz, Krillin, Vegeta\nVegeta3986 – Radditz, Yamcha, Oolong, Gohan\nHbi2k – Farmer with Shotgun\nMegami33 – Bulma, Puar\nTakahata101 – Nappa\nKaiserNeko – SpacePod\nSongs\nMorgenstemning by Edvard Hagerup Grieg\nCha-La-Head-Cha-La by Kageyama Hiranobu\nWE DO NOT OWN DRAGONBALL. DragonBall is Owned by TOEI ANIMATION, Ltd. and Licensed by FUNimation Productions, Ltd.. All Rights Reserved. DragonBall, DragonBall Z, DragonBall GT and all logos, character names and distinctive likenesses thereof are trademarks of TOEI ANIMATION, Ltd.\nThis is nothing more than a Parody made for entertainment purposes only.',
+ 'ext': 'mp4',
+ 'timestamp': 1394168400,
+ 'upload_date': '20080508',
+ },
+ }
+
+ def _real_extract(self, url):
+ display_id = self._match_id(url)
+ webpage = self._download_webpage(url, display_id)
+
+ jwplatform_url = JWPlatformIE._extract_url(webpage)
+
+ video_title = self._html_search_regex(
+ r'<h1 class="entry-title">(?P<title>.+?)</h1>',
+ webpage, 'title')
+ video_date = unified_strdate(self._html_search_regex(
+ r'<span class="meta-date date updated">(?P<date>.+?)</span>',
+ webpage, 'date', fatal=False))
+ video_description = self._html_search_regex(
+ r'(?s)<div class="content-inner">.*?(?P<description><p>.+?)</div>',
+ webpage, 'description', fatal=False)
+ video_thumbnail = self._og_search_thumbnail(webpage)
+
+ return {
+ '_type': 'url_transparent',
+ 'display_id': display_id,
+ 'title': video_title,
+ 'description': video_description,
+ 'upload_date': video_date,
+ 'thumbnail': video_thumbnail,
+ 'url': jwplatform_url,
+ }
| Screenwave Media no longer offers video streaming services (the video player URLs return a CloudFlare "DNS resolution error"). This PR removes the relevant extractor.
The two extractors that depended on ScreenwaveMediaIE were converted to depend on JWPlatformIE. TeamFourIE was renamed TeamFourStarIE, moved into its own file, and updated. | https://api.github.com/repos/ytdl-org/youtube-dl/pulls/11184 | 2016-11-13T15:18:13Z | 2016-11-28T16:17:56Z | 2016-11-28T16:17:56Z | 2016-11-29T12:27:57Z | 3,515 | ytdl-org/youtube-dl | 50,300 |
Fix inconsistent NMS IoU value for COCO | diff --git a/train.py b/train.py
index 5a434773eff..e58d7c4f034 100644
--- a/train.py
+++ b/train.py
@@ -457,8 +457,6 @@ def train(hyp, # path/to/hyp.yaml or hyp dictionary
results, _, _ = test.run(data_dict,
batch_size=batch_size // WORLD_SIZE * 2,
imgsz=imgsz_test,
- conf_thres=0.001,
- iou_thres=0.7,
model=attempt_load(m, device).half(),
single_cls=single_cls,
dataloader=testloader,
| Evaluation of 'best' and 'last' models will use the same params as the evaluation during the training phase.
This PR fixes https://github.com/ultralytics/yolov5/issues/3907
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>
### 🌟 Summary
Enhanced default testing configuration in YOLOv5 training script 🛠️.
### 📊 Key Changes
- Removed hardcoded confidence threshold (`conf_thres=0.001`) during testing.
- Removed hardcoded Intersection over Union (IoU) threshold (`iou_thres=0.7`) during testing.
### 🎯 Purpose & Impact
- 🎨 **Customizability**: Users can rely on default settings defined elsewhere or specify their own thresholds, leading to more flexible and customized model evaluations.
- 📈 **Generalization**: Eliminating hardcoded values can potentially lead to more accurate performance assessments across different datasets and use cases.
- 💡 **Transparency**: Allows users to have clearer insights into how their models perform with default parameters, fostering a better understanding and trust in the model's evaluation metrics. | https://api.github.com/repos/ultralytics/yolov5/pulls/3934 | 2021-07-08T11:39:34Z | 2021-07-08T13:29:03Z | 2021-07-08T13:29:02Z | 2024-01-19T17:05:57Z | 143 | ultralytics/yolov5 | 25,706 |
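The mechanics of the change can be illustrated with a toy function; this is not yolov5 code, and the default values below are assumptions for the example rather than yolov5's actual defaults:

```python
def evaluate(name, conf_thres=0.001, iou_thres=0.6):
    # Stand-in for test.run(); only the keyword-argument handling matters here.
    return f"{name}: conf_thres={conf_thres}, iou_thres={iou_thres}"

print(evaluate("epoch"))                 # in-training eval relies on the defaults
print(evaluate("final", iou_thres=0.7))  # before the fix: overridden, metrics differ
print(evaluate("final"))                 # after the fix: same defaults, comparable mAP
```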
Improve current word selection | diff --git a/web/extensions/core/editAttention.js b/web/extensions/core/editAttention.js
index bebc80b122..cc51a04e51 100644
--- a/web/extensions/core/editAttention.js
+++ b/web/extensions/core/editAttention.js
@@ -89,24 +89,17 @@ app.registerExtension({
end = nearestEnclosure.end;
selectedText = inputField.value.substring(start, end);
} else {
- // Select the current word, find the start and end of the word (first space before and after)
- const wordStart = inputField.value.substring(0, start).lastIndexOf(" ") + 1;
- const wordEnd = inputField.value.substring(end).indexOf(" ");
- // If there is no space after the word, select to the end of the string
- if (wordEnd === -1) {
- end = inputField.value.length;
- } else {
- end += wordEnd;
+ // Select the current word, find the start and end of the word
+ const delimiters = " .,\\/!?%^*;:{}=-_`~()\r\n\t";
+
+ while (!delimiters.includes(inputField.value[start - 1]) && start > 0) {
+ start--;
}
- start = wordStart;
-
- // Remove all punctuation at the end and beginning of the word
- while (inputField.value[start].match(/[.,\/#!$%\^&\*;:{}=\-_`~()]/)) {
- start++;
- }
- while (inputField.value[end - 1].match(/[.,\/#!$%\^&\*;:{}=\-_`~()]/)) {
- end--;
+
+ while (!delimiters.includes(inputField.value[end]) && end < inputField.value.length) {
+ end++;
}
+
selectedText = inputField.value.substring(start, end);
if (!selectedText) return;
}
| It currently assumes that words are always separated by spaces, but that's not necessarily the case. For instance, I'm sure you've seen prompts with `commas,but not spaces,like this`, in which case `commas,but` and `spaces,like` are each treated as one word.
This way simply seeks forwards and backwards until it finds a delimiter. | https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/543 | 2023-04-21T00:42:04Z | 2023-04-21T04:05:31Z | 2023-04-21T04:05:31Z | 2023-04-21T05:19:42Z | 424 | comfyanonymous/ComfyUI | 17,859 |
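To make the boundary behaviour easy to test outside the browser, here is a Python rendering of the same delimiter-seeking logic; the function name and sample prompt are illustrative, while the delimiter set matches the one in the JS diff:

```python
DELIMITERS = " .,\\/!?%^*;:{}=-_`~()\r\n\t"  # same set as in the JS diff

def select_word(text: str, start: int, end: int) -> str:
    # Walk left while the previous character is not a delimiter.
    while start > 0 and text[start - 1] not in DELIMITERS:
        start -= 1
    # Walk right while the current character is not a delimiter.
    while end < len(text) and text[end] not in DELIMITERS:
        end += 1
    return text[start:end]

assert select_word("commas,but not spaces,like this", 7, 7) == "but"
```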
Bump llama-index-core from 0.10.12 to 0.10.24 in /llama-index-integrations/llms/llama-index-llms-friendli | diff --git a/llama-index-integrations/llms/llama-index-llms-friendli/poetry.lock b/llama-index-integrations/llms/llama-index-llms-friendli/poetry.lock
index bc69700e34256..bdb6af63c50b1 100644
--- a/llama-index-integrations/llms/llama-index-llms-friendli/poetry.lock
+++ b/llama-index-integrations/llms/llama-index-llms-friendli/poetry.lock
@@ -2537,13 +2537,13 @@ files = [
[[package]]
name = "llama-index-core"
-version = "0.10.12"
+version = "0.10.24"
description = "Interface between LLMs and your data"
optional = false
-python-versions = ">=3.8.1,<4.0"
+python-versions = "<4.0,>=3.8.1"
files = [
- {file = "llama_index_core-0.10.12-py3-none-any.whl", hash = "sha256:47663cc3282684e6b7f06e905d98382aa3dbec5191ab72c239b4f19e0b08c041"},
- {file = "llama_index_core-0.10.12.tar.gz", hash = "sha256:071e3a9ab2071c900657149cabf39199818e7244d16ef5cc096e5c0bff8174f4"},
+ {file = "llama_index_core-0.10.24-py3-none-any.whl", hash = "sha256:c4b979160e813f2f41b6aeaa243293b5297f83aed1c3654d2673aa881551d479"},
+ {file = "llama_index_core-0.10.24.tar.gz", hash = "sha256:0bb28871a44d697a06df8668602828c9132ffe3289b4a598f8928b887ac70901"},
]
[package.dependencies]
@@ -2575,7 +2575,7 @@ gradientai = ["gradientai (>=1.4.0)"]
html = ["beautifulsoup4 (>=4.12.2,<5.0.0)"]
langchain = ["langchain (>=0.0.303)"]
local-models = ["optimum[onnxruntime] (>=1.13.2,<2.0.0)", "sentencepiece (>=0.1.99,<0.2.0)", "transformers[torch] (>=4.33.1,<5.0.0)"]
-postgres = ["asyncpg (>=0.28.0,<0.29.0)", "pgvector (>=0.1.0,<0.2.0)", "psycopg2-binary (>=2.9.9,<3.0.0)"]
+postgres = ["asyncpg (>=0.29.0,<0.30.0)", "pgvector (>=0.2.4,<0.3.0)", "psycopg2-binary (>=2.9.9,<3.0.0)"]
query-tools = ["guidance (>=0.0.64,<0.0.65)", "jsonpath-ng (>=1.6.0,<2.0.0)", "lm-format-enforcer (>=0.4.3,<0.5.0)", "rank-bm25 (>=0.2.2,<0.3.0)", "scikit-learn", "spacy (>=3.7.1,<4.0.0)"]
[[package]]
| Bumps [llama-index-core](https://github.com/run-llama/llama_index) from 0.10.12 to 0.10.24.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/run-llama/llama_index/releases">llama-index-core's releases</a>.</em></p>
<blockquote>
<h2>v0.10.24</h2>
<p>No release notes provided.</p>
<h2>v0.10.23</h2>
<p>No release notes provided.</p>
<h2>v0.10.22</h2>
<p>No release notes provided.</p>
<h2>v0.10.20</h2>
<p>No release notes provided.</p>
<h2>v0.10.19</h2>
<h3><code>llama-index-cli</code> [0.1.9]</h3>
<ul>
<li>Removed chroma as a bundled dep to reduce <code>llama-index</code> deps</li>
</ul>
<h3><code>llama-index-core</code> [0.10.19]</h3>
<ul>
<li>Introduce retries for rate limits in <code>OpenAI</code> llm class (<a href="https://redirect.github.com/run-llama/llama_index/issues/11867">#11867</a>)</li>
<li>Added table comments to SQL table schemas in <code>SQLDatabase</code> (<a href="https://redirect.github.com/run-llama/llama_index/issues/11774">#11774</a>)</li>
<li>Added <code>LogProb</code> type to <code>ChatResponse</code> object (<a href="https://redirect.github.com/run-llama/llama_index/issues/11795">#11795</a>)</li>
<li>Introduced <code>LabelledSimpleDataset</code> (<a href="https://redirect.github.com/run-llama/llama_index/issues/11805">#11805</a>)</li>
<li>Fixed insert <code>IndexNode</code> objects with unserializable objects (<a href="https://redirect.github.com/run-llama/llama_index/issues/11836">#11836</a>)</li>
<li>Fixed stream chat type error when writing response to history in <code>CondenseQuestionChatEngine</code> (<a href="https://redirect.github.com/run-llama/llama_index/issues/11856">#11856</a>)</li>
<li>Improve post-processing for json query engine (<a href="https://redirect.github.com/run-llama/llama_index/issues/11862">#11862</a>)</li>
</ul>
<h3><code>llama-index-embeddings-cohere</code> [0.1.4]</h3>
<ul>
<li>Fixed async kwarg error (<a href="https://redirect.github.com/run-llama/llama_index/issues/11822">#11822</a>)</li>
</ul>
<h3><code>llama-index-embeddings-dashscope</code> [0.1.2]</h3>
<ul>
<li>Fixed pydantic import (<a href="https://redirect.github.com/run-llama/llama_index/issues/11765">#11765</a>)</li>
</ul>
<h3><code>llama-index-graph-stores-neo4j</code> [0.1.3]</h3>
<ul>
<li>Properly close connection after verifying connectivity (<a href="https://redirect.github.com/run-llama/llama_index/issues/11821">#11821</a>)</li>
</ul>
<h3><code>llama-index-llms-cohere</code> [0.1.3]</h3>
<ul>
<li>Add support for new <code>command-r</code> model (<a href="https://redirect.github.com/run-llama/llama_index/issues/11852">#11852</a>)</li>
</ul>
<h3><code>llama-index-llms-huggingface</code> [0.1.4]</h3>
<ul>
<li>Fixed streaming decoding with special tokens (<a href="https://redirect.github.com/run-llama/llama_index/issues/11807">#11807</a>)</li>
</ul>
<h3><code>llama-index-llms-mistralai</code> [0.1.5]</h3>
<ul>
<li>Added support for latest and open models (<a href="https://redirect.github.com/run-llama/llama_index/issues/11792">#11792</a>)</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md">llama-index-core's changelog</a>.</em></p>
<blockquote>
<h3><code>llama-index-core</code> [0.10.24]</h3>
<ul>
<li>pretty prints in <code>LlamaDebugHandler</code> (<a href="https://redirect.github.com/run-llama/llama_index/issues/12216">#12216</a>)</li>
<li>stricter interpreter constraints on pandas query engine (<a href="https://redirect.github.com/run-llama/llama_index/issues/12278">#12278</a>)</li>
<li>PandasQueryEngine can now execute 'pd.*' functions (<a href="https://redirect.github.com/run-llama/llama_index/issues/12240">#12240</a>)</li>
<li>delete proper metadata in docstore delete function (<a href="https://redirect.github.com/run-llama/llama_index/issues/12276">#12276</a>)</li>
<li>improved openai agent parsing function hook (<a href="https://redirect.github.com/run-llama/llama_index/issues/12062">#12062</a>)</li>
<li>add raise_on_error flag for SimpleDirectoryReader (<a href="https://redirect.github.com/run-llama/llama_index/issues/12263">#12263</a>)</li>
<li>remove un-caught openai import in core (<a href="https://redirect.github.com/run-llama/llama_index/issues/12262">#12262</a>)</li>
<li>Fix download_llama_dataset and download_llama_pack (<a href="https://redirect.github.com/run-llama/llama_index/issues/12273">#12273</a>)</li>
<li>Implement EvalQueryEngineTool (<a href="https://redirect.github.com/run-llama/llama_index/issues/11679">#11679</a>)</li>
<li>Expand instrumenation Span coverage for AgentRunner (<a href="https://redirect.github.com/run-llama/llama_index/issues/12249">#12249</a>)</li>
<li>Adding concept of function calling agent/llm (mistral supported for now) (<a href="https://redirect.github.com/run-llama/llama_index/issues/12222">#12222</a>, )</li>
</ul>
<h3><code>llama-index-embeddings-huggingface</code> [0.2.0]</h3>
<ul>
<li>Use <code>sentence-transformers</code> as a backend (<a href="https://redirect.github.com/run-llama/llama_index/issues/12277">#12277</a>)</li>
</ul>
<h3><code>llama-index-postprocessor-voyageai-rerank</code> [0.1.0]</h3>
<ul>
<li>Added voyageai as a reranker (<a href="https://redirect.github.com/run-llama/llama_index/issues/12111">#12111</a>)</li>
</ul>
<h3><code>llama-index-readers-gcs</code> [0.1.0]</h3>
<ul>
<li>Added google cloud storage reader (<a href="https://redirect.github.com/run-llama/llama_index/issues/12259">#12259</a>)</li>
</ul>
<h3><code>llama-index-readers-google</code> [0.2.1]</h3>
<ul>
<li>Support for different drives (<a href="https://redirect.github.com/run-llama/llama_index/issues/12146">#12146</a>)</li>
<li>Remove unnecessary PyDrive dependency from Google Drive Reader (<a href="https://redirect.github.com/run-llama/llama_index/issues/12257">#12257</a>)</li>
</ul>
<h3><code>llama-index-readers-readme</code> [0.1.0]</h3>
<ul>
<li>added readme.com reader (<a href="https://redirect.github.com/run-llama/llama_index/issues/12246">#12246</a>)</li>
</ul>
<h3><code>llama-index-packs-raft</code> [0.1.3]</h3>
<ul>
<li>added pack for RAFT (<a href="https://redirect.github.com/run-llama/llama_index/issues/12275">#12275</a>)</li>
</ul>
<h2>[2024-03-23]</h2>
<h3><code>llama-index-core</code> [0.10.23]</h3>
<ul>
<li>Added <code>(a)predict_and_call()</code> function to base LLM class + openai + mistralai (<a href="https://redirect.github.com/run-llama/llama_index/issues/12188">#12188</a>)</li>
<li>fixed bug with <code>wait()</code> in async agent streaming (<a href="https://redirect.github.com/run-llama/llama_index/issues/12187">#12187</a>)</li>
</ul>
<h3><code>llama-index-embeddings-alephalpha</code> [0.1.0]</h3>
<ul>
<li>Added alephalpha embeddings (<a href="https://redirect.github.com/run-llama/llama_index/issues/12149">#12149</a>)</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/run-llama/llama_index/commit/3d3a8b9608af9f7aea30bcd9e115dcf37379a5d6"><code>3d3a8b9</code></a> v0.10.24 (<a href="https://redirect.github.com/run-llama/llama_index/issues/12291">#12291</a>)</li>
<li><a href="https://github.com/run-llama/llama_index/commit/178e0684ebad203dcc004148daac7c87c6a9d82e"><code>178e068</code></a> Add class name method (<a href="https://redirect.github.com/run-llama/llama_index/issues/12290">#12290</a>)</li>
<li><a href="https://github.com/run-llama/llama_index/commit/070104d0bd81e2cc2b91d7d3e7d490488cc7a195"><code>070104d</code></a> Small fixes to instrumentation (<a href="https://redirect.github.com/run-llama/llama_index/issues/12287">#12287</a>)</li>
<li><a href="https://github.com/run-llama/llama_index/commit/53c33e18b74dd842207fae0edf72d49308ba8148"><code>53c33e1</code></a> Make Google Drive Reader serializable (<a href="https://redirect.github.com/run-llama/llama_index/issues/12286">#12286</a>)</li>
<li><a href="https://github.com/run-llama/llama_index/commit/f30d024a2e1f40be896325552287841c91d0959a"><code>f30d024</code></a> Bump langchain-core from 0.1.30 to 0.1.34 in /llama-index-core (<a href="https://redirect.github.com/run-llama/llama_index/issues/12289">#12289</a>)</li>
<li><a href="https://github.com/run-llama/llama_index/commit/d3de336e6f665266f9c5e7241e0d7fa23a2c9408"><code>d3de336</code></a> [enhancement] Core callback handlers: Allow people to print with loggers -- m...</li>
<li><a href="https://github.com/run-llama/llama_index/commit/c5bd9ed4280cc5c7c147bd3c730630b282fc0810"><code>c5bd9ed</code></a> Added support for different drives (<a href="https://redirect.github.com/run-llama/llama_index/issues/12146">#12146</a>)</li>
<li><a href="https://github.com/run-llama/llama_index/commit/2c92e88838a5f481d50840240b1dd3180066c6f5"><code>2c92e88</code></a> stricter access to builting in pandas query engine (<a href="https://redirect.github.com/run-llama/llama_index/issues/12278">#12278</a>)</li>
<li><a href="https://github.com/run-llama/llama_index/commit/6882171a95db8dbaa65295c5b750d08cda970db4"><code>6882171</code></a> Add new reader for GCS (<a href="https://redirect.github.com/run-llama/llama_index/issues/12259">#12259</a>)</li>
<li><a href="https://github.com/run-llama/llama_index/commit/20724b0819974234987d30f83774ecbaee5cd27d"><code>20724b0</code></a> fix(core): delete the metadata with the wrong key when delete ref doc… (<a href="https://redirect.github.com/run-llama/llama_index/issues/12276">#12276</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/run-llama/llama_index/compare/v0.10.12...v0.10.24">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=llama-index-core&package-manager=pip&previous-version=0.10.12&new-version=0.10.24)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
| https://api.github.com/repos/run-llama/llama_index/pulls/12730 | 2024-04-10T22:25:48Z | 2024-04-11T00:51:07Z | 2024-04-11T00:51:06Z | 2024-04-11T00:51:07Z | 831 | run-llama/llama_index | 6,788
Show CAN error if message counters are invalid | diff --git a/opendbc b/opendbc
index 3270c931c07bd3..b913296c912344 160000
--- a/opendbc
+++ b/opendbc
@@ -1 +1 @@
-Subproject commit 3270c931c07bd3a47839a1a84c109eb2a7d295a6
+Subproject commit b913296c9123441b2b271c00239929ed388169b5
| Prevents a possible controls mismatch where the panda notices a message's counter going invalid and sets controls_allowed to 0, but openpilot stays engaged. | https://api.github.com/repos/commaai/openpilot/pulls/25497 | 2022-08-19T22:02:50Z | 2022-08-20T02:27:17Z | 2022-08-20T02:27:17Z | 2022-08-20T02:27:17Z | 108 | commaai/openpilot | 9,518
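Conceptually, the rolling-counter check the panda performs looks like the sketch below. This is illustrative only, not opendbc's actual code, and the 4-bit counter width is an assumption for the example:

```python
COUNTER_MAX = 16  # hypothetical 4-bit rolling counter

def counter_valid(prev: int, current: int) -> bool:
    # A healthy stream increments its counter by 1 modulo COUNTER_MAX;
    # anything else points to dropped, stale, or replayed frames.
    return current == (prev + 1) % COUNTER_MAX

assert counter_valid(15, 0)     # counter wraps around
assert not counter_valid(3, 3)  # stale frame -> safety disengages
```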
Remove Sogou proxy code | diff --git a/README.md b/README.md
index fe312c72d1..8a9ac7cce2 100644
--- a/README.md
+++ b/README.md
@@ -186,8 +186,6 @@ For a complete list of all available options, see:
-x | --http-proxy <HOST:PORT> Use specific HTTP proxy for downloading.
-y | --extractor-proxy <HOST:PORT> Use specific HTTP proxy for extracting stream data.
--no-proxy Don't use any proxy. (ignore $http_proxy)
- -S | --sogou Use a Sogou proxy server for downloading.
- --sogou-proxy <HOST:PORT> Run a standalone Sogou proxy server.
--debug Show traceback on KeyboardInterrupt.
## License
diff --git a/README.txt b/README.txt
index f6106aed18..f8ffd878c7 100644
--- a/README.txt
+++ b/README.txt
@@ -194,8 +194,6 @@ For a complete list of all available options, see::
-x | --http-proxy <HOST:PORT> Use specific HTTP proxy for downloading.
-y | --extractor-proxy <HOST:PORT> Use specific HTTP proxy for extracting stream data.
--no-proxy Don't use any proxy. (ignore $http_proxy)
- -S | --sogou Use a Sogou proxy server for downloading.
- --sogou-proxy <HOST:PORT> Run a standalone Sogou proxy server.
--debug Show traceback on KeyboardInterrupt.
License
diff --git a/src/you_get/common.py b/src/you_get/common.py
index d2ab5c7b4f..cea8781621 100644
--- a/src/you_get/common.py
+++ b/src/you_get/common.py
@@ -11,14 +11,13 @@
import threading
from .version import __version__
-from .util import log, sogou_proxy_server, get_filename, unescape_html
+from .util import log
+from .util.strings import get_filename, unescape_html
dry_run = False
force = False
player = None
extractor_proxy = None
-sogou_proxy = None
-sogou_env = None
cookies_txt = None
fake_headers = {
@@ -764,9 +763,6 @@ def parse_host(host):
port = o.port or 0
return (hostname, port)
-def get_sogou_proxy():
- return sogou_proxy
-
def set_proxy(proxy):
proxy_handler = request.ProxyHandler({
'http': '%s:%s' % proxy,
@@ -803,18 +799,8 @@ def download_main(download, download_playlist, urls, playlist, **kwargs):
else:
download(url, **kwargs)
-def get_version():
- try:
- import subprocess
- real_dir = os.path.dirname(os.path.realpath(__file__))
- git_hash = subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD'], cwd=real_dir, stderr=subprocess.DEVNULL).decode('utf-8').strip()
- assert git_hash
- return '%s-%s' % (__version__, git_hash)
- except:
- return __version__
-
def script_main(script_name, download, download_playlist = None):
- version = 'You-Get %s, a video downloader.' % get_version()
+ version = 'You-Get %s, a video downloader.' % __version__
help = 'Usage: %s [OPTION]... [URL]...\n' % script_name
help += '''\nStartup options:
-V | --version Display the version and exit.
@@ -832,13 +818,11 @@ def script_main(script_name, download, download_playlist = None):
-x | --http-proxy <HOST:PORT> Use specific HTTP proxy for downloading.
-y | --extractor-proxy <HOST:PORT> Use specific HTTP proxy for extracting stream data.
--no-proxy Don't use any proxy. (ignore $http_proxy)
- -S | --sogou Use a Sogou proxy server for downloading.
- --sogou-proxy <HOST:PORT> Run a standalone Sogou proxy server.
--debug Show traceback on KeyboardInterrupt.
'''
- short_opts = 'Vhfiuc:nSF:o:p:x:y:'
- opts = ['version', 'help', 'force', 'info', 'url', 'cookies', 'no-merge', 'no-proxy', 'debug', 'sogou', 'format=', 'stream=', 'itag=', 'output-dir=', 'player=', 'http-proxy=', 'extractor-proxy=', 'sogou-proxy=', 'sogou-env=', 'lang=']
+ short_opts = 'Vhfiuc:nF:o:p:x:y:'
+ opts = ['version', 'help', 'force', 'info', 'url', 'cookies', 'no-merge', 'no-proxy', 'debug', 'format=', 'stream=', 'itag=', 'output-dir=', 'player=', 'http-proxy=', 'extractor-proxy=', 'lang=']
if download_playlist:
short_opts = 'l' + short_opts
opts = ['playlist'] + opts
@@ -854,8 +838,6 @@ def script_main(script_name, download, download_playlist = None):
global dry_run
global player
global extractor_proxy
- global sogou_proxy
- global sogou_env
global cookies_txt
cookies_txt = None
@@ -904,33 +886,14 @@ def script_main(script_name, download, download_playlist = None):
proxy = a
elif o in ('-y', '--extractor-proxy'):
extractor_proxy = a
- elif o in ('-S', '--sogou'):
- sogou_proxy = ("0.0.0.0", 0)
- elif o in ('--sogou-proxy',):
- sogou_proxy = parse_host(a)
- elif o in ('--sogou-env',):
- sogou_env = a
elif o in ('--lang',):
lang = a
else:
log.e("try 'you-get --help' for more options")
sys.exit(2)
if not args:
- if sogou_proxy is not None:
- try:
- if sogou_env is not None:
- server = sogou_proxy_server(sogou_proxy, network_env=sogou_env)
- else:
- server = sogou_proxy_server(sogou_proxy)
- server.serve_forever()
- except KeyboardInterrupt:
- if traceback:
- raise
- else:
- sys.exit()
- else:
- print(help)
- sys.exit()
+ print(help)
+ sys.exit()
set_http_proxy(proxy)
diff --git a/src/you_get/extractor/sohu.py b/src/you_get/extractor/sohu.py
index 9a1e109b81..6ee472e08a 100644
--- a/src/you_get/extractor/sohu.py
+++ b/src/you_get/extractor/sohu.py
@@ -19,14 +19,6 @@ def sohu_download(url, output_dir = '.', merge = True, info_only = False):
vid = r1(r'\Wvid\s*[\:=]\s*[\'"]?(\d+)[\'"]?', html)
assert vid
- # Open Sogou proxy if required
- if get_sogou_proxy() is not None:
- server = sogou_proxy_server(get_sogou_proxy(), ostream=open(os.devnull, 'w'))
- server_thread = threading.Thread(target=server.serve_forever)
- server_thread.daemon = True
- server_thread.start()
- set_proxy(server.server_address)
-
if re.match(r'http://tv.sohu.com/', url):
data = json.loads(get_decoded_html('http://hot.vrs.sohu.com/vrs_flash.action?vid=%s' % vid))
for qtyp in ["oriVid","superVid","highVid" ,"norVid","relativeId"]:
@@ -58,11 +50,6 @@ def sohu_download(url, output_dir = '.', merge = True, info_only = False):
urls.append(real_url(host, prot, file, new))
assert data['clipsURL'][0].endswith('.mp4')
- # Close Sogou proxy if required
- if get_sogou_proxy() is not None:
- server.shutdown()
- unset_proxy()
-
print_info(site_info, title, 'mp4', size)
if not info_only:
download_urls(urls, title, 'mp4', size, output_dir, refer = url, merge = merge)
diff --git a/src/you_get/util/__init__.py b/src/you_get/util/__init__.py
index 947ea46509..5345f0ac20 100644
--- a/src/you_get/util/__init__.py
+++ b/src/you_get/util/__init__.py
@@ -2,5 +2,4 @@
from .fs import *
from .log import *
-from .sogou_proxy import *
from .strings import *
diff --git a/src/you_get/util/sogou_proxy.py b/src/you_get/util/sogou_proxy.py
deleted file mode 100644
index 01ffb5720c..0000000000
--- a/src/you_get/util/sogou_proxy.py
+++ /dev/null
@@ -1,141 +0,0 @@
-#!/usr/bin/env python
-
-# Original code from:
-# http://xiaoxia.org/2011/03/26/using-python-to-write-a-local-sogou-proxy-server-procedures/
-
-from . import log
-
-from http.client import HTTPResponse
-from http.server import BaseHTTPRequestHandler, HTTPServer
-from socketserver import ThreadingMixIn
-from threading import Thread
-import random, socket, struct, sys, time
-
-def sogou_proxy_server(
- host=("0.0.0.0", 0),
- network_env='CERNET',
- ostream=sys.stderr):
- """Returns a Sogou proxy server object.
- """
-
- x_sogou_auth = '9CD285F1E7ADB0BD403C22AD1D545F40/30/853edc6d49ba4e27'
- proxy_host = 'h0.cnc.bj.ie.sogou.com'
- proxy_port = 80
-
- def sogou_hash(t, host):
- s = (t + host + 'SogouExplorerProxy').encode('ascii')
- code = len(s)
- dwords = int(len(s) / 4)
- rest = len(s) % 4
- v = struct.unpack(str(dwords) + 'i' + str(rest) + 's', s)
- for vv in v:
- if type(vv) != bytes:
- a = (vv & 0xFFFF)
- b = (vv >> 16)
- code += a
- code = code ^ (((code << 5) ^ b) << 0xb)
- # To avoid overflows
- code &= 0xffffffff
- code += code >> 0xb
- if rest == 3:
- code += s[len(s) - 2] * 256 + s[len(s) - 3]
- code = code ^ ((code ^ (s[len(s) - 1]) * 4) << 0x10)
- code &= 0xffffffff
- code += code >> 0xb
- elif rest == 2:
- code += (s[len(s) - 1]) * 256 + (s[len(s) - 2])
- code ^= code << 0xb
- code &= 0xffffffff
- code += code >> 0x11
- elif rest == 1:
- code += s[len(s) - 1]
- code ^= code << 0xa
- code &= 0xffffffff
- code += code >> 0x1
- code ^= code * 8
- code &= 0xffffffff
- code += code >> 5
- code ^= code << 4
- code = code & 0xffffffff
- code += code >> 0x11
- code ^= code << 0x19
- code = code & 0xffffffff
- code += code >> 6
- code = code & 0xffffffff
- return hex(code)[2:].rstrip('L').zfill(8)
-
- class Handler(BaseHTTPRequestHandler):
- _socket = None
- def do_proxy(self):
- try:
- if self._socket is None:
- self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- self._socket.connect((proxy_host, proxy_port))
- self._socket.send(self.requestline.encode('ascii') + b'\r\n')
- log.d(self.requestline, ostream)
-
- # Add Sogou Verification Tags
- self.headers['X-Sogou-Auth'] = x_sogou_auth
- t = hex(int(time.time()))[2:].rstrip('L').zfill(8)
- self.headers['X-Sogou-Tag'] = sogou_hash(t, self.headers['Host'])
- self.headers['X-Sogou-Timestamp'] = t
- self._socket.send(str(self.headers).encode('ascii') + b'\r\n')
-
- # Send POST data
- if self.command == 'POST':
- self._socket.send(self.rfile.read(int(self.headers['Content-Length'])))
- response = HTTPResponse(self._socket, method=self.command)
- response.begin()
-
- # Response
- status = 'HTTP/1.1 %s %s' % (response.status, response.reason)
- self.wfile.write(status.encode('ascii') + b'\r\n')
- h = ''
- for hh, vv in response.getheaders():
- if hh.upper() != 'TRANSFER-ENCODING':
- h += hh + ': ' + vv + '\r\n'
- self.wfile.write(h.encode('ascii') + b'\r\n')
- while True:
- response_data = response.read(8192)
- if len(response_data) == 0:
- break
- self.wfile.write(response_data)
-
- except socket.error:
- log.e('Socket error for ' + self.requestline, ostream)
-
- def do_POST(self):
- self.do_proxy()
-
- def do_GET(self):
- self.do_proxy()
-
- class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
- pass
-
- # Server starts
- log.printlog('Sogou Proxy Mini-Server', color='bold-green', ostream=ostream)
-
- try:
- server = ThreadingHTTPServer(host, Handler)
- except Exception as ex:
- log.wtf("Socket error: %s" % ex, ostream)
- exit(1)
- host = server.server_address
-
- if network_env.upper() == 'CERNET':
- proxy_host = 'h%s.edu.bj.ie.sogou.com' % random.randint(0, 10)
- elif network_env.upper() == 'CTCNET':
- proxy_host = 'h%s.ctc.bj.ie.sogou.com' % random.randint(0, 3)
- elif network_env.upper() == 'CNCNET':
- proxy_host = 'h%s.cnc.bj.ie.sogou.com' % random.randint(0, 3)
- elif network_env.upper() == 'DXT':
- proxy_host = 'h%s.dxt.bj.ie.sogou.com' % random.randint(0, 10)
- else:
- proxy_host = 'h%s.edu.bj.ie.sogou.com' % random.randint(0, 10)
-
- log.i('Remote host: %s' % log.underlined(proxy_host), ostream)
- log.i('Proxy server running on %s' %
- log.underlined("%s:%s" % host), ostream)
-
- return server
| From now on, we no longer support the embedded Sogou proxy in our `develop` branch. It's broken, no longer worth maintaining, and will be removed in the next release.
Please use `-x` + **_your own proxy_** instead.
For Youku, we have an experimental `-y` option, in which you can make use of [Unblock-Youku](https://github.com/zhuzhuor/Unblock-Youku) using `-y proxy.uku.im:8888`. So far this option only works for Youku; soon it will be available for other sites like Tudou or Sohu.
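For example, a script could drive the downloader like this; the target URL below is a placeholder, while `-y` and the proxy address are exactly as documented above:

```python
import subprocess

subprocess.run(
    ["you-get", "-y", "proxy.uku.im:8888",
     "http://v.youku.com/v_show/id_PLACEHOLDER.html"],  # placeholder URL
    check=True,
)
```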
**[Notice] [Unblock-Youku](https://github.com/zhuzhuor/Unblock-Youku) is another project. Read their [disclaimer](https://github.com/zhuzhuor/Unblock-Youku/blob/master/README.md) before you use their proxy.**
Closing #263 and #346.
| https://api.github.com/repos/soimort/you-get/pulls/370 | 2014-07-20T20:23:43Z | 2014-07-20T20:26:01Z | 2014-07-20T20:26:01Z | 2014-07-20T20:26:01Z | 3,611 | soimort/you-get | 21,454 |
Restore DNS settings in container after dns server shuts down | diff --git a/localstack/dns/plugins.py b/localstack/dns/plugins.py
index 684f9f08c6c1d..05566573cfec8 100644
--- a/localstack/dns/plugins.py
+++ b/localstack/dns/plugins.py
@@ -39,6 +39,7 @@ def stop_server():
try:
from localstack.dns import server
+ server.revert_network_configuration()
server.stop_servers()
except Exception as e:
LOG.warning("Unable to stop DNS servers: %s", e)
diff --git a/localstack/dns/server.py b/localstack/dns/server.py
index eadc41322c82e..d7b399441f65d 100644
--- a/localstack/dns/server.py
+++ b/localstack/dns/server.py
@@ -69,6 +69,7 @@
RCODE_REFUSED = 5
DNS_SERVER: "DnsServerProtocol" = None
+PREVIOUS_RESOLV_CONF_FILE: str | None = None
REQUEST_TIMEOUT_SECS = 7
@@ -791,6 +792,7 @@ def get_available_dns_server():
# ###### LEGACY METHODS ######
def add_resolv_entry(file_path: Path | str = Path("/etc/resolv.conf")):
+ global PREVIOUS_RESOLV_CONF_FILE
# never overwrite the host configuration without the user's permission
if not in_docker():
LOG.warning("Incorrectly attempted to alter host networking config")
@@ -805,14 +807,38 @@ def add_resolv_entry(file_path: Path | str = Path("/etc/resolv.conf")):
)
file_path = Path(file_path)
try:
- with file_path.open("w") as outfile:
- print(content, file=outfile)
+ with file_path.open("r+") as outfile:
+ PREVIOUS_RESOLV_CONF_FILE = outfile.read()
+ outfile.seek(0)
+ outfile.write(content)
+ outfile.truncate()
except Exception:
LOG.warning(
"Could not update container DNS settings", exc_info=LOG.isEnabledFor(logging.DEBUG)
)
+def revert_resolv_entry(file_path: Path | str = Path("/etc/resolv.conf")):
+ # never overwrite the host configuration without the user's permission
+ if not in_docker():
+ LOG.warning("Incorrectly attempted to alter host networking config")
+ return
+
+ if not PREVIOUS_RESOLV_CONF_FILE:
+ LOG.warning("resolv.conf file to restore not found.")
+ return
+
+ LOG.debug("Reverting container DNS config")
+ file_path = Path(file_path)
+ try:
+ with file_path.open("w") as outfile:
+ outfile.write(PREVIOUS_RESOLV_CONF_FILE)
+ except Exception:
+ LOG.warning(
+ "Could not revert container DNS settings", exc_info=LOG.isEnabledFor(logging.DEBUG)
+ )
+
+
def setup_network_configuration():
# check if DNS is disabled
if not config.use_custom_dns():
@@ -823,6 +849,16 @@ def setup_network_configuration():
add_resolv_entry()
+def revert_network_configuration():
+ # check if DNS is disabled
+ if not config.use_custom_dns():
+ return
+
+ # add entry to /etc/resolv.conf
+ if in_docker():
+ revert_resolv_entry()
+
+
def start_server(upstream_dns: str, host: str, port: int = config.DNS_PORT):
global DNS_SERVER
|
## Motivation
We are currently seeing failures due to DNS resolution issues in pytest session-finish hooks in LS.
This is likely due to the shutdown of the DNS server after LS shuts down (which can happen before the last session finish hooks are executed).
After we stop the DNS server, we should restore the `resolv.conf` config to avoid all subsequent requests failing.
## Changes
* Cache and restore `resolv.conf` on DNS server start/stop.
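The caching pattern itself is small enough to sketch generically (illustrative names, not LocalStack's API); note the `seek(0)`/`truncate()` pair, which lets a single open handle both read the original contents and overwrite them cleanly:

```python
from pathlib import Path

_previous_contents = None  # module-level cache, mirroring PREVIOUS_RESOLV_CONF_FILE

def overwrite_with_backup(path: Path, new_contents: str) -> None:
    global _previous_contents
    with path.open("r+") as f:
        _previous_contents = f.read()  # remember the original first
        f.seek(0)
        f.write(new_contents)
        f.truncate()  # drop leftover bytes if the new text is shorter

def restore(path: Path) -> None:
    if _previous_contents is None:
        return  # nothing was cached; leave the file alone
    path.write_text(_previous_contents)
```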
| https://api.github.com/repos/localstack/localstack/pulls/10221 | 2024-02-12T10:44:19Z | 2024-02-12T12:35:39Z | 2024-02-12T12:35:39Z | 2024-02-12T12:35:40Z | 759 | localstack/localstack | 28,656 |
[Binary Tree] Different views of binary tree added | diff --git a/data_structures/binary_tree/diff_views_of_binary_tree.py b/data_structures/binary_tree/diff_views_of_binary_tree.py
new file mode 100644
index 000000000000..3198d8065918
--- /dev/null
+++ b/data_structures/binary_tree/diff_views_of_binary_tree.py
@@ -0,0 +1,210 @@
+r"""
+Problem: Given root of a binary tree, return the:
+1. binary-tree-right-side-view
+2. binary-tree-left-side-view
+3. binary-tree-top-side-view
+4. binary-tree-bottom-side-view
+"""
+
+from __future__ import annotations
+
+from collections import defaultdict
+from dataclasses import dataclass
+
+
+@dataclass
+class TreeNode:
+ val: int
+ left: TreeNode | None = None
+ right: TreeNode | None = None
+
+
+def make_tree() -> TreeNode:
+ """
+ >>> make_tree().val
+ 3
+ """
+ return TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
+
+
+def binary_tree_right_side_view(root: TreeNode) -> list[int]:
+ r"""
+ Function returns the right side view of binary tree.
+
+ 3 <- 3
+ / \
+ 9 20 <- 20
+ / \
+ 15 7 <- 7
+
+ >>> binary_tree_right_side_view(make_tree())
+ [3, 20, 7]
+ >>> binary_tree_right_side_view(None)
+ []
+ """
+
+ def depth_first_search(
+ root: TreeNode | None, depth: int, right_view: list[int]
+ ) -> None:
+ """
+ A depth first search preorder traversal to append the values at
+ right side of tree.
+ """
+ if not root:
+ return
+
+ if depth == len(right_view):
+ right_view.append(root.val)
+
+ depth_first_search(root.right, depth + 1, right_view)
+ depth_first_search(root.left, depth + 1, right_view)
+
+ right_view: list = []
+ if not root:
+ return right_view
+
+ depth_first_search(root, 0, right_view)
+ return right_view
+
+
+def binary_tree_left_side_view(root: TreeNode) -> list[int]:
+ r"""
+ Function returns the left side view of binary tree.
+
+ 3 -> 3
+ / \
+ 9 -> 9 20
+ / \
+ 15 -> 15 7
+
+ >>> binary_tree_left_side_view(make_tree())
+ [3, 9, 15]
+ >>> binary_tree_left_side_view(None)
+ []
+ """
+
+ def depth_first_search(
+ root: TreeNode | None, depth: int, left_view: list[int]
+ ) -> None:
+ """
+ A depth first search preorder traversal to append the values
+ at left side of tree.
+ """
+ if not root:
+ return
+
+ if depth == len(left_view):
+ left_view.append(root.val)
+
+ depth_first_search(root.left, depth + 1, left_view)
+ depth_first_search(root.right, depth + 1, left_view)
+
+ left_view: list = []
+ if not root:
+ return left_view
+
+ depth_first_search(root, 0, left_view)
+ return left_view
+
+
+def binary_tree_top_side_view(root: TreeNode) -> list[int]:
+ r"""
+ Function returns the top side view of binary tree.
+
+ 9 3 20 7
+ ⬇ ⬇ ⬇ ⬇
+
+ 3
+ / \
+ 9 20
+ / \
+ 15 7
+
+ >>> binary_tree_top_side_view(make_tree())
+ [9, 3, 20, 7]
+ >>> binary_tree_top_side_view(None)
+ []
+ """
+
+ def breadth_first_search(root: TreeNode, top_view: list[int]) -> None:
+ """
+ A breadth first search traversal with defaultdict ds to append
+ the values of tree from top view
+ """
+ queue = [(root, 0)]
+ lookup = defaultdict(list)
+
+ while queue:
+ first = queue.pop(0)
+ node, hd = first
+
+ lookup[hd].append(node.val)
+
+ if node.left:
+ queue.append((node.left, hd - 1))
+ if node.right:
+ queue.append((node.right, hd + 1))
+
+ for pair in sorted(lookup.items(), key=lambda each: each[0]):
+ top_view.append(pair[1][0])
+
+ top_view: list = []
+ if not root:
+ return top_view
+
+ breadth_first_search(root, top_view)
+ return top_view
+
+
+def binary_tree_bottom_side_view(root: TreeNode) -> list[int]:
+ r"""
+ Function returns the bottom side view of binary tree
+
+ 3
+ / \
+ 9 20
+ / \
+ 15 7
+ ↑ ↑ ↑ ↑
+ 9 15 20 7
+
+ >>> binary_tree_bottom_side_view(make_tree())
+ [9, 15, 20, 7]
+ >>> binary_tree_bottom_side_view(None)
+ []
+ """
+ from collections import defaultdict
+
+ def breadth_first_search(root: TreeNode, bottom_view: list[int]) -> None:
+ """
+ A breadth first search traversal with defaultdict ds to append
+ the values of tree from bottom view
+ """
+ queue = [(root, 0)]
+ lookup = defaultdict(list)
+
+ while queue:
+ first = queue.pop(0)
+ node, hd = first
+ lookup[hd].append(node.val)
+
+ if node.left:
+ queue.append((node.left, hd - 1))
+ if node.right:
+ queue.append((node.right, hd + 1))
+
+ for pair in sorted(lookup.items(), key=lambda each: each[0]):
+ bottom_view.append(pair[1][-1])
+
+ bottom_view: list = []
+ if not root:
+ return bottom_view
+
+ breadth_first_search(root, bottom_view)
+ return bottom_view
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
| ### Describe your change:
Problem:
Given the root of a binary tree, print the:
1. binary-tree-right-side-view
2. binary-tree-left-side-view
3. binary-tree-top-side-view
4. binary-tree-bottom-side-view
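A quick usage sketch, assuming the new file is importable as `diff_views_of_binary_tree` (the module added by the PR); the expected outputs are taken from the doctests in the diff:

```python
from diff_views_of_binary_tree import (
    make_tree,
    binary_tree_right_side_view,
    binary_tree_top_side_view,
)

root = make_tree()  # the sample tree 3 / (9, 20 / (15, 7)) from the doctests
print(binary_tree_right_side_view(root))  # [3, 20, 7]
print(binary_tree_top_side_view(root))    # [9, 3, 20, 7]
```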
* [X] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [X] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [X] This pull request is all my own work -- I have not plagiarized.
* [X] I know that pull requests will not be merged if they fail the automated tests.
* [X] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [X] All new Python files are placed inside an existing directory.
* [X] All filenames are in all lowercase characters with no spaces or dashes.
* [X] All functions and variable names follow Python naming conventions.
* [X] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [X] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [X] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [X] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| https://api.github.com/repos/TheAlgorithms/Python/pulls/6965 | 2022-10-11T00:27:46Z | 2022-10-17T20:30:01Z | 2022-10-17T20:30:01Z | 2022-10-22T16:09:04Z | 1,513 | TheAlgorithms/Python | 29,669 |
[Restudy] Add new extractor for restudy.dk | diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py
index 119ec2044ef..3ae7a8a527f 100644
--- a/youtube_dl/extractor/__init__.py
+++ b/youtube_dl/extractor/__init__.py
@@ -316,6 +316,7 @@
from .rai import RaiIE
from .rbmaradio import RBMARadioIE
from .redtube import RedTubeIE
+from .restudy import RestudyIE
from .reverbnation import ReverbNationIE
from .ringtv import RingTVIE
from .ro220 import Ro220IE
diff --git a/youtube_dl/extractor/restudy.py b/youtube_dl/extractor/restudy.py
new file mode 100644
index 00000000000..56a6c0f93b4
--- /dev/null
+++ b/youtube_dl/extractor/restudy.py
@@ -0,0 +1,41 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class RestudyIE(InfoExtractor):
+ _VALID_URL = r'https://www.restudy.dk/video/play/id/(?P<id>[0-9]+)'
+ _TEST = {
+ 'url': 'https://www.restudy.dk/video/play/id/1637',
+ # MD5 sum of first 10241 bytes of the video file, as reported by
+ # head -c 10241 Leiden-frosteffekt-1637.mp4 | md5sum
+ 'md5': '4e755c4287f292a1fe5363834a683818',
+ 'info_dict': {
+ 'id': '1637',
+ 'ext': 'mp4',
+ 'title': 'Leiden-frosteffekt',
+ }
+ }
+
+ def _real_extract(self, url):
+ video_id = self._match_id(url)
+ webpage = self._download_webpage(url, video_id)
+ xml_url = (
+ 'https://www.restudy.dk/awsmedia/SmilDirectory/video_%s.xml'
+ % video_id)
+ xml = self._download_webpage(xml_url, video_id)
+
+ base = self._search_regex(
+ r'<meta base="([^"]+)', xml, 'meta base')
+ # TODO: Provide multiple video qualities instead of forcing highest
+ filename = self._search_regex(
+ r'<video src="mp4:([^"]+_high\.mp4)', xml, 'filename')
+ url = '%s%s' % (base, filename)
+ title = self._og_search_title(webpage)
+ return {
+ 'id': video_id,
+ 'title': title,
+ 'url': url,
+ 'protocol': 'rtmp',
+ }
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/4463 | 2014-12-13T21:27:28Z | 2014-12-13T21:41:40Z | 2014-12-13T21:41:40Z | 2014-12-13T21:41:55Z | 649 | ytdl-org/youtube-dl | 49,855 |
|
Removed 'when you lie' image as it is 404 not found and some grammar …
index 74c073f9..40eb32f7 100644
--- a/README.md
+++ b/README.md
@@ -424,8 +424,6 @@ She: Yes! Perfect! Exactly what I wanted to see! |
Of course, this is just for fun, and you should never cheat in your coding interviews,
because you know what happens when you do.
-![when you lie in your interview](http://cheat.sh/files/when-you-lie-katze.png)
-
### Windows command line client
You can access cheat.sh from Windows command line too.
@@ -446,7 +444,7 @@ scoop install cht
### Docker
-Currently the easiest way to get a self-hosted instance running is by using the docker-compose.yml file provided in the extra/docker folder.
+Currently, the easiest way to get a self-hosted instance running is by using the docker-compose.yml file provided in the extra/docker folder.
This pulls down the latest image with baked in cheatsheets and starts the app and a Redis instance to back it, making the service available on port 8002 of the local host. This is currently an early implementation and should probably not be used for anything outside of internal/dev/personal use right now.
## Editors integration
| …changes with Docker section. | https://api.github.com/repos/chubin/cheat.sh/pulls/143 | 2019-05-31T19:49:05Z | 2019-07-13T18:21:30Z | 2019-07-13T18:21:30Z | 2019-07-13T18:21:30Z | 280 | chubin/cheat.sh | 15,255 |
[CPU] Add CPU AutoTP UT. | diff --git a/.github/workflows/cpu-inference.yml b/.github/workflows/cpu-inference.yml
index 2c555203e950..8bba51dab6fd 100644
--- a/.github/workflows/cpu-inference.yml
+++ b/.github/workflows/cpu-inference.yml
@@ -76,4 +76,5 @@ jobs:
source oneCCL/build/_install/env/setvars.sh
unset TORCH_CUDA_ARCH_LIST # only jit compile for current arch
cd tests
- TRANSFORMERS_CACHE=~/tmp/transformers_cache/ TORCH_EXTENSIONS_DIR=./torch-extensions pytest -m 'seq_inference' -m 'inference_ops' -m 'inference' unit/
+ TRANSFORMERS_CACHE=~/tmp/transformers_cache/ TORCH_EXTENSIONS_DIR=./torch-extensions pytest -m 'seq_inference' unit/
+ TRANSFORMERS_CACHE=~/tmp/transformers_cache/ TORCH_EXTENSIONS_DIR=./torch-extensions pytest -m 'inference_ops' -m 'inference' unit/
diff --git a/tests/unit/hybrid_engine/test_he_all.py b/tests/unit/hybrid_engine/test_he_all.py
index 86eabb1add0c..aa1f120645b1 100644
--- a/tests/unit/hybrid_engine/test_he_all.py
+++ b/tests/unit/hybrid_engine/test_he_all.py
@@ -12,6 +12,10 @@
from deepspeed.accelerator import get_accelerator
from transformers import (AutoConfig, AutoTokenizer, AutoModelForCausalLM)
+from deepspeed.ops.op_builder import InferenceBuilder
+
+if not deepspeed.ops.__compatible_ops__[InferenceBuilder.NAME]:
+ pytest.skip("This op had not been implemented on this system.", allow_module_level=True)
rocm_version = OpBuilder.installed_rocm_version()
if rocm_version != (0, 0):
diff --git a/tests/unit/hybrid_engine/test_he_llama.py b/tests/unit/hybrid_engine/test_he_llama.py
index 5f992f69b402..fcf5b8ffb89b 100644
--- a/tests/unit/hybrid_engine/test_he_llama.py
+++ b/tests/unit/hybrid_engine/test_he_llama.py
@@ -12,6 +12,10 @@
from deepspeed.accelerator import get_accelerator
from transformers import (AutoConfig, AutoTokenizer, AutoModelForCausalLM)
+from deepspeed.ops.op_builder import InferenceBuilder
+
+if not deepspeed.ops.__compatible_ops__[InferenceBuilder.NAME]:
+ pytest.skip("This op had not been implemented on this system.", allow_module_level=True)
rocm_version = OpBuilder.installed_rocm_version()
if rocm_version != (0, 0):
diff --git a/tests/unit/hybrid_engine/test_he_lora.py b/tests/unit/hybrid_engine/test_he_lora.py
index f61fdeb3a9f9..ea27239ed55e 100644
--- a/tests/unit/hybrid_engine/test_he_lora.py
+++ b/tests/unit/hybrid_engine/test_he_lora.py
@@ -14,6 +14,10 @@
from deepspeed.utils import safe_get_full_grad
import numpy.testing as npt
from unit.common import DistributedTest
+from deepspeed.ops.op_builder import InferenceBuilder
+
+if not deepspeed.ops.__compatible_ops__[InferenceBuilder.NAME]:
+ pytest.skip("This op had not been implemented on this system.", allow_module_level=True)
from transformers import (AutoConfig, AutoTokenizer, AutoModelForCausalLM)
diff --git a/tests/unit/inference/test_inference.py b/tests/unit/inference/test_inference.py
index e591a214c3f7..64b64e3cae85 100644
--- a/tests/unit/inference/test_inference.py
+++ b/tests/unit/inference/test_inference.py
@@ -22,9 +22,6 @@
from deepspeed.accelerator import get_accelerator
from deepspeed.ops.op_builder import InferenceBuilder
-if not deepspeed.ops.__compatible_ops__[InferenceBuilder.NAME]:
- pytest.skip("This op had not been implemented on this system.", allow_module_level=True)
-
rocm_version = OpBuilder.installed_rocm_version()
if rocm_version != (0, 0):
pytest.skip("skip inference tests on rocm for now", allow_module_level=True)
@@ -361,6 +358,9 @@ def test(
if invalid_test_msg:
pytest.skip(invalid_test_msg)
+ if not deepspeed.ops.__compatible_ops__[InferenceBuilder.NAME]:
+ pytest.skip("This op had not been implemented on this system.", allow_module_level=True)
+
model, task = model_w_task
local_rank = int(os.getenv("LOCAL_RANK", "0"))
@@ -397,6 +397,9 @@ def test(
):
model, task = model_w_task
dtype = torch.float16
+ if dtype not in get_accelerator().supported_dtypes():
+ pytest.skip(f"Acceleraor {get_accelerator().device_name()} does not support {dtype}.")
+
local_rank = int(os.getenv("LOCAL_RANK", "0"))
pipe = pipeline(task, model=model, model_kwargs={"low_cpu_mem_usage": True}, device=local_rank, framework="pt")
@@ -510,7 +513,7 @@ def test(
[("Helsinki-NLP/opus-mt-en-de", "translation"), ("Salesforce/codegen-350M-mono", "text-generation")],
ids=["marian", "codegen"], #codegen has fusedqkv weight.
)
-@pytest.mark.parametrize("dtype", [torch.float16], ids=["fp16"])
+@pytest.mark.parametrize("dtype", [torch.float16, torch.bfloat16], ids=["fp16", "bf16"])
class TestAutoTensorParallelism(DistributedTest):
world_size = [2]
@@ -526,6 +529,13 @@ def test(
if invalid_test_msg:
pytest.skip(invalid_test_msg)
+ if dtype not in get_accelerator().supported_dtypes():
+ pytest.skip(f"Acceleraor {get_accelerator().device_name()} does not support {dtype}.")
+
+ # TODO: enable this test after torch 2.1 stable release
+ if dtype == torch.bfloat16 and model_w_task[0] == "Salesforce/codegen-350M-mono":
+ pytest.skip("Codegen model(bf16) need to use torch version > 2.0.")
+
model, task = model_w_task
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "2"))
| This PR aims to add CPU AutoTP unit tests (UT).
We found that the "Salesforce/codegen-350M-mono" model (bf16) cannot pass the TestAutoTensorParallelism test cases under torch 2.0, so we skipped this test temporarily. | https://api.github.com/repos/microsoft/DeepSpeed/pulls/4263 | 2023-09-05T09:34:03Z | 2023-09-27T23:59:25Z | 2023-09-27T23:59:25Z | 2023-09-27T23:59:26Z | 1,437 | microsoft/DeepSpeed | 10,337 |
Add alarmdotcom sensor status | diff --git a/homeassistant/components/alarm_control_panel/alarmdotcom.py b/homeassistant/components/alarm_control_panel/alarmdotcom.py
index 0e96e6448ff01b..31d933732862cb 100644
--- a/homeassistant/components/alarm_control_panel/alarmdotcom.py
+++ b/homeassistant/components/alarm_control_panel/alarmdotcom.py
@@ -17,7 +17,7 @@
from homeassistant.helpers.aiohttp_client import async_get_clientsession
import homeassistant.helpers.config_validation as cv
-REQUIREMENTS = ['pyalarmdotcom==0.3.1']
+REQUIREMENTS = ['pyalarmdotcom==0.3.2']
_LOGGER = logging.getLogger(__name__)
@@ -93,6 +93,13 @@ def state(self):
return STATE_ALARM_ARMED_AWAY
return STATE_UNKNOWN
+ @property
+ def device_state_attributes(self):
+ """Return the state attributes."""
+ return {
+ 'sensor_status': self._alarm.sensor_status
+ }
+
@asyncio.coroutine
def async_alarm_disarm(self, code=None):
"""Send disarm command."""
diff --git a/requirements_all.txt b/requirements_all.txt
index 1283011d7ac61c..f052a3ae6560ba 100644
--- a/requirements_all.txt
+++ b/requirements_all.txt
@@ -695,7 +695,7 @@ pyads==2.2.6
pyairvisual==1.0.0
# homeassistant.components.alarm_control_panel.alarmdotcom
-pyalarmdotcom==0.3.1
+pyalarmdotcom==0.3.2
# homeassistant.components.arlo
pyarlo==0.1.2
| ## Description:
This PR adds a `sensor_status` attribute to alarm.com alarm control panels, as described in https://github.com/Xorso/pyalarmdotcom/pull/5.
![image](https://user-images.githubusercontent.com/47/39555554-b1cc1ff4-4e3f-11e8-80e8-865b11aad97b.png)
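
For context, a quick sketch of reading the new attribute from Home Assistant (the entity id, status payload, and the `hass` handle are illustrative assumptions, not part of this PR):

```python
# e.g. inside a Home Assistant Python script, where `hass` is provided:
state = hass.states.get("alarm_control_panel.alarm_com")
if state is not None:
    print(state.attributes.get("sensor_status"))
```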
## Checklist:
- [x] The code change is tested and works locally.
- [x] Local tests pass with `tox`. **Your PR cannot be merged unless tests pass** | https://api.github.com/repos/home-assistant/core/pulls/14254 | 2018-05-03T00:33:44Z | 2018-05-05T14:21:59Z | 2018-05-05T14:21:59Z | 2019-03-21T04:24:21Z | 386 | home-assistant/core | 39,097 |
Drop Python 3.7 support | diff --git a/.github/workflows/tests-macos.yml b/.github/workflows/tests-macos.yml
index 174d245ca99..3044a1af331 100644
--- a/.github/workflows/tests-macos.yml
+++ b/.github/workflows/tests-macos.yml
@@ -7,7 +7,7 @@ jobs:
strategy:
fail-fast: false
matrix:
- python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"]
+ python-version: ["3.8", "3.9", "3.10", "3.11"]
steps:
- uses: actions/checkout@v3
diff --git a/.github/workflows/tests-ubuntu.yml b/.github/workflows/tests-ubuntu.yml
index 96b26a1f89a..39e3b0af7c2 100644
--- a/.github/workflows/tests-ubuntu.yml
+++ b/.github/workflows/tests-ubuntu.yml
@@ -8,9 +8,6 @@ jobs:
fail-fast: false
matrix:
include:
- - python-version: 3.8
- env:
- TOXENV: py
- python-version: 3.9
env:
TOXENV: py
@@ -28,19 +25,19 @@ jobs:
TOXENV: pypy3
# pinned deps
- - python-version: 3.7.13
+ - python-version: 3.8.17
env:
TOXENV: pinned
- - python-version: 3.7.13
+ - python-version: 3.8.17
env:
TOXENV: asyncio-pinned
- - python-version: pypy3.7
+ - python-version: pypy3.8
env:
TOXENV: pypy3-pinned
- - python-version: 3.7.13
+ - python-version: 3.8.17
env:
TOXENV: extra-deps-pinned
- - python-version: 3.7.13
+ - python-version: 3.8.17
env:
TOXENV: botocore-pinned
diff --git a/.github/workflows/tests-windows.yml b/.github/workflows/tests-windows.yml
index f60c48841d3..5bcf74d5e7b 100644
--- a/.github/workflows/tests-windows.yml
+++ b/.github/workflows/tests-windows.yml
@@ -8,12 +8,9 @@ jobs:
fail-fast: false
matrix:
include:
- - python-version: 3.7
- env:
- TOXENV: windows-pinned
- python-version: 3.8
env:
- TOXENV: py
+ TOXENV: windows-pinned
- python-version: 3.9
env:
TOXENV: py
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index faf8808f2b9..31e9ed1adcd 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -5,7 +5,7 @@ repos:
- id: bandit
args: [-r, -c, .bandit.yml]
- repo: https://github.com/PyCQA/flake8
- rev: 5.0.4 # 6.0.0 drops Python 3.7 support
+ rev: 6.0.0
hooks:
- id: flake8
- repo: https://github.com/psf/black.git
@@ -13,7 +13,7 @@ repos:
hooks:
- id: black
- repo: https://github.com/pycqa/isort
- rev: 5.11.5 # 5.12 drops Python 3.7 support
+ rev: 5.12.0
hooks:
- id: isort
- repo: https://github.com/adamchainz/blacken-docs
diff --git a/README.rst b/README.rst
index 970bf2c3573..1918850d6c0 100644
--- a/README.rst
+++ b/README.rst
@@ -58,7 +58,7 @@ including a list of features.
Requirements
============
-* Python 3.7+
+* Python 3.8+
* Works on Linux, Windows, macOS, BSD
Install
diff --git a/docs/contributing.rst b/docs/contributing.rst
index eef92e14881..2b324960163 100644
--- a/docs/contributing.rst
+++ b/docs/contributing.rst
@@ -265,15 +265,15 @@ To run a specific test (say ``tests/test_loader.py``) use:
To run the tests on a specific :doc:`tox <tox:index>` environment, use
``-e <name>`` with an environment name from ``tox.ini``. For example, to run
-the tests with Python 3.7 use::
+the tests with Python 3.10 use::
- tox -e py37
+ tox -e py310
You can also specify a comma-separated list of environments, and use :ref:`tox’s
parallel mode <tox:parallel_mode>` to run the tests on multiple environments in
parallel::
- tox -e py37,py38 -p auto
+ tox -e py39,py310 -p auto
To pass command-line options to :doc:`pytest <pytest:index>`, add them after
``--`` in your call to :doc:`tox <tox:index>`. Using ``--`` overrides the
@@ -283,9 +283,9 @@ default positional arguments (``scrapy tests``) after ``--`` as well::
tox -- scrapy tests -x # stop after first failure
You can also use the `pytest-xdist`_ plugin. For example, to run all tests on
-the Python 3.7 :doc:`tox <tox:index>` environment using all your CPU cores::
+the Python 3.10 :doc:`tox <tox:index>` environment using all your CPU cores::
- tox -e py37 -- scrapy tests -n auto
+ tox -e py310 -- scrapy tests -n auto
To see coverage report install :doc:`coverage <coverage:index>`
(``pip install coverage``) and run:
@@ -322,4 +322,4 @@ And their unit-tests are in::
.. _pytest-xdist: https://github.com/pytest-dev/pytest-xdist
.. _good first issues: https://github.com/scrapy/scrapy/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22
.. _help wanted issues: https://github.com/scrapy/scrapy/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22
-.. _test coverage: https://app.codecov.io/gh/scrapy/scrapy
\ No newline at end of file
+.. _test coverage: https://app.codecov.io/gh/scrapy/scrapy
diff --git a/docs/intro/install.rst b/docs/intro/install.rst
index 2c2079f68a7..c90c1d2bf26 100644
--- a/docs/intro/install.rst
+++ b/docs/intro/install.rst
@@ -9,7 +9,7 @@ Installation guide
Supported Python versions
=========================
-Scrapy requires Python 3.7+, either the CPython implementation (default) or
+Scrapy requires Python 3.8+, either the CPython implementation (default) or
the PyPy implementation (see :ref:`python:implementations`).
.. _intro-install-scrapy:
diff --git a/scrapy/__init__.py b/scrapy/__init__.py
index a757a9290fb..cc0e539c4e1 100644
--- a/scrapy/__init__.py
+++ b/scrapy/__init__.py
@@ -34,8 +34,8 @@
# Check minimum required Python version
-if sys.version_info < (3, 7):
- print(f"Scrapy {__version__} requires Python 3.7+")
+if sys.version_info < (3, 8):
+ print(f"Scrapy {__version__} requires Python 3.8+")
sys.exit(1)
diff --git a/setup.py b/setup.py
index c6bcf2439ee..ccfe20ae558 100644
--- a/setup.py
+++ b/setup.py
@@ -20,7 +20,7 @@ def has_environment_marker_platform_impl_support():
install_requires = [
"Twisted>=18.9.0",
- "cryptography>=3.4.6",
+ "cryptography>=36.0.0",
"cssselect>=0.9.1",
"itemloaders>=1.0.1",
"parsel>=1.5.0",
@@ -34,7 +34,7 @@ def has_environment_marker_platform_impl_support():
"setuptools",
"packaging",
"tldextract",
- "lxml>=4.3.0",
+ "lxml>=4.4.1",
]
extras_require = {}
cpython_dependencies = [
@@ -80,7 +80,6 @@ def has_environment_marker_platform_impl_support():
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
@@ -91,7 +90,7 @@ def has_environment_marker_platform_impl_support():
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
],
- python_requires=">=3.7",
+ python_requires=">=3.8",
install_requires=install_requires,
extras_require=extras_require,
)
diff --git a/tox.ini b/tox.ini
index 5f8bf85f282..ec3a593666d 100644
--- a/tox.ini
+++ b/tox.ini
@@ -16,8 +16,6 @@ deps =
#mitmproxy >= 5.3.0; python_version >= '3.9' and implementation_name != 'pypy'
# The tests hang with mitmproxy 8.0.0: https://github.com/scrapy/scrapy/issues/5454
mitmproxy >= 4.0.4, < 8; python_version < '3.9' and implementation_name != 'pypy'
- # newer markupsafe is incompatible with deps of old mitmproxy (which we get on Python 3.7 and lower)
- markupsafe < 2.1.0; python_version < '3.8' and implementation_name != 'pypy'
passenv =
S3_TEST_FILE_URI
AWS_ACCESS_KEY_ID
@@ -71,7 +69,7 @@ commands =
[pinned]
deps =
- cryptography==3.4.6
+ cryptography==36.0.0
cssselect==0.9.1
h2==3.0
itemadapter==0.1.0
@@ -83,7 +81,7 @@ deps =
Twisted[http2]==18.9.0
w3lib==1.17.0
zope.interface==5.1.0
- lxml==4.3.0
+ lxml==4.4.1
-rtests/requirements.txt
# mitmproxy 4.0.4+ requires upgrading some of the pinned dependencies
@@ -96,7 +94,7 @@ commands =
pytest --cov=scrapy --cov-report=xml --cov-report= {posargs:--durations=10 scrapy tests}
[testenv:pinned]
-basepython = python3.7
+basepython = python3.8
deps =
{[pinned]deps}
PyDispatcher==2.0.5
@@ -129,7 +127,7 @@ deps =
Twisted[http2]
[testenv:extra-deps-pinned]
-basepython = python3.7
+basepython = python3.8
deps =
{[pinned]deps}
boto3==1.20.0
@@ -211,7 +209,7 @@ commands =
pytest --cov=scrapy --cov-report=xml --cov-report= {posargs:tests -k s3}
[testenv:botocore-pinned]
-basepython = python3.7
+basepython = python3.8
deps =
{[pinned]deps}
botocore==1.4.87
| https://api.github.com/repos/scrapy/scrapy/pulls/5953 | 2023-06-18T14:46:01Z | 2023-06-19T09:59:14Z | 2023-06-19T09:59:14Z | 2023-06-19T09:59:18Z | 2,898 | scrapy/scrapy | 34,182 |
|
♻️ Refactor parts that use optional requirements to make them compatible with installations without them | diff --git a/fastapi/openapi/models.py b/fastapi/openapi/models.py
index 7284fa5ba8e56..7176e59dda905 100644
--- a/fastapi/openapi/models.py
+++ b/fastapi/openapi/models.py
@@ -1,7 +1,14 @@
from enum import Enum
-from typing import Any, Callable, Dict, Iterable, List, Optional, Union
-
-from fastapi._compat import PYDANTIC_V2, _model_rebuild
+from typing import Any, Callable, Dict, Iterable, List, Optional, Type, Union
+
+from fastapi._compat import (
+ PYDANTIC_V2,
+ CoreSchema,
+ GetJsonSchemaHandler,
+ JsonSchemaValue,
+ _model_rebuild,
+ general_plain_validator_function,
+)
from fastapi.logger import logger
from pydantic import AnyUrl, BaseModel, Field
from typing_extensions import Literal
@@ -26,6 +33,26 @@ def validate(cls, v: Any) -> str:
)
return str(v)
+ @classmethod
+ def _validate(cls, __input_value: Any, _: Any) -> str:
+ logger.warning(
+ "email-validator not installed, email fields will be treated as str.\n"
+ "To install, run: pip install email-validator"
+ )
+ return str(__input_value)
+
+ @classmethod
+ def __get_pydantic_json_schema__(
+ cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
+ ) -> JsonSchemaValue:
+ return {"type": "string", "format": "email"}
+
+ @classmethod
+ def __get_pydantic_core_schema__(
+ cls, source: Type[Any], handler: Callable[[Any], CoreSchema]
+ ) -> CoreSchema:
+ return general_plain_validator_function(cls._validate)
+
class Contact(BaseModel):
name: Optional[str] = None
diff --git a/fastapi/utils.py b/fastapi/utils.py
index 18ae2365db802..2efe7f15a9232 100644
--- a/fastapi/utils.py
+++ b/fastapi/utils.py
@@ -15,7 +15,6 @@
from weakref import WeakKeyDictionary
import fastapi
-from dirty_equals import IsStr
from fastapi._compat import (
PYDANTIC_V2,
BaseConfig,
@@ -219,5 +218,7 @@ def get_value_or_default(
return first_item
-def match_pydantic_error_url(error_type: str) -> IsStr:
+def match_pydantic_error_url(error_type: str) -> Any:
+ from dirty_equals import IsStr
+
return IsStr(regex=rf"^https://errors\.pydantic\.dev/.*/v/{error_type}")
| ♻️ Refactor parts that use optional requirements to make them compatible with installations without them | https://api.github.com/repos/tiangolo/fastapi/pulls/9707 | 2023-06-20T16:13:23Z | 2023-06-20T16:22:04Z | 2023-06-20T16:22:04Z | 2023-06-20T16:22:05Z | 617 | tiangolo/fastapi | 22,705 |
Add Python 3.7 to CI. | diff --git a/.appveyor.yml b/.appveyor.yml
index 11bb6d4b0a..2551a9d96a 100644
--- a/.appveyor.yml
+++ b/.appveyor.yml
@@ -3,7 +3,7 @@ environment:
TOXENV: py,codecov
matrix:
- - PYTHON: C:\Python36-x64
+ - PYTHON: C:\Python37-x64
- PYTHON: C:\Python27-x64
init:
diff --git a/.travis.yml b/.travis.yml
index b3ba1e1951..86d7cfea62 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,34 +1,30 @@
os: linux
-sudo: false
+dist: xenial
language: python
+python:
+ - "3.7"
+ - "3.6"
+ - "3.5"
+ - "3.4"
+ - "2.7"
+ - nightly
+ - pypy3.5-6.0
+env: TOXENV=py,codecov
matrix:
include:
- - python: 3.6
- env: TOXENV=py,simplejson,devel,lowest,codecov
- - python: 3.6
- env: TOXENV=docs-html
- - python: 3.5
- env: TOXENV=py,codecov
- - python: 3.4
- env: TOXENV=py,codecov
- - python: 2.7
- env: TOXENV=py,simplejson,devel,lowest,codecov
- - python: pypy3
- env: TOXENV=py,codecov
- - python: nightly
- env: TOXENV=py
+ - env: TOXENV=docs-html
+ - env: TOXENV=devel,lowest,codecov
- os: osx
language: generic
- env: TOXENV=py3,py2,codecov
+ env: TOXENV=py3,codecov
cache:
- pip: false
directories:
- $HOME/Library/Caches/Homebrew
- $HOME/Library/Caches/pip
allow_failures:
- - python: pypy3
- python: nightly
+ - python: pypy3.5-6.0
- os: osx
fast_finish: true
@@ -36,7 +32,6 @@ before_install:
- |
if [[ $TRAVIS_OS_NAME == 'osx' ]]; then
brew upgrade python
- brew install python@2;
export PATH="/usr/local/opt/python/libexec/bin:${PATH}"
fi
@@ -47,12 +42,15 @@ script:
- tox
cache:
- - pip
+ directories:
+ - $HOME/.cache/pip
+ - $HOME/.cache/pre-commit
branches:
only:
- master
- - /^.*-maintenance$/
+ - /^\d+(\.\d+)*-maintenance$/
+ - /^\d+(\.\d+)*(\.x)?$/
notifications:
email: false
diff --git a/tox.ini b/tox.ini
index cdeee4c35d..d37d248678 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,9 +1,9 @@
[tox]
envlist =
- py{36,35,34,27,py}
- py{36,27,py}-simplejson
- py{36,27,py}-devel
- py{36,27,py}-lowest
+ py{37,36,35,34,27,py}
+ py{37,27,py}-simplejson
+ py{37,27,py}-devel
+ py{37,27,py}-lowest
docs-html
coverage-report
| Python 3.7.0 has been released at 2018-06-27. So I think we need to add it to CI. | https://api.github.com/repos/pallets/flask/pulls/3044 | 2018-12-23T16:24:21Z | 2019-01-06T16:00:23Z | 2019-01-06T16:00:23Z | 2020-11-14T02:33:42Z | 924 | pallets/flask | 20,203 |
Update version.txt after 0.13.0 release | diff --git a/README.md b/README.md
index 6c3d4c0712fb..7bd2c3f928c7 100755
--- a/README.md
+++ b/README.md
@@ -28,7 +28,7 @@
<summary>More news</summary>
<ul>
<li>[2023/08] <a href="https://github.com/microsoft/DeepSpeedExamples/blob/master/inference/huggingface/zero_inference/README.md">DeepSpeed ZeRO-Inference: 20x faster inference through weight quantization and KV cache offloading</a></li>
-
+
<li>[2023/08] <a href="https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat/ds-chat-release-8-31/README.md">DeepSpeed-Chat: Llama/Llama-2 system support, efficiency boost, and training stability improvements</a></li>
<li>[2023/08] <a href="https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-ulysses">DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models</a> [<a href="https://github.com/microsoft/DeepSpeed/blob/master/blogs/deepspeed-ulysses/chinese/README.md">中文</a>] [<a href="https://github.com/microsoft/DeepSpeed/blob/master/blogs/deepspeed-ulysses/japanese/README.md">日本語</a>]</li>
diff --git a/version.txt b/version.txt
index 54d1a4f2a4a7..c317a91891f9 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-0.13.0
+0.13.1
| **Auto-generated PR to update version.txt after a DeepSpeed release**
Released version - 0.13.0
Author - @mrwyattii | https://api.github.com/repos/microsoft/DeepSpeed/pulls/4982 | 2024-01-19T23:28:04Z | 2024-01-19T23:37:17Z | 2024-01-19T23:37:17Z | 2024-01-19T23:37:20Z | 403 | microsoft/DeepSpeed | 10,527 |
Fixed #32182 -- Fixed crash of JSONField nested key transforms with subquery annotations on PostgreSQL. | diff --git a/django/db/models/fields/json.py b/django/db/models/fields/json.py
index 5b0272a945d44..94596556d240c 100644
--- a/django/db/models/fields/json.py
+++ b/django/db/models/fields/json.py
@@ -302,7 +302,8 @@ def as_oracle(self, compiler, connection):
def as_postgresql(self, compiler, connection):
lhs, params, key_transforms = self.preprocess_lhs(compiler, connection)
if len(key_transforms) > 1:
- return '(%s %s %%s)' % (lhs, self.postgres_nested_operator), params + [key_transforms]
+ sql = '(%s %s %%s)' % (lhs, self.postgres_nested_operator)
+ return sql, tuple(params) + (key_transforms,)
try:
lookup = int(self.key_name)
except ValueError:
diff --git a/docs/releases/3.1.4.txt b/docs/releases/3.1.4.txt
index 6641b0aaf5d51..02408cca68851 100644
--- a/docs/releases/3.1.4.txt
+++ b/docs/releases/3.1.4.txt
@@ -14,3 +14,6 @@ Bugfixes
* Fixed passing extra HTTP headers to ``AsyncRequestFactory`` request methods
(:ticket:`32159`).
+
+* Fixed crash of key transforms for :class:`~django.db.models.JSONField` on
+ PostgreSQL when using on a ``Subquery()`` annotation (:ticket:`32182`).
diff --git a/tests/model_fields/test_jsonfield.py b/tests/model_fields/test_jsonfield.py
index 1c63d70bf961a..e39e3fe7572c8 100644
--- a/tests/model_fields/test_jsonfield.py
+++ b/tests/model_fields/test_jsonfield.py
@@ -408,6 +408,18 @@ def test_nested_key_transform_expression(self):
[self.objs[4]],
)
+ def test_nested_key_transform_on_subquery(self):
+ self.assertSequenceEqual(
+ NullableJSONModel.objects.filter(value__d__0__isnull=False).annotate(
+ subquery_value=Subquery(
+ NullableJSONModel.objects.filter(pk=OuterRef('pk')).values('value')
+ ),
+ key=KeyTransform('d', 'subquery_value'),
+ chain=KeyTransform('f', KeyTransform('1', 'key')),
+ ).filter(chain='g'),
+ [self.objs[4]],
+ )
+
def test_expression_wrapper_key_transform(self):
self.assertSequenceEqual(
NullableJSONModel.objects.annotate(
| Ticket: https://code.djangoproject.com/ticket/32182 | https://api.github.com/repos/django/django/pulls/13657 | 2020-11-09T18:57:02Z | 2020-11-10T07:13:07Z | 2020-11-10T07:13:07Z | 2020-11-10T09:36:30Z | 586 | django/django | 50,857 |
Add regexp operator to format filters | diff --git a/README.md b/README.md
index b00cdfdcb2b..7446cc2c23e 100644
--- a/README.md
+++ b/README.md
@@ -1399,7 +1399,7 @@ The following numeric meta fields can be used with comparisons `<`, `<=`, `>`, `
- `asr`: Audio sampling rate in Hertz
- `fps`: Frame rate
-Also filtering work for comparisons `=` (equals), `^=` (starts with), `$=` (ends with), `*=` (contains) and following string meta fields:
+Also filtering work for comparisons `=` (equals), `^=` (starts with), `$=` (ends with), `*=` (contains), `~=` (matches regex) and following string meta fields:
- `ext`: File extension
- `acodec`: Name of the audio codec in use
@@ -1409,7 +1409,7 @@ Also filtering work for comparisons `=` (equals), `^=` (starts with), `$=` (ends
- `format_id`: A short description of the format
- `language`: Language code
-Any string comparison may be prefixed with negation `!` in order to produce an opposite comparison, e.g. `!*=` (does not contain).
+Any string comparison may be prefixed with negation `!` in order to produce an opposite comparison, e.g. `!*=` (does not contain). The comparand of a string comparison needs to be quoted with either double or single quotes if it contains spaces or special characters other than `._-`.
Note that none of the aforementioned meta fields are guaranteed to be present since this solely depends on the metadata obtained by particular extractor, i.e. the metadata offered by the website. Any other field made available by the extractor can also be used for filtering.
@@ -1552,8 +1552,9 @@ $ yt-dlp -S "proto"
-# Download the best video with h264 codec, or the best video if there is no such video
-$ yt-dlp -f "(bv*[vcodec^=avc1]+ba) / (bv*+ba/b)"
+# Download the best video with either h264 or h265 codec,
+# or the best video if there is no such video
+$ yt-dlp -f "(bv*[vcodec~='^((he|a)vc|h26[45])']+ba) / (bv*+ba/b)"
# Download the best video with best codec no better than h264,
# or the best video with worst codec if there is no such video
diff --git a/yt_dlp/YoutubeDL.py b/yt_dlp/YoutubeDL.py
index 74684dea3f4..9892ed328e4 100644
--- a/yt_dlp/YoutubeDL.py
+++ b/yt_dlp/YoutubeDL.py
@@ -1842,15 +1842,21 @@ def _build_format_filter(self, filter_spec):
'^=': lambda attr, value: attr.startswith(value),
'$=': lambda attr, value: attr.endswith(value),
'*=': lambda attr, value: value in attr,
+ '~=': lambda attr, value: value.search(attr) is not None
}
str_operator_rex = re.compile(r'''(?x)\s*
(?P<key>[a-zA-Z0-9._-]+)\s*
- (?P<negation>!\s*)?(?P<op>%s)(?P<none_inclusive>\s*\?)?\s*
- (?P<value>[a-zA-Z0-9._-]+)\s*
+ (?P<negation>!\s*)?(?P<op>%s)\s*(?P<none_inclusive>\?\s*)?
+ (?P<quote>["'])?
+ (?P<value>(?(quote)(?:(?!(?P=quote))[^\\]|\\.)+|[\w.-]+))
+ (?(quote)(?P=quote))\s*
''' % '|'.join(map(re.escape, STR_OPERATORS.keys())))
m = str_operator_rex.fullmatch(filter_spec)
if m:
- comparison_value = m.group('value')
+ if m.group('op') == '~=':
+ comparison_value = re.compile(m.group('value'))
+ else:
+ comparison_value = re.sub(r'''\\([\\"'])''', r'\1', m.group('value'))
str_op = STR_OPERATORS[m.group('op')]
if m.group('negation'):
op = lambda attr, value: not str_op(attr, value)
| ## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
This adds a new format filter operator that allows filtering video formats by matching the string-based format fields against a regex.
Example usage: -f `bestvideo*[vcodec~="(he|a)vc|[hx]26[45]"]+bestaudio/best`
Alternatively, single quotes can be used to delimit the regex.
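
For illustration, the `~=` operator compiles the comparand and applies `re.search` to the field value. A simplified sketch of these semantics (the real parsing lives in `_build_format_filter`):

```python
import re

def regex_filter(formats, key, pattern, negate=False):
    """Keep formats whose string field `key` matches `pattern` (the `~=` semantics)."""
    rx = re.compile(pattern)
    return [f for f in formats
            if (rx.search(str(f.get(key) or '')) is not None) != negate]

# e.g. keep h264/h265-family codecs, mirroring vcodec~='^((he|a)vc|h26[45])'
formats = [{'vcodec': 'avc1.64001f'}, {'vcodec': 'vp9'}, {'vcodec': 'hevc'}]
print(regex_filter(formats, 'vcodec', r'^((he|a)vc|h26[45])'))
```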
This implements my feature request #2681. | https://api.github.com/repos/yt-dlp/yt-dlp/pulls/2698 | 2022-02-09T04:19:29Z | 2022-02-11T21:35:35Z | 2022-02-11T21:35:34Z | 2022-02-11T21:35:35Z | 1,007 | yt-dlp/yt-dlp | 7,651 |
DOC do not mention freenode.net in README.rst | diff --git a/README.rst b/README.rst
index b5ee90a304eff..4912793942d31 100644
--- a/README.rst
+++ b/README.rst
@@ -177,10 +177,10 @@ Communication
~~~~~~~~~~~~~
- Mailing list: https://mail.python.org/mailman/listinfo/scikit-learn
-- IRC channel: ``#scikit-learn`` at ``webchat.freenode.net``
- Gitter: https://gitter.im/scikit-learn/scikit-learn
- Twitter: https://twitter.com/scikit_learn
- Stack Overflow: https://stackoverflow.com/questions/tagged/scikit-learn
+- Github Discussions: https://github.com/scikit-learn/scikit-learn/discussions
- Website: https://scikit-learn.org
Citation
| Let's stop pointing our users to freenode after the [hostile takeover](https://en.wikipedia.org/wiki/Freenode#Ownership_change_and_conflict) of the platform.
I added GitHub Discussions instead. If there are any IRC addicts around here, we could always open an "official" scikit-learn channel on https://libera.chat/ or similar, but I don't think it's necessary if none of the maintainers plan to connect to it. | https://api.github.com/repos/scikit-learn/scikit-learn/pulls/20271 | 2021-06-15T16:23:38Z | 2021-06-15T16:51:35Z | 2021-06-15T16:51:35Z | 2021-06-16T09:25:34Z | 181 | scikit-learn/scikit-learn | 46,668 |
Update Ganjoor API url | diff --git a/README.md b/README.md
index 688e0baebb..5b1cf94581 100644
--- a/README.md
+++ b/README.md
@@ -287,7 +287,7 @@ API | Description | Auth | HTTPS | CORS |
| [Bible-api](https://bible-api.com/) | Free Bible API with multiple languages | No | Yes | Yes |
| [British National Bibliography](http://bnb.data.bl.uk/) | Books | No | No | Unknown |
| [Crossref Metadata Search](https://github.com/CrossRef/rest-api-doc) | Books & Articles Metadata | No | Yes | Unknown |
-| [Ganjoor](https://ganjgah.ir) | Classic Persian poetry works including access to related manuscripts, recitations and music tracks | `OAuth` | Yes | Yes |
+| [Ganjoor](https://api.ganjoor.net) | Classic Persian poetry works including access to related manuscripts, recitations and music tracks | `OAuth` | Yes | Yes |
| [Google Books](https://developers.google.com/books/) | Books | `OAuth` | Yes | Unknown |
| [GurbaniNow](https://github.com/GurbaniNow/api) | Fast and Accurate Gurbani RESTful API | No | Yes | Unknown |
| [Gutendex](https://gutendex.com/) | Web-API for fetching data from Project Gutenberg Books Library | No | Yes | Unknown |
| <!-- Thank you for taking the time to work on a Pull Request for this project! -->
<!-- To ensure your PR is dealt with swiftly please check the following: -->
- [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not have more than 100 characters
- [x] The description does not end with punctuation
- [x] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [x] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/3043 | 2022-02-03T06:33:12Z | 2022-02-07T06:10:24Z | 2022-02-07T06:10:24Z | 2022-02-07T06:10:25Z | 316 | public-apis/public-apis | 35,866 |
add ppocrv4 introduction | diff --git a/doc/doc_ch/PP-OCRv4_introduction.md b/doc/doc_ch/PP-OCRv4_introduction.md
new file mode 100644
index 0000000000..aba981022f
--- /dev/null
+++ b/doc/doc_ch/PP-OCRv4_introduction.md
@@ -0,0 +1,162 @@
+# PP-OCRv4
+
+- [1. Introduction](#1)
+- [2. Detection Optimization](#2)
+- [3. Recognition Optimization](#3)
+- [4. End-to-End Evaluation](#4)
+
+
+<a name="1"></a>
+## 1. Introduction
+
+PP-OCRv4 is a further upgrade on top of PP-OCRv3. The overall framework keeps the same pipeline as PP-OCRv3, while the detection and recognition models are optimized across multiple modules, including data, network architecture, and training strategy. The PP-OCRv4 system diagram is shown below:
+
+<div align="center">
+ <img src="../ppocr_v4/ppocrv4_framework.png" width="800">
+</div>
+
+
+From the perspective of algorithmic improvements, a total of 10 enhancements were made across the detection and recognition models:
+* Detection module:
+  * LCNetV3: a backbone network with higher accuracy
+  * PFHead: a parallel-branch fusion head structure
+  * DSR: dynamically increasing the shrink ratio during training
+  * CML: adding a KL-divergence loss between the outputs of the Student and Teacher networks
+* Recognition module:
+  * SVTR_LCNetV3: a backbone network with higher accuracy
+  * Lite-Neck: a lightweight Neck structure
+  * GTC-NRTR: a stable attention-based guidance branch
+  * Multi-Scale: a multi-scale training strategy
+  * DF: a data mining scheme
+  * DKD: a DKD distillation strategy
+
+In terms of results, accuracy improves substantially across multiple scenarios at comparable speed:
+* Chinese scenarios: over 4% improvement relative to the PP-OCRv3 Chinese model;
+* English and digit scenarios: 18% improvement compared with the PP-OCRv3 English model;
+* Multilingual scenarios: recognition for 80 languages is optimized, with average accuracy improved by more than 8%.
+
+
+<a name="2"></a>
+## 2. Detection Optimization
+
+The PP-OCRv4 detection model builds on the PP-OCRv3 detection model with optimizations in three areas: network architecture, training strategy, and distillation strategy. First, the PP-OCRv4 detection model replaces MobileNetV3 with PP-LCNetV3 and proposes the PFHead structure with parallel-branch fusion; second, the shrink ratio is adjusted dynamically during training; finally, PP-OCRv4 optimizes the CML distillation loss to further improve text detection.
+
+The ablation study is as follows:
+
+|No.|Strategy|Model Size|hmean|Speed (CPU + MKL-DNN)|
+|-|-|-|-|-|
+|baseline|PP-OCRv3|3.4M|78.84%|69ms|
+|baseline student|PP-OCRv3 student|3.4M|76.22%|69ms|
+|01|+PFHead|3.6M|76.97%|96ms|
+|02|+Dynamic Shrink Ratio|3.6M|78.24%|96ms|
+|03|+PP-LCNetv3|4.8M|79.08%|94ms|
+|04|+CML|4.8M|79.87%|67ms|
+
+Test environment: Intel Gold 6148 CPU, with OpenVINO as the inference engine.
+
+**(1) PFHead: a multi-branch fusion head structure**
+
+The PFHead structure is shown in the figure below. After the first transposed convolution, PFHead performs upsampling and a transposed convolution in parallel. The upsampled output passes through a 3x3 convolution, and the result is concatenated with the output of the transposed-convolution branch and fed through a 1x1 convolution layer; finally, the output of this 1x1 convolution is added to the transposed-convolution output to obtain the final probability map. With PFHead, the hmean of the PP-OCRv4 student detection model increases from 76.22% to 76.97%.
+
+<div align="center">
+ <img src="../ppocr_v4/PFHead.png" width="500">
+</div>
+
+**(2) DSR: a dynamic shrink-ratio adjustment strategy**
+
+Dynamic shrink ratio: during training, the shrink ratio is changed from a fixed value to a dynamic one, increasing linearly from 0.4 to 0.6 as training epochs progress. On the PP-OCRv4 student detection model, this strategy raises the hmean from 76.97% to 78.24%.
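+
+A minimal sketch of this schedule (illustrative; the exact linear ramp below is an assumption, not the PaddleOCR implementation):
+
+```python
+def dynamic_shrink_ratio(epoch, total_epochs, start=0.4, end=0.6):
+    """Linearly increase the shrink ratio from `start` to `end` during training."""
+    t = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)  # training progress in [0, 1]
+    return start + (end - start) * t
+```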
+
+**(3) PP-LCNetV3: a backbone network with higher accuracy**
+
+The PP-LCNetV3 series is a continuation of the PP-LCNet series, covering a wider accuracy range and adapting to the needs of different downstream tasks. The PP-LCNetV3 models are optimized in several respects: a learnable affine-transformation module is introduced, the re-parameterization strategy and activation functions are improved, and the network depth and width are adjusted. As a result, the PP-LCNetV3 series reaches the best balance between performance and efficiency, achieving extreme inference speed across different accuracy ranges. Replacing the MobileNetV3 backbone with PP-LCNetV3 raises the hmean of the PP-OCRv4 student detection model from 78.24% to 79.08%.
+
+**(4) CML: a mutual-learning strategy fused with knowledge distillation**
+
+The PP-OCRv4 detection model optimizes the CML (Collaborative Mutual Learning) text detection distillation strategy from PP-OCRv3. As shown in the figure below, when computing the distillation loss between the Student and Teacher models, an additional KL-divergence loss is added so that the distributions of their output response maps stay close, further improving the Student network's accuracy: the detection hmean increases from 79.08% to 79.56%, and the end-to-end metric increases from 61.31% to 61.87%.
+
+<div align="center">
+ <img src="../ppocr_v4/ppocrv4_det_cml.png" width="500">
+</div>
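+
+A minimal numpy sketch of the added KL term (one plausible form; the actual loss implementation lives in the PaddleOCR distillation code):
+
+```python
+import numpy as np
+
+def kl_div_response_maps(student_logits, teacher_logits, eps=1e-8):
+    """KL(teacher || student) over flattened response maps, averaged over the batch."""
+    def softmax(x):
+        x = x - x.max(axis=-1, keepdims=True)  # numerical stability
+        e = np.exp(x)
+        return e / e.sum(axis=-1, keepdims=True)
+    p = softmax(teacher_logits.reshape(len(teacher_logits), -1))
+    q = softmax(student_logits.reshape(len(student_logits), -1))
+    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))
+```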
+
+<a name="3"></a>
+## 3. Recognition Optimization
+
+The recognition module of PP-OCRv3 was optimized based on the text recognition algorithm [SVTR](https://arxiv.org/abs/2205.00159). SVTR abandons the RNN structure and introduces a Transformer structure instead, mining the contextual information of text-line images more effectively and thereby improving recognition ability. Directly replacing the PP-OCRv2 recognition model with SVTR_Tiny raised recognition accuracy from 74.8% to 80.1% (+5.3%), but inference became nearly 11x slower, taking almost 100 ms per text line on CPU. Therefore, as shown in the figure below, PP-OCRv4 adopts the following six optimization strategies to accelerate the recognition model.
+
+<div align="center">
+ <img src="../ppocr_v4/v4_rec_pipeline.png" width=800>
+</div>
+
+Based on the above strategies, the PP-OCRv4 recognition model improves accuracy by a further 4% over PP-OCRv3 at comparable speed. The detailed ablation study is shown below:
+
+| ID | Strategy | Model Size | Accuracy | Inference Time (CPU, OpenVINO) |
+|-----|-----|--------|----| --- |
+| 01 | PP-OCRv3 | 12M | 71.50% | 8.54ms |
+| 02 | +DF | 12M | 72.70% | 8.54ms |
+| 03 | + LiteNeck + GTC | 9.6M | 73.21% | 9.09ms |
+| 04 | + PP-LCNetV3 | 11M | 74.18% | 9.8ms |
+| 05 | + multi-scale | 11M | 74.20% | 9.8ms |
+| 06 | + TextConAug | 11M | 74.72% | 9.8ms |
+| 08 | + UDML | 11M | 75.45% | 9.8ms |
+
+Note: when measuring speed, the input image size is (3, 48, 320) in all cases. In real inference, images are variable-length inputs, so the speed will vary. Test environment: Intel Gold 6148 CPU, with the OpenVINO inference engine.
+
+**(1) DF: a data mining scheme**
+
+DF (Data Filter) is a simple and effective data mining scheme. The core idea is to run existing models over the training data and filter the full dataset using information such as confidence scores and prediction results. Specifically: first, a low-accuracy model is quickly trained on a small amount of data and used to predict on tens of millions of samples, and samples with confidence above 0.95 are removed; this portion is considered redundant data that does not help improve model accuracy. Next, PP-OCRv3 is used as a high-accuracy model to predict on the remaining data, and samples with confidence below 0.15 are removed; this portion is considered hard to recognize or of very poor quality.
+With this strategy, the training data shrinks from tens of millions of samples to the million level, significantly improving training efficiency: training time drops from 2 weeks to 5 days, while accuracy rises to 72.7% (+1.2%).
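+
+The filtering rule itself is simple; a sketch (the model interfaces returning `(text, confidence)` are assumptions):
+
+```python
+def data_filter(samples, weak_model, strong_model, hi=0.95, lo=0.15):
+    """Two-stage confidence filtering behind DF (illustrative only)."""
+    # Stage 1: drop samples the weak model already solves confidently (redundant data).
+    kept = [s for s in samples if weak_model(s)[1] <= hi]
+    # Stage 2: drop samples the strong model can barely read (noisy or too hard).
+    return [s for s in kept if strong_model(s)[1] >= lo]
+```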
+
+
+<div align="center">
+ <img src="../ppocr_v4/DF.png" width=800>
+</div>
+
+
+**(2) PP-LCNetV3: a backbone network with better accuracy**
+
+The PP-LCNetV3 series is a continuation of the PP-LCNet series, covering a wider accuracy range and adapting to the needs of different downstream tasks. The PP-LCNetV3 models are optimized in several respects: a learnable affine-transformation module is introduced, the re-parameterization strategy and activation functions are improved, and the network depth and width are adjusted. As a result, the PP-LCNetV3 series reaches the best balance between performance and efficiency, achieving extreme inference speed across different accuracy ranges.
+
+**(3) Lite-Neck: a Neck structure with fewer parameters**
+
+The overall Lite-Neck structure follows the PP-OCRv3 version, with its parameters slightly trimmed: the overall size of the recognition model drops from 12M to 8.5M with unchanged accuracy. In the CTCHead, the dimension of the Neck output features is raised from 64 to 120, which increases the model size from 8.5M to 9.6M and improves accuracy by 0.5%.
+
+
+**(4) GTC-NRTR: an attention-guided CTC training strategy**
+
+GTC (Guided Training of CTC) is a strategy already used in PP-OCRv3; it fuses multiple representations of text features and effectively improves text recognition accuracy. In PP-OCRv4, NRTR, a Transformer model that trains more stably, is used as the guide. Compared with SAR, which is built on recurrent neural networks, NRTR's Transformer-based decoding generalizes better and can effectively guide the learning of the CTC branch, solving the problem of rapid overfitting in simple scenarios. With the model size unchanged, recognition accuracy rises to 73.21% (+0.5%).
+
+<div align="center">
+ <img src="../ppocr_v4/ppocrv4_gtc.png" width="500">
+</div>
+
+
+**(5) Multi-Scale: a multi-scale training strategy**
+
+The dynamic-scale training strategy randomly resizes the height of input images during training to increase the model's robustness. One of three heights (32, 48, 64) is randomly chosen for resizing during training. Experiments show that evaluation accuracy on the test set does not drop, while the end-to-end pipeline metric improves by 0.5%.
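+
+A sketch of the sampling step (OpenCV is used for illustration; any resize routine works):
+
+```python
+import random
+import cv2
+
+def multi_scale_resize(img, heights=(32, 48, 64)):
+    """Randomly pick a target height and resize, preserving the aspect ratio."""
+    h = random.choice(heights)
+    w = max(int(round(img.shape[1] * h / img.shape[0])), 1)
+    return cv2.resize(img, (w, h))
+```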
+
+<div align="center">
+ <img src="../ppocr_v4/multi_scale.png" width="500">
+</div>
+
+
+**(6) DKD: a distillation strategy**
+
+Distillation of the recognition model consists of two parts: NRTR head distillation and CTC head distillation.
+
+For the NRTR head, the DKD loss is used for distillation so that the logits output by the student model's NRTR head stay close to the teacher's. The final NRTR head loss is a weighted sum of the student-teacher DKD loss and the cross-entropy loss against the ground truth, and it supervises the training of the student model's backbone. Experiments showed that, once the DKD loss is added, removing label smoothing when computing the cross-entropy loss against the ground truth further improves accuracy, so a cross-entropy loss without label smoothing is used here.
+
+For the CTCHead, because CTC output contains blank positions, the distributions of the teacher's and student's output logits can differ even when their predictions are identical, which hinders knowledge transfer from teacher to student. In the PP-OCRv4 recognition distillation strategy, the CTC output logits are averaged along the text-length dimension, turning the multi-character recognition problem into a multi-character classification problem used to supervise the CTC head. Combining this strategy with the NRTR head DKD distillation raises the metric from 0.7377 to 0.7545.
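+
+A sketch of the length-averaging trick (the axis layout is an assumption):
+
+```python
+import numpy as np
+
+def ctc_logits_for_distill(logits):
+    """Average CTC logits over the time axis so teacher/student distributions align.
+
+    logits: (batch, seq_len, num_classes) -> (batch, num_classes)
+    """
+    return logits.mean(axis=1)
+```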
+
+
+
+<a name="4"></a>
+## 4. End-to-End Evaluation
+
+With the above optimizations, PP-OCRv4 improves the end-to-end hmean metric in Chinese scenarios by 4.5% over PP-OCRv3 at comparable speed, a substantial gain. The detailed metrics are shown in the table below:
+
+| Model | Hmean | Model Size (M) | Time Cost (CPU, ms) |
+|-----|-----|--------|----|
+| PP-OCRv3 | 57.99% | 15.6 | 78 |
+| PP-OCRv4 | 62.24% | 15.8 | 76 |
+
+Test environment: Intel Gold 6148 CPU, with OpenVINO used for CPU inference.
diff --git a/doc/ppocr_v4/DF.png b/doc/ppocr_v4/DF.png
new file mode 100644
index 0000000000..f14953d481
Binary files /dev/null and b/doc/ppocr_v4/DF.png differ
diff --git a/doc/ppocr_v4/PFHead.png b/doc/ppocr_v4/PFHead.png
new file mode 100644
index 0000000000..3728dc44e5
Binary files /dev/null and b/doc/ppocr_v4/PFHead.png differ
diff --git a/doc/ppocr_v4/multi_scale.png b/doc/ppocr_v4/multi_scale.png
new file mode 100644
index 0000000000..673d306399
Binary files /dev/null and b/doc/ppocr_v4/multi_scale.png differ
diff --git a/doc/ppocr_v4/ppocrv4_det_cml.png b/doc/ppocr_v4/ppocrv4_det_cml.png
new file mode 100644
index 0000000000..9132c0a67c
Binary files /dev/null and b/doc/ppocr_v4/ppocrv4_det_cml.png differ
diff --git a/doc/ppocr_v4/ppocrv4_framework.png b/doc/ppocr_v4/ppocrv4_framework.png
new file mode 100644
index 0000000000..4aac40bae8
Binary files /dev/null and b/doc/ppocr_v4/ppocrv4_framework.png differ
diff --git a/doc/ppocr_v4/ppocrv4_gtc.png b/doc/ppocr_v4/ppocrv4_gtc.png
new file mode 100644
index 0000000000..7e6a3f5c13
Binary files /dev/null and b/doc/ppocr_v4/ppocrv4_gtc.png differ
diff --git a/doc/ppocr_v4/v4_rec_pipeline.png b/doc/ppocr_v4/v4_rec_pipeline.png
new file mode 100644
index 0000000000..b1ec7a9689
Binary files /dev/null and b/doc/ppocr_v4/v4_rec_pipeline.png differ
| att
| https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/10122 | 2023-06-08T12:34:27Z | 2023-06-09T02:08:31Z | 2023-06-09T02:08:31Z | 2023-06-09T02:08:31Z | 3,906 | PaddlePaddle/PaddleOCR | 42,127 |
Add Paddle exports to benchmarks | diff --git a/benchmarks.py b/benchmarks.py
index 58e083c95d5..161af73c1ed 100644
--- a/benchmarks.py
+++ b/benchmarks.py
@@ -65,7 +65,7 @@ def run(
model_type = type(attempt_load(weights, fuse=False)) # DetectionModel, SegmentationModel, etc.
for i, (name, f, suffix, cpu, gpu) in export.export_formats().iterrows(): # index, (name, file, suffix, CPU, GPU)
try:
- assert i not in (9, 10, 11), 'inference not supported' # Edge TPU, TF.js and Paddle are unsupported
+ assert i not in (9, 10), 'inference not supported' # Edge TPU and TF.js are unsupported
assert i != 5 or platform.system() == 'Darwin', 'inference only supported on macOS>=10.13' # CoreML
if 'cpu' in device.type:
assert cpu, 'inference not supported on CPU'
diff --git a/models/common.py b/models/common.py
index 9c08120fe7f..2b61307ad46 100644
--- a/models/common.py
+++ b/models/common.py
@@ -460,8 +460,8 @@ def wrap_frozen_graph(gd, inputs, outputs):
if cuda:
config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0)
predictor = pdi.create_predictor(config)
- input_names = predictor.get_input_names()
- input_handle = predictor.get_input_handle(input_names[0])
+ input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
+ output_names = predictor.get_output_names()
else:
raise NotImplementedError(f'ERROR: {w} is not a supported format')
@@ -517,12 +517,10 @@ def forward(self, im, augment=False, visualize=False):
k = 'var_' + str(sorted(int(k.replace('var_', '')) for k in y)[-1]) # output key
y = y[k] # output
elif self.paddle: # PaddlePaddle
- im = im.cpu().numpy().astype("float32")
+ im = im.cpu().numpy().astype(np.float32)
self.input_handle.copy_from_cpu(im)
self.predictor.run()
- output_names = self.predictor.get_output_names()
- output_handle = self.predictor.get_output_handle(output_names[0])
- y = output_handle.copy_to_cpu()
+ y = [self.predictor.get_output_handle(x).copy_to_cpu() for x in self.output_names]
else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3)
if self.saved_model: # SavedModel
diff --git a/utils/segment/plots.py b/utils/segment/plots.py
index e882c14390f..9b90900b377 100644
--- a/utils/segment/plots.py
+++ b/utils/segment/plots.py
@@ -99,9 +99,9 @@ def plot_images_and_masks(images, targets, masks, paths=None, fname='images.jpg'
if mh != h or mw != w:
mask = image_masks[j].astype(np.uint8)
mask = cv2.resize(mask, (w, h))
- mask = mask.astype(np.bool)
+ mask = mask.astype(bool)
else:
- mask = image_masks[j].astype(np.bool)
+ mask = image_masks[j].astype(bool)
with contextlib.suppress(Exception):
im[y:y + h, x:x + w, :][mask] = im[y:y + h, x:x + w, :][mask] * 0.4 + np.array(color) * 0.6
annotator.fromarray(im)
| Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
<!--
Thank you for submitting a YOLOv5 🚀 Pull Request! We want to make contributing to YOLOv5 as easy and transparent as possible. A few tips to get you started:
- Search existing YOLOv5 [PRs](https://github.com/ultralytics/yolov5/pull) to see if a similar PR already exists.
- Link this PR to a YOLOv5 [issue](https://github.com/ultralytics/yolov5/issues) to help us understand what bug fix or feature is being implemented.
- Provide before and after profiling/inference/training results to help us quantify the improvement your PR provides (if applicable).
Please see our ✅ [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) for more details.
-->
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Enhancements to YOLOv5 inference support and data processing.
### 📊 Key Changes
- Removed PaddlePaddle's output handle redundancies for efficiency.
- Enabled inference on all export formats except for Edge TPU and TF.js.
- Improved data type consistency and inference support across different formats.
- Simplified the way PaddlePaddle outputs are handled during forward passes.
- Updated mask data types from `np.bool` to Python's built-in `bool` for better compatibility.
### 🎯 Purpose & Impact
- 🚀 Enhances the model's compatibility and performance with various export formats, except for Edge TPU and TF.js, expanding the range of supported devices and platforms.
- 💡 Simplifies PaddlePaddle's code for output processing, potentially improving maintainability and reducing the likelihood of future bugs.
- 🧠 Changes to data types promote consistency across the codebase, likely reducing errors and ensuring smoother data handling processes.
- 🖼 Updates to image and mask plotting improve reliability and follow best practices in type usage, potentially affecting all visual outputs or segmented images, enhancing visualization quality. | https://api.github.com/repos/ultralytics/yolov5/pulls/9459 | 2022-09-17T21:14:42Z | 2022-09-17T22:17:31Z | 2022-09-17T22:17:31Z | 2024-01-19T05:48:44Z | 888 | ultralytics/yolov5 | 25,621 |
Magic commands %% (Shell) and %info added | diff --git a/interpreter/terminal_interface/magic_commands.py b/interpreter/terminal_interface/magic_commands.py
index f67e203ec..660e07a9b 100644
--- a/interpreter/terminal_interface/magic_commands.py
+++ b/interpreter/terminal_interface/magic_commands.py
@@ -4,6 +4,7 @@
from .utils.count_tokens import count_messages_tokens
from .utils.display_markdown_message import display_markdown_message
+from ..core.utils.system_debug_info import system_info
def handle_undo(self, arguments):
@@ -44,6 +45,7 @@ def handle_undo(self, arguments):
def handle_help(self, arguments):
commands_description = {
+ "%% [commands]": "Run commands in system shell",
"%debug [true/false]": "Toggle debug mode. Without arguments or with 'true', it enters debug mode. With 'false', it exits debug mode.",
"%reset": "Resets the current session.",
"%undo": "Remove previous messages and its response from the message history.",
@@ -51,6 +53,7 @@ def handle_help(self, arguments):
"%load_message [path]": "Loads messages from a specified JSON path. If no path is provided, it defaults to 'messages.json'.",
"%tokens [prompt]": "EXPERIMENTAL: Calculate the tokens used by the next request based on the current conversation's messages and estimate the cost of that request; optionally provide a prompt to also calulate the tokens used by that prompt and the total amount of tokens that will be sent with the next request",
"%help": "Show this help message.",
+ "%info": "Show system and interpreter information",
}
base_message = ["> **Available Commands:**\n\n"]
@@ -80,6 +83,9 @@ def handle_debug(self, arguments=None):
else:
display_markdown_message("> Unknown argument to debug command.")
+def handle_info(self, arguments):
+ system_info(self)
+
def handle_reset(self, arguments):
self.reset()
@@ -156,19 +162,20 @@ def handle_count_tokens(self, prompt):
display_markdown_message("\n".join(outputs))
+def handle_shell(self, code):
+ result = subprocess.run(code, shell=True, capture_output=True)
+
+ if result.stdout:
+ print(result.stdout.decode())
+
+ if result.stderr:
+ print(result.stderr.decode())
def handle_magic_command(self, user_input):
# Handle shell
if user_input.startswith("%%"):
- # This is not implemented yet.
- print("%% magic command not supported yet.")
+ handle_shell(self,user_input[2:])
return
- # user_input = user_input[2:].split()
- # # Run as shell
- # for chunk in self.computer.run("shell", user_input):
- # if "output" in chunk:
- # print(chunk["output"]["content"]) # Just print it for now. Should hook up to TUI later
- # return
# split the command into the command and the arguments, by the first whitespace
switch = {
@@ -179,6 +186,7 @@ def handle_magic_command(self, user_input):
"load_message": handle_load_message,
"undo": handle_undo,
"tokens": handle_count_tokens,
+ "info": handle_info,
}
user_input = user_input[1:].strip() # Capture the part after the `%`
| ### Describe the changes you have made:
Added magic command %info to show system and interpreter information.
Added magic command %% to run shell commands from the interpreter console; a usage sketch follows.
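
A minimal illustration of the new commands (the `interpreter` instance is an assumption; `handle_magic_command` matches the signature added in this PR):

```python
handle_magic_command(interpreter, "%% echo hello")  # runs `echo hello` in the system shell
handle_magic_command(interpreter, "%info")          # prints system and interpreter details
```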
### Reference any relevant issue (Fixes #000)
Saw %% Shell command in the roadmap file.
- [x] I have performed a self-review of my code:
### I have tested the code on the following OS:
- [x] Windows
- [ ] MacOS
- [ ] Linux
### AI Language Model (if applicable)
- [ ] GPT4
- [ ] GPT3
- [ ] Llama 7B
- [ ] Llama 13B
- [ ] Llama 34B
- [ ] Huggingface model (Please specify which one)
| https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/780 | 2023-11-21T04:55:54Z | 2023-11-21T15:28:24Z | 2023-11-21T15:28:24Z | 2023-11-21T22:57:07Z | 751 | OpenInterpreter/open-interpreter | 40,830 |
Update ui.rst | diff --git a/docs/apache-airflow/ui.rst b/docs/apache-airflow/ui.rst
index 1889f6883fc8a..5b416589283d1 100644
--- a/docs/apache-airflow/ui.rst
+++ b/docs/apache-airflow/ui.rst
@@ -76,7 +76,7 @@ Task groups are indicated by a caret and can be opened or closed:
.. image:: img/grid_task_group.png
-Mapped Tasks are indicated by a square brackets and will show a table of each mapped task instance in the details panel:
+Mapped Tasks are indicated by square brackets and will show a table of each mapped task instance in the details panel:
.. image:: img/grid_mapped_task.png
@@ -159,7 +159,7 @@ provide yet more context.
Task Instance Context Menu
..........................
-From the pages seen above (tree view, graph view, gantt, ...), it is always
+From the pages seen above (grid view, graph view, gantt, ...), it is always
possible to click on a task instance, and get to this rich context menu
that can take you to more detailed metadata, and perform some actions.
| Fix minor typos
<!--
Thank you for contributing! Please make sure that your code changes
are covered with tests. And in case of new features or big changes
remember to adjust the documentation.
Feel free to ping committers for the review!
In case of existing issue, reference it using one of the following:
closes: #ISSUE
related: #ISSUE
How to write a good git commit message:
http://chris.beams.io/posts/git-commit/
-->
---
**^ Add meaningful description above**
Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#pull-request-guidelines)** for more information.
In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
In case of backwards incompatible changes please leave a note in a newsfragement file, named `{pr_number}.significant.rst`, in [newsfragments](https://github.com/apache/airflow/tree/main/newsfragments).
| https://api.github.com/repos/apache/airflow/pulls/24514 | 2022-06-17T07:10:18Z | 2022-06-17T11:10:49Z | 2022-06-17T11:10:49Z | 2022-07-01T06:33:37Z | 253 | apache/airflow | 14,665 |
import os | diff --git a/Insecure Deserialization/Python.md b/Insecure Deserialization/Python.md
index 41887f6531..563db1cf90 100644
--- a/Insecure Deserialization/Python.md
+++ b/Insecure Deserialization/Python.md
@@ -3,6 +3,7 @@
## Pickle
The following code is a simple example of using `cPickle` in order to generate an auth_token which is a serialized User object.
+:warning: `import cPickle` will only work on Python 2
```python
import cPickle
@@ -32,7 +33,7 @@ Python 2.7 documentation clearly states Pickle should never be used with untrust
> The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
```python
-import cPickle
+import cPickle, os
from base64 import b64encode, b64decode
class Evil(object):
@@ -47,4 +48,4 @@ print("Your Evil Token : {}").format(evil_token)
## References
* [Exploiting misuse of Python's "pickle" - Mar 20, 2011](https://blog.nelhage.com/2011/03/exploiting-pickle/)
-* [Python Pickle Injection - Apr 30, 2017](http://xhyumiracle.com/python-pickle-injection/)
\ No newline at end of file
+* [Python Pickle Injection - Apr 30, 2017](http://xhyumiracle.com/python-pickle-injection/)
| https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/486 | 2022-03-24T06:09:47Z | 2022-04-18T18:58:40Z | 2022-04-18T18:58:40Z | 2022-04-18T18:58:40Z | 352 | swisskyrepo/PayloadsAllTheThings | 8,392 |
|
[requires.io] dependency update on main branch | diff --git a/setup.py b/setup.py
index 222fea912b..073f0cbbcd 100644
--- a/setup.py
+++ b/setup.py
@@ -66,7 +66,7 @@
# https://packaging.python.org/en/latest/requirements/#install-requires
# It is not considered best practice to use install_requires to pin dependencies to specific versions.
install_requires=[
- "asgiref>=3.2.10,<3.4",
+ "asgiref>=3.2.10,<3.5",
"blinker>=1.4, <1.5",
"Brotli>=1.0,<1.1",
"certifi>=2019.9.11", # no semver here - this should always be on the last release!
| https://api.github.com/repos/mitmproxy/mitmproxy/pulls/4664 | 2021-06-27T20:40:59Z | 2021-06-28T17:03:31Z | 2021-06-28T17:03:31Z | 2021-06-28T17:03:34Z | 180 | mitmproxy/mitmproxy | 27,721 |
|
Backport/2.7/51953 | diff --git a/changelogs/fragments/51953-onepassword_facts-bug-fixes.yaml b/changelogs/fragments/51953-onepassword_facts-bug-fixes.yaml
new file mode 100644
index 00000000000000..ae32c5bdedbbc6
--- /dev/null
+++ b/changelogs/fragments/51953-onepassword_facts-bug-fixes.yaml
@@ -0,0 +1,2 @@
+bugfixes:
+ - onepassword_facts - Fixes issues which prevented this module working with 1Password CLI version 0.5.5 (or greater). Older versions of the CLI were deprecated by 1Password and will no longer function.
diff --git a/lib/ansible/modules/identity/onepassword_facts.py b/lib/ansible/modules/identity/onepassword_facts.py
index 85333bd0d7aa36..016d964ed84041 100644
--- a/lib/ansible/modules/identity/onepassword_facts.py
+++ b/lib/ansible/modules/identity/onepassword_facts.py
@@ -22,7 +22,7 @@
- Ryan Conway (@rylon)
version_added: "2.7"
requirements:
- - C(op) 1Password command line utility (v0.5.1). See U(https://support.1password.com/command-line/)
+ - C(op) 1Password command line utility (v0.5.5). See U(https://support.1password.com/command-line/)
notes:
- "Based on the C(onepassword) lookup plugin by Scott Buchanan <sbuchanan@ri.pn>."
short_description: Fetch facts from 1Password items
@@ -146,6 +146,9 @@ class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
+ def __str__(self):
+ return self.results
+
def __repr__(self):
return self.results
@@ -155,16 +158,22 @@ class OnePasswordFacts(object):
def __init__(self):
self.cli_path = module.params.get('cli_path')
self.auto_login = module.params.get('auto_login')
- self.token = {}
+ self.token = None
terms = module.params.get('search_terms')
self.terms = self.parse_search_terms(terms)
def _run(self, args, expected_rc=0, command_input=None, ignore_errors=False):
+ if self.token:
+ # Adds the session token to all commands if we're logged in.
+ args += [to_bytes('--session=') + self.token]
+
command = [self.cli_path] + args
+
p = Popen(command, stdout=PIPE, stderr=PIPE, stdin=PIPE)
out, err = p.communicate(input=command_input)
rc = p.wait()
+
if not ignore_errors and rc != expected_rc:
raise AnsibleModuleError(to_native(err))
return rc, out, err
@@ -174,8 +183,8 @@ def _parse_field(self, data_json, item_id, field_name, section_title=None):
if ('documentAttributes' in data['details']):
# This is actually a document, let's fetch the document data instead!
- document = self._run(["get", "document", data['overview']['title']])
- return {'document': document[0].strip()}
+ rc, output, error = self._run(["get", "document", data['overview']['title']])
+ return {'document': output.strip()}
else:
# This is not a document, let's try to find the requested field
@@ -259,15 +268,14 @@ def assert_logged_in(self):
if re.search(".*You are not currently signed in.*", str(e)) is not None:
if (self.auto_login is not None):
try:
- token = self._run([
+ rc, out, err = self._run([
"signin", "%s.1password.com" % self.auto_login['account'],
self.auto_login['username'],
self.auto_login['secretkey'],
- self.auto_login['masterpassword'],
"--shorthand=ansible_%s" % self.auto_login['account'],
"--output=raw"
- ])
- self.token = {'OP_SESSION_ansible_%s' % self.auto_login['account']: token[0].strip()}
+ ], command_input=self.auto_login['masterpassword'])
+ self.token = out.strip()
except Exception as e:
module.fail_json(msg="Unable to automatically login to 1Password: %s " % e)
@@ -282,7 +290,7 @@ def get_raw(self, item_id, vault=None):
args = ["get", "item", item_id]
if vault is not None:
args += ['--vault={0}'.format(vault)]
- output, dummy = self._run(args)
+ rc, output, err = self._run(args)
return output
except Exception as e:
@@ -295,52 +303,6 @@ def get_field(self, item_id, field, section=None, vault=None):
output = self.get_raw(item_id, vault)
return self._parse_field(output, item_id, field, section) if output != '' else ''
- def _run(self, args, expected_rc=0):
- # Duplicates the current shell environment before running 'op', so we get the same PATH the user has,
- # but we merge in the auth token dictionary, allowing the auto-login functionality to work (if enabled).
- env = {}
- env.update(os.environ.copy())
- env.update(self.token)
-
- p = Popen([self.cli_path] + args, stdout=PIPE, stderr=PIPE, stdin=PIPE, env=env)
- out, err = p.communicate()
-
- rc = p.wait()
-
- if rc != expected_rc:
- raise Exception(err)
-
- return out, err
-
- def _parse_field(self, data_json, item_id, field_name, section_title=None):
- data = json.loads(data_json)
-
- if ('documentAttributes' in data['details']):
- # This is actually a document, let's fetch the document data instead!
- document = self._run(["get", "document", data['overview']['title']])
- return {'document': document[0].strip()}
-
- else:
- # This is not a document, let's try to find the requested field
- if section_title is None:
- for field_data in data['details'].get('fields', []):
- if field_data.get('name').lower() == field_name.lower():
- return {field_name: field_data.get('value', '')}
-
- # Not found it yet, so now lets see if there are any sections defined
- # and search through those for the field. If a section was given, we skip
- # any non-matching sections, otherwise we search them all until we find the field.
- for section_data in data['details'].get('sections', []):
- if section_title is not None and section_title.lower() != section_data['title'].lower():
- continue
- for field_data in section_data.get('fields', []):
- if field_data.get('t').lower() == field_name.lower():
- return {field_name: field_data.get('v', '')}
-
- # We will get here if the field could not be found in any section and the item wasn't a document to be downloaded.
- optional_section_title = '' if section_title is None else " in the section '%s'" % section_title
- module.fail_json(msg="Unable to find an item in 1Password named '%s' with the field '%s'%s." % (item_id, field_name, optional_section_title))
-
def main():
global module
| ##### SUMMARY
This is the requested backport pull request for #51953.
It fixes issues which prevented this module working with 1Password CLI version 0.5.5 (or greater). Older versions of the CLI were deprecated by 1Password and will no longer function.
##### ISSUE TYPE
- Backport Pull Request
##### COMPONENT NAME
onepassword_facts | https://api.github.com/repos/ansible/ansible/pulls/53657 | 2019-03-11T20:31:01Z | 2019-03-11T22:18:41Z | 2019-03-11T22:18:40Z | 2019-07-25T17:08:55Z | 1,721 | ansible/ansible | 49,530 |
run rst2pseudoxml.py with shell=true | diff --git a/tests/test_docs.py b/tests/test_docs.py
index 766810afd8..3966f81d94 100644
--- a/tests/test_docs.py
+++ b/tests/test_docs.py
@@ -34,7 +34,8 @@ def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
- stdout=subprocess.PIPE
+ stdout=subprocess.PIPE,
+ shell=True
)
err = p.communicate()[1]
assert p.returncode == 0, err.decode('utf8')
| Makes rst2pseudoxml.py work properly on Windows:
it now executes via a shell, so the script launches through the .py file association instead of failing.
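
A sketch of the difference (Windows behavior; note that on POSIX, passing a list together with `shell=True` has different semantics):

```python
import subprocess

# With shell=True on Windows, the argument list is joined and run through
# cmd.exe, which honors the .py file association and starts the interpreter;
# without it, CreateProcess cannot execute a .py file directly.
p = subprocess.Popen(
    ['rst2pseudoxml.py', '--report=1', '--exit-status=1', 'README.rst'],  # illustrative filename
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True,
)
err = p.communicate()[1]
```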
This also requires that the .py extension be associated with the Python interpreter. | https://api.github.com/repos/httpie/cli/pulls/821 | 2019-12-04T10:42:36Z | 2019-12-04T12:33:14Z | 2019-12-04T12:33:14Z | 2019-12-04T12:33:17Z | 141 | httpie/cli | 34,164 |
fix bug | diff --git a/doc/doc_ch/ppocr_introduction.md b/doc/doc_ch/ppocr_introduction.md
index d9f9b533fa..1955426438 100644
--- a/doc/doc_ch/ppocr_introduction.md
+++ b/doc/doc_ch/ppocr_introduction.md
@@ -116,7 +116,7 @@ PP-OCR中英文模型列表如下:
| 模型简介 | 模型名称 | 推荐场景 | 检测模型 | 方向分类器 | 识别模型 |
| ------------------------------------- | ----------------------- | --------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 中英文超轻量PP-OCRv3模型(16.2M) | ch_PP-OCRv3_xx | 移动端&服务器端 | [推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_distill_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_train.tar) |
-| 英文超轻量PP-OCRv3模型(13.4M) | en_PP-OCRv3_xx | 移动端&服务器端 | [推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_distill_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/ch_ppocr_mobile_v2.0_cls_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/ch_ppocr_mobile_v2.0_cls_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_train.tar) |
+| 英文超轻量PP-OCRv3模型(13.4M) | en_PP-OCRv3_xx | 移动端&服务器端 | [推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_distill_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_train.tar) |
| 中英文超轻量PP-OCRv2模型(13.0M) | ch_PP-OCRv2_xx | 移动端&服务器端 | [推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_distill_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_train.tar) |
| 中英文超轻量PP-OCR mobile模型(9.4M) | ch_ppocr_mobile_v2.0_xx | 移动端&服务器端 | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_pre.tar) |
| 中英文通用PP-OCR server模型(143.4M) | ch_ppocr_server_v2.0_xx | 服务器端 | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_pre.tar) |
diff --git a/doc/doc_en/ppocr_introduction_en.md b/doc/doc_en/ppocr_introduction_en.md
index 6483d3bf7a..ba3fbb076c 100644
--- a/doc/doc_en/ppocr_introduction_en.md
+++ b/doc/doc_en/ppocr_introduction_en.md
@@ -106,7 +106,7 @@ For more tutorials, including model training, model compression, deployment, etc
| Model introduction | Model name | Recommended scene | Detection model | Direction classifier | Recognition model |
| ------------------------------------------------------------ | ---------------------------- | ----------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| Chinese and English ultra-lightweight PP-OCRv3 model(16.2M) | ch_PP-OCRv3_xx | Mobile & Server | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_distill_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_train.tar) |
-| English ultra-lightweight PP-OCRv3 model(13.4M) | en_PP-OCRv3_xx | Mobile & Server | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_distill_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/ch_ppocr_mobile_v2.0_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_train.tar) |
+| English ultra-lightweight PP-OCRv3 model(13.4M) | en_PP-OCRv3_xx | Mobile & Server | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_distill_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_train.tar) |
| Chinese and English ultra-lightweight PP-OCRv2 model(11.6M) | ch_PP-OCRv2_xx |Mobile & Server|[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_distill_train.tar)| [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_train.tar)|
| Chinese and English ultra-lightweight PP-OCR model (9.4M) | ch_ppocr_mobile_v2.0_xx | Mobile & server |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar)|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_train.tar) |
| Chinese and English general PP-OCR model (143.4M) | ch_ppocr_server_v2.0_xx | Server |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_train.tar) |
diff --git a/ppocr/utils/utility.py b/ppocr/utils/utility.py
index 48a84cfdf9..4a25ff8b2f 100755
--- a/ppocr/utils/utility.py
+++ b/ppocr/utils/utility.py
@@ -60,7 +60,7 @@ def get_image_file_list(img_file):
raise Exception("not found any img file in {}".format(img_file))
img_end = {'jpg', 'bmp', 'png', 'jpeg', 'rgb', 'tif', 'tiff', 'gif'}
- if os.path.isfile(img_file) and _check_image_file(file_path):
+ if os.path.isfile(img_file) and _check_image_file(img_file):
imgs_lists.append(img_file)
elif os.path.isdir(img_file):
for single_file in os.listdir(img_file):
diff --git a/tools/eval.py b/tools/eval.py
index 7fd4fa7ada..cab2833439 100755
--- a/tools/eval.py
+++ b/tools/eval.py
@@ -20,8 +20,8 @@
import sys
__dir__ = os.path.dirname(os.path.abspath(__file__))
-sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, __dir__)
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
from ppocr.data import build_dataloader
from ppocr.modeling.architectures import build_model
| att | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/6100 | 2022-04-29T09:15:04Z | 2022-04-29T09:50:44Z | 2022-04-29T09:50:44Z | 2022-04-29T09:50:44Z | 3,619 | PaddlePaddle/PaddleOCR | 42,748 |
Correct venv3 detection on windows | diff --git a/tools/_venv_common.py b/tools/_venv_common.py
index cce88f8261e..b180518f9fd 100755
--- a/tools/_venv_common.py
+++ b/tools/_venv_common.py
@@ -55,7 +55,7 @@ def main(venv_name, venv_args, args):
print('Please run the following command to activate developer environment:')
print('source {0}/bin/activate'.format(venv_name))
print('-------------------------------------------------------------------')
- elif os.path.isdir(os.path.join(venv_args, 'Scripts')):
+ elif os.path.isdir(os.path.join(venv_name, 'Scripts')):
# Windows specific
print('---------------------------------------------------------------------------')
print('Please run one of the following commands to activate developer environment:')
| A little typo in `_venv_common.py` blocked the script from finishing correctly once the virtual environment had been set up on Windows.
This PR fixes that. | https://api.github.com/repos/certbot/certbot/pulls/6490 | 2018-11-09T10:14:30Z | 2018-11-10T00:17:18Z | 2018-11-10T00:17:18Z | 2018-11-14T22:42:34Z | 175 | certbot/certbot | 460 |
[daftsex] fix: update domain and embed player url | diff --git a/yt_dlp/extractor/daftsex.py b/yt_dlp/extractor/daftsex.py
index 551d5e3abeb..92510c767c5 100644
--- a/yt_dlp/extractor/daftsex.py
+++ b/yt_dlp/extractor/daftsex.py
@@ -1,6 +1,7 @@
from .common import InfoExtractor
from ..compat import compat_b64decode
from ..utils import (
+ ExtractorError,
int_or_none,
js_to_json,
parse_count,
@@ -12,21 +13,24 @@
class DaftsexIE(InfoExtractor):
- _VALID_URL = r'https?://(?:www\.)?daftsex\.com/watch/(?P<id>-?\d+_\d+)'
+ _VALID_URL = r'https?://(?:www\.)?daft\.sex/watch/(?P<id>-?\d+_\d+)'
_TESTS = [{
- 'url': 'https://daftsex.com/watch/-35370899_456246186',
- 'md5': 'd95135e6cea2d905bea20dbe82cda64a',
+ 'url': 'https://daft.sex/watch/-35370899_456246186',
+ 'md5': '64c04ef7b4c7b04b308f3b0c78efe7cd',
'info_dict': {
'id': '-35370899_456246186',
'ext': 'mp4',
'title': 'just relaxing',
- 'description': 'just relaxing - Watch video Watch video in high quality',
+ 'description': 'just relaxing – Watch video Watch video in high quality',
'upload_date': '20201113',
'timestamp': 1605261911,
- 'thumbnail': r're:https://[^/]+/impf/-43BuMDIawmBGr3GLcZ93CYwWf2PBv_tVWoS1A/dnu41DnARU4\.jpg\?size=800x450&quality=96&keep_aspect_ratio=1&background=000000&sign=6af2c26ff4a45e55334189301c867384&type=video_thumb',
+ 'thumbnail': r're:^https?://.*\.jpg$',
+ 'age_limit': 18,
+ 'duration': 15.0,
+ 'view_count': int
},
}, {
- 'url': 'https://daftsex.com/watch/-156601359_456242791',
+ 'url': 'https://daft.sex/watch/-156601359_456242791',
'info_dict': {
'id': '-156601359_456242791',
'ext': 'mp4',
@@ -36,6 +40,7 @@ class DaftsexIE(InfoExtractor):
'timestamp': 1600250735,
'thumbnail': 'https://psv153-1.crazycloud.ru/videos/-156601359/456242791/thumb.jpg?extra=i3D32KaBbBFf9TqDRMAVmQ',
},
+ 'skip': 'deleted / private'
}]
def _real_extract(self, url):
@@ -60,7 +65,7 @@ def _real_extract(self, url):
webpage, 'player color', fatal=False) or ''
embed_page = self._download_webpage(
- 'https://daxab.com/player/%s?color=%s' % (player_hash, player_color),
+ 'https://dxb.to/player/%s?color=%s' % (player_hash, player_color),
video_id, headers={'Referer': url})
video_params = self._parse_json(
self._search_regex(
@@ -94,15 +99,19 @@ def _real_extract(self, url):
'age_limit': 18,
}
- item = self._download_json(
+ items = self._download_json(
f'{server_domain}/method/video.get/{video_id}', video_id,
headers={'Referer': url}, query={
'token': video_params['video']['access_token'],
'videos': video_id,
'ckey': video_params['c_key'],
'credentials': video_params['video']['credentials'],
- })['response']['items'][0]
+ })['response']['items']
+
+ if not items:
+ raise ExtractorError('Video is not available', video_id=video_id, expected=True)
+ item = items[0]
formats = []
for f_id, f_url in item.get('files', {}).items():
if f_id == 'external':
| ### Description of your *pull request* and other information
This PR implements the fix provided by @NiklasFarber1024 for #5881: it updates the `_VALID_URL` regex with the new domain and updates the embed player URL from which the video parameters are parsed.
Fixes #5881
<details open><summary>Template</summary> <!-- OPEN is intentional -->
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [x] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [x] Fix or improvement to an extractor (Make sure to add/update tests)
- [ ] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy))
- [ ] Core bug fix/improvement
- [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))
</details>
| https://api.github.com/repos/yt-dlp/yt-dlp/pulls/5966 | 2023-01-05T17:20:33Z | 2023-05-29T03:31:26Z | 2023-05-29T03:31:26Z | 2023-05-29T06:14:32Z | 1,041 | yt-dlp/yt-dlp | 7,494 |
Bybit: adjust stop handling for fetchMyTrades, fetchOrders and fetchOpenOrders | diff --git a/ts/src/bybit.ts b/ts/src/bybit.ts
index bd32a968591a..8a899559c7b4 100644
--- a/ts/src/bybit.ts
+++ b/ts/src/bybit.ts
@@ -4462,11 +4462,7 @@ export default class bybit extends Exchange {
const isStop = this.safeValueN (params, [ 'trigger', 'stop' ], false);
params = this.omit (params, [ 'trigger', 'stop' ]);
if (isStop) {
- if (type === 'spot') {
- request['orderFilter'] = 'tpslOrder';
- } else {
- request['orderFilter'] = 'StopOrder';
- }
+ request['orderFilter'] = 'StopOrder';
}
if (limit !== undefined) {
request['limit'] = limit;
@@ -4629,7 +4625,7 @@ export default class bybit extends Exchange {
* @param {int} [since] the earliest time in ms to fetch open orders for
* @param {int} [limit] the maximum number of open orders structures to retrieve
* @param {object} [params] extra parameters specific to the exchange API endpoint
- * @param {boolean} [params.stop] true if stop order
+ * @param {boolean} [params.stop] set to true for fetching open stop orders
* @param {string} [params.type] market type, ['swap', 'option', 'spot']
* @param {string} [params.subType] market subType, ['linear', 'inverse']
* @param {string} [params.baseCoin] Base coin. Supports linear, inverse & option
@@ -4666,11 +4662,7 @@ export default class bybit extends Exchange {
const isStop = this.safeValue2 (params, 'stop', 'trigger', false);
params = this.omit (params, [ 'stop', 'trigger' ]);
if (isStop) {
- if (type === 'spot') {
- request['orderFilter'] = 'tpslOrder';
- } else {
- request['orderFilter'] = 'StopOrder';
- }
+ request['orderFilter'] = 'StopOrder';
}
if (limit !== undefined) {
request['limit'] = limit;
@@ -4808,7 +4800,6 @@ export default class bybit extends Exchange {
* @param {int} [since] the earliest time in ms to fetch trades for
* @param {int} [limit] the maximum number of trades structures to retrieve
* @param {object} [params] extra parameters specific to the exchange API endpoint
- * @param {boolean} [params.stop] true if stop order
* @param {string} [params.type] market type, ['swap', 'option', 'spot']
* @param {string} [params.subType] market subType, ['linear', 'inverse']
* @param {boolean} [params.paginate] default false, when true will automatically paginate by calling this endpoint multiple times. See in the docs all the [availble parameters](https://github.com/ccxt/ccxt/wiki/Manual#pagination-params)
@@ -4822,7 +4813,7 @@ export default class bybit extends Exchange {
}
const [ enableUnifiedMargin, enableUnifiedAccount ] = await this.isUnifiedEnabled ();
const isUnifiedAccount = (enableUnifiedMargin || enableUnifiedAccount);
- const request = {};
+ let request = {};
let market = undefined;
let isUsdcSettled = false;
if (symbol !== undefined) {
@@ -4836,27 +4827,13 @@ export default class bybit extends Exchange {
return await this.fetchMyUsdcTrades (symbol, since, limit, params);
}
request['category'] = type;
- const isStop = this.safeValue2 (params, 'stop', 'trigger', false);
- params = this.omit (params, [ 'stop', 'type', 'trigger' ]);
- if (isStop) {
- if (type === 'spot') {
- request['orderFilter'] = 'tpslOrder';
- } else {
- request['orderFilter'] = 'StopOrder';
- }
- }
if (limit !== undefined) {
request['limit'] = limit;
}
if (since !== undefined) {
request['startTime'] = since;
}
- const until = this.safeInteger2 (params, 'until', 'till'); // unified in milliseconds
- const endTime = this.safeInteger (params, 'endTime', until); // exchange-specific in milliseconds
- params = this.omit (params, [ 'endTime', 'till', 'until' ]);
- if (endTime !== undefined) {
- request['endTime'] = endTime;
- }
+ [ request, params ] = this.handleUntilOption ('endTime', request, params);
const response = await this.privateGetV5ExecutionList (this.extend (request, params));
//
// {
diff --git a/ts/src/test/static/request/bybit.json b/ts/src/test/static/request/bybit.json
index 9b2029b23e99..0d99684ffbb5 100644
--- a/ts/src/test/static/request/bybit.json
+++ b/ts/src/test/static/request/bybit.json
@@ -407,6 +407,19 @@
"type": "swap"
}
]
+ },
+ {
+ "description": "Spot fetch open trigger orders",
+ "method": "fetchOpenOrders",
+ "url": "https://api-testnet.bybit.com/v5/order/realtime?symbol=BTCUSDT&category=spot&orderFilter=StopOrder",
+ "input": [
+ "BTC/USDT",
+ null,
+ null,
+ {
+ "stop": true
+ }
+ ]
}
],
"fetchClosedOrders": [
@@ -445,6 +458,21 @@
]
}
],
+ "fetchOrders": [
+ {
+ "description": "Spot fetch trigger orders",
+ "method": "fetchOrders",
+ "url": "https://api-testnet.bybit.com/v5/order/history?symbol=BTCUSDT&category=spot&orderFilter=StopOrder",
+ "input": [
+ "BTC/USDT",
+ null,
+ null,
+ {
+ "stop": true
+ }
+ ]
+ }
+ ],
"fetchMyTrades": [
{
"description": "Spot fetchMyTrades",
| Removed `orderFilter` from `fetchMyTrades` because it's not actually supported by the endpoint we're using. Edited the spot trigger handling for `fetchOrders` and `fetchOpenOrders` so that `orderFilter` uses `StopOrder`.
Added static request tests for `fetchOrders` and `fetchOpenOrders` spot trigger orders.
```
bybit fetchOrders BTC/USDT undefined undefined '{"stop":true}'
bybit.fetchOrders (BTC/USDT, , , [object Object])
2024-01-18T02:27:49.669Z iteration 0 passed in 786 ms
id | clientOrderId | timestamp | datetime | lastTradeTimestamp | lastUpdateTimestamp | symbol | type | timeInForce | postOnly | reduceOnly | side | price | stopPrice | triggerPrice | amount | cost | filled | remaining | status | fee | trades | fees
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1599897800789984000 | 1599897800789984001 | 1705458680187 | 2024-01-17T02:31:20.187Z | 1705458711542 | 1705458711542 | BTC/USDT | limit | GTC | false | false | buy | 35000 | 35500 | 35500 | 0.001 | 0 | 0 | 0.001 | canceled | {"cost":"0","currency":"USDT"} | [] | [{"cost":0,"currency":"USDT"}]
1599966007471112960 | 1599966007471112961 | 1705466811056 | 2024-01-17T04:46:51.056Z | 1705466854241 | 1705466854241 | BTC/USDT | limit | GTC | false | false | buy | 35000 | 35500 | 35500 | 0.001 | 0 | 0 | 0.001 | canceled | {"cost":"0","currency":"USDT"} | [] | [{"cost":0,"currency":"USDT"}]
1599966662050972416 | 1599966662050972417 | 1705466889088 | 2024-01-17T04:48:09.088Z | 1705466935754 | 1705466935754 | BTC/USDT | limit | GTC | false | false | buy | 35000 | 35500 | 35500 | 0.001 | 0 | 0 | 0.001 | canceled | {"cost":"0","currency":"USDT"} | [] | [{"cost":0,"currency":"USDT"}]
3 objects
```
```
bybit fetchOpenOrders BTC/USDT undefined undefined '{"stop":true}'
bybit.fetchOpenOrders (BTC/USDT, , , [object Object])
2024-01-18T02:35:59.978Z iteration 0 passed in 1591 ms
id | clientOrderId | timestamp | datetime | lastTradeTimestamp | lastUpdateTimestamp | symbol | type | timeInForce | postOnly | reduceOnly | side | price | stopPrice | triggerPrice | amount | cost | filled | remaining | status | fee | trades | fees
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1600623510408135424 | 1600623510408135425 | 1705545191515 | 2024-01-18T02:33:11.515Z | 1705545191515 | 1705545191515 | BTC/USDT | limit | GTC | false | false | buy | 35000 | 35700 | 35700 | 0.001 | 0 | 0 | 0.001 | open | {"cost":"0","currency":"USDT"} | [] | [{"cost":0,"currency":"USDT"}]
1 objects
``` | https://api.github.com/repos/ccxt/ccxt/pulls/20856 | 2024-01-18T02:45:09Z | 2024-01-18T11:01:38Z | 2024-01-18T11:01:38Z | 2024-01-18T11:01:38Z | 1,462 | ccxt/ccxt | 13,080 |
Format and Add a link to leaderboard | diff --git a/fastchat/llm_judge/README.md b/fastchat/llm_judge/README.md
index 5565ea941e..fef4b769dd 100644
--- a/fastchat/llm_judge/README.md
+++ b/fastchat/llm_judge/README.md
@@ -1,5 +1,5 @@
# LLM Judge
-| [Paper](https://arxiv.org/abs/2306.05685) | [Demo](https://huggingface.co/spaces/lmsys/mt-bench) |
+| [Paper](https://arxiv.org/abs/2306.05685) | [Demo](https://huggingface.co/spaces/lmsys/mt-bench) | [Leaderboard](https://chat.lmsys.org/?leaderboard) |
In this package, you can use MT-bench questions and prompts to evaluate your models with LLM-as-a-judge.
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py
index 857cdf2c43..4908fafd4f 100644
--- a/fastchat/model/model_adapter.py
+++ b/fastchat/model/model_adapter.py
@@ -147,8 +147,7 @@ def load_model(
for i in range(num_gpus)
}
else:
- kwargs["max_memory"] = {
- i: max_gpu_memory for i in range(num_gpus)}
+ kwargs["max_memory"] = {i: max_gpu_memory for i in range(num_gpus)}
elif device == "mps":
kwargs = {"torch_dtype": torch.float16}
# Avoid bugs in mps backend by not using in-place operations.
@@ -557,8 +556,7 @@ def load_model(self, model_path: str, from_pretrained_kwargs: dict):
model_path, low_cpu_mem_usage=True, **from_pretrained_kwargs
)
revision = from_pretrained_kwargs.get("revision", "main")
- tokenizer = LlamaTokenizer.from_pretrained(
- model_path, revision=revision)
+ tokenizer = LlamaTokenizer.from_pretrained(model_path, revision=revision)
return model, tokenizer
def get_default_conv_template(self, model_path: str) -> Conversation:
@@ -837,9 +835,7 @@ def match(self, model_path: str):
def load_model(self, model_path: str, from_pretrained_kwargs: dict):
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
- model_path,
- config=config,
- trust_remote_code=True
+ model_path, config=config, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
model_path,
diff --git a/fastchat/model/model_registry.py b/fastchat/model/model_registry.py
index b49953d70b..d8c5038e15 100644
--- a/fastchat/model/model_registry.py
+++ b/fastchat/model/model_registry.py
@@ -50,8 +50,14 @@ def get_model_info(name: str) -> ModelInfo:
"PaLM 2 for Chat (chat-bison@001) by Google",
)
register_model_info(
- ["vicuna-13b", "vicuna-13b-v1.3", "vicuna-7b", "vicuna-7b-v1.3",
- "vicuna-33b", "vicuna-33b-v1.3"],
+ [
+ "vicuna-13b",
+ "vicuna-13b-v1.3",
+ "vicuna-7b",
+ "vicuna-7b-v1.3",
+ "vicuna-33b",
+ "vicuna-33b-v1.3",
+ ],
"Vicuna",
"https://lmsys.org/blog/2023-03-30-vicuna/",
"a chat assistant fine-tuned from LLaMA on user-shared conversations by LMSYS",
@@ -129,10 +135,10 @@ def get_model_info(name: str) -> ModelInfo:
"Stability AI language models",
)
register_model_info(
- ["mpt-7b-chat", "mpt-30b-chat"],
+ ["mpt-7b-chat"],
"MPT-Chat",
- "https://www.mosaicml.com/blog/mpt-30b",
- "a chatbot fine-tuned from MPT by MosaicML",
+ "https://www.mosaicml.com/blog/mpt-7b",
+ "a chatbot fine-tuned from MPT-7B by MosaicML",
)
register_model_info(
["mpt-30b-chat"],
diff --git a/fastchat/serve/monitor/monitor.py b/fastchat/serve/monitor/monitor.py
index cb268ab7c8..de7c101f0b 100644
--- a/fastchat/serve/monitor/monitor.py
+++ b/fastchat/serve/monitor/monitor.py
@@ -206,7 +206,9 @@ def build_leaderboard_tab(elo_results_file, leaderboard_table_file):
value=values,
elem_id="leaderboard_dataframe",
)
- gr.Markdown("If you want to see more models, please help us [add them](https://github.com/lm-sys/FastChat/blob/main/docs/arena.md#how-to-add-a-new-model).")
+ gr.Markdown(
+ "If you want to see more models, please help us [add them](https://github.com/lm-sys/FastChat/blob/main/docs/arena.md#how-to-add-a-new-model)."
+ )
else:
pass
| https://api.github.com/repos/lm-sys/FastChat/pulls/1764 | 2023-06-23T00:03:34Z | 2023-06-23T00:06:00Z | 2023-06-23T00:06:00Z | 2023-06-23T20:43:54Z | 1,242 | lm-sys/FastChat | 41,636 |
|
Added OpenShift as a free hosting provider alternative | diff --git a/docs/deploying/index.rst b/docs/deploying/index.rst
index 60b239f8cf..272a9e27a0 100644
--- a/docs/deploying/index.rst
+++ b/docs/deploying/index.rst
@@ -18,6 +18,7 @@ Hosted options
--------------
- `Deploying Flask on Heroku <https://devcenter.heroku.com/articles/getting-started-with-python>`_
+- `Deploying Flask on OpenShift <https://developers.openshift.com/en/python-flask.html>`_
- `Deploying WSGI on dotCloud <http://docs.dotcloud.com/services/python/>`_
with `Flask-specific notes <http://flask.pocoo.org/snippets/48/>`_
- `Deploying Flask on Webfaction <http://flask.pocoo.org/snippets/65/>`_
| https://api.github.com/repos/pallets/flask/pulls/1414 | 2015-03-31T23:13:37Z | 2015-04-01T14:13:31Z | 2015-04-01T14:13:31Z | 2020-11-14T05:52:41Z | 193 | pallets/flask | 19,954 |
|
[3.8] bpo-41735: Fix thread locks in zlib module may go wrong in rare case | diff --git a/Misc/NEWS.d/next/Library/2020-09-07-21-51-17.bpo-41735.NKqGKy.rst b/Misc/NEWS.d/next/Library/2020-09-07-21-51-17.bpo-41735.NKqGKy.rst
new file mode 100644
index 00000000000000..9e36435a364eaf
--- /dev/null
+++ b/Misc/NEWS.d/next/Library/2020-09-07-21-51-17.bpo-41735.NKqGKy.rst
@@ -0,0 +1 @@
+Fix thread locks in zlib module may go wrong in rare case. Patch by Ma Lin.
diff --git a/Modules/zlibmodule.c b/Modules/zlibmodule.c
index a3d9ed6646dec8..d6b6b01d89a174 100644
--- a/Modules/zlibmodule.c
+++ b/Modules/zlibmodule.c
@@ -653,11 +653,11 @@ zlib_Compress_compress_impl(compobject *self, Py_buffer *data)
Py_ssize_t ibuflen, obuflen = DEF_BUF_SIZE;
int err;
+ ENTER_ZLIB(self);
+
self->zst.next_in = data->buf;
ibuflen = data->len;
- ENTER_ZLIB(self);
-
do {
arrange_input_buffer(&self->zst, &ibuflen);
@@ -771,6 +771,8 @@ zlib_Decompress_decompress_impl(compobject *self, Py_buffer *data,
else
hard_limit = max_length;
+ ENTER_ZLIB(self);
+
self->zst.next_in = data->buf;
ibuflen = data->len;
@@ -778,8 +780,6 @@ zlib_Decompress_decompress_impl(compobject *self, Py_buffer *data,
if (max_length && obuflen > max_length)
obuflen = max_length;
- ENTER_ZLIB(self);
-
do {
arrange_input_buffer(&self->zst, &ibuflen);
|
<!-- issue-number: [bpo-41735](https://bugs.python.org/issue41735) -->
https://bugs.python.org/issue41735
<!-- /issue-number -->
| https://api.github.com/repos/python/cpython/pulls/22132 | 2020-09-07T13:53:25Z | 2021-04-26T19:48:20Z | 2021-04-26T19:48:20Z | 2021-04-27T00:45:11Z | 472 | python/cpython | 3,990 |
✏️ Fix typos in docstrings | diff --git a/fastapi/security/oauth2.py b/fastapi/security/oauth2.py
index be3e18cd80cf5..0606291b8a70d 100644
--- a/fastapi/security/oauth2.py
+++ b/fastapi/security/oauth2.py
@@ -441,7 +441,7 @@ def __init__(
bool,
Doc(
"""
- By default, if no HTTP Auhtorization header is provided, required for
+ By default, if no HTTP Authorization header is provided, required for
OAuth2 authentication, it will automatically cancel the request and
send the client an error.
@@ -543,7 +543,7 @@ def __init__(
bool,
Doc(
"""
- By default, if no HTTP Auhtorization header is provided, required for
+ By default, if no HTTP Authorization header is provided, required for
OAuth2 authentication, it will automatically cancel the request and
send the client an error.
diff --git a/fastapi/security/open_id_connect_url.py b/fastapi/security/open_id_connect_url.py
index c612b475de8f4..1d255877dbd02 100644
--- a/fastapi/security/open_id_connect_url.py
+++ b/fastapi/security/open_id_connect_url.py
@@ -49,7 +49,7 @@ def __init__(
bool,
Doc(
"""
- By default, if no HTTP Auhtorization header is provided, required for
+ By default, if no HTTP Authorization header is provided, required for
OpenID Connect authentication, it will automatically cancel the request
and send the client an error.
| Simple typo fix: when looking at the [security reference in the docs](https://fastapi.tiangolo.com/reference/security/?h=auhtorization) I found and fixed the typo from `Auhtorization` to `Authorization`. | https://api.github.com/repos/tiangolo/fastapi/pulls/11295 | 2024-03-14T12:50:06Z | 2024-03-14T16:38:24Z | 2024-03-14T16:38:24Z | 2024-03-14T16:39:12Z | 359 | tiangolo/fastapi | 22,852 |
ja: Swap VARCHAR and CHAR translation error | diff --git a/README-ja.md b/README-ja.md
index 86bf3443ad..f9a0024244 100644
--- a/README-ja.md
+++ b/README-ja.md
@@ -903,7 +903,7 @@ SQLチューニングは広範な知識を必要とする分野で多くの [本
##### スキーマを絞る
* より早い接続を得るために、連続したブロックの中のディスクにMySQLをダンプする。
-* 長さの決まったフィールドに対しては `CHAR` よりも `VARCHAR` を使うようにしましょう。
+* 長さの決まったフィールドに対しては `VARCHAR` よりも `CHAR` を使うようにしましょう。
* `CHAR` の方が効率的に速くランダムにデータにアクセスできます。 一方、 `VARCHAR` では次のデータに移る前にデータの末尾を検知しなければならないために速度が犠牲になります。
* ブログ投稿などの大きなテキスト `TEXT` を使いましょう。 `TEXT` ではブーリアン型の検索も可能です。 `TEXT` フィールドを使うことは、テキストブロックを配置するのに用いたポインターをディスク上に保存することになります。
* 2の32乗や40億を超えてくる数に関しては `INT` を使いましょう
| This PR fixes incorrect translation.
This line
```
長さの決まったフィールドに対しては CHAR よりも VARCHAR を使うようにしましょう。
```
should be
```
長さの決まったフィールドに対しては VARCHAR よりも CHAR を使うようにしましょう。
```
Original:
```
Use CHAR instead of VARCHAR for fixed-length fields.
``` | https://api.github.com/repos/donnemartin/system-design-primer/pulls/169 | 2018-07-14T09:33:31Z | 2018-07-15T00:05:15Z | 2018-07-15T00:05:15Z | 2018-07-15T00:05:29Z | 335 | donnemartin/system-design-primer | 36,696 |
Fix tokenizer for vllm worker | diff --git a/fastchat/serve/vllm_worker.py b/fastchat/serve/vllm_worker.py
index 9da06bfdc2..0af680bb5f 100644
--- a/fastchat/serve/vllm_worker.py
+++ b/fastchat/serve/vllm_worker.py
@@ -55,6 +55,10 @@ def __init__(
f"Loading the model {self.model_names} on worker {worker_id}, worker type: vLLM worker..."
)
self.tokenizer = llm_engine.engine.tokenizer
+ # This is to support vllm >= 0.2.7 where TokenizerGroup was introduced
+ # and llm_engine.engine.tokenizer was no longer a raw tokenizer
+ if hasattr(self.tokenizer, "tokenizer"):
+ self.tokenizer = llm_engine.engine.tokenizer.tokenizer
self.context_len = get_context_length(llm_engine.engine.model_config.hf_config)
if not no_register:
|
## Why are these changes needed?
A recent change to vLLM altered the tokenizer structure: an additional `TokenizerGroup` now wraps the `tokenizer`, causing an error when `tokenizer.eos_token` is accessed in FastChat code. A sketch of the failure mode and the fix follows.
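A minimal sketch of the defensive unwrap (assuming vLLM >= 0.2.7; attribute names follow the diff above and are not otherwise verified):

```python
tokenizer = llm_engine.engine.tokenizer  # now a TokenizerGroup, not a raw tokenizer

# TokenizerGroup does not expose HF tokenizer attributes such as eos_token,
# so unwrap it when the inner tokenizer is present.
if hasattr(tokenizer, "tokenizer"):
    tokenizer = tokenizer.tokenizer

print(tokenizer.eos_token)  # works on both old and new vLLM versions
```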
## Related issue number (if applicable)
Fixes #2978
## Checks
- [ ] I've run `format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed.
- [ ] I've made sure the relevant tests are passing (if applicable).
| https://api.github.com/repos/lm-sys/FastChat/pulls/2984 | 2024-01-31T00:05:35Z | 2024-01-31T20:14:27Z | 2024-01-31T20:14:27Z | 2024-01-31T20:14:30Z | 213 | lm-sys/FastChat | 41,597 |
Fixed division by zero bug in alpha compositing | diff --git a/camera/camera.py b/camera/camera.py
index 1217f86f1a..2943162cb9 100644
--- a/camera/camera.py
+++ b/camera/camera.py
@@ -430,10 +430,15 @@ def overlay_rgba_array(self, arr):
]
out_a = src_a + dst_a*(1.0-src_a)
+
+ # When the output alpha is 0 for full transparency,
+ # we have a choice over what RGB value to use in our
+ # output representation. We choose 0.0 here.
out_rgb = fdiv(
src_rgb*src_a[..., None] + \
dst_rgb*dst_a[..., None]*(1.0-src_a[..., None]),
- out_a[..., None]
+ out_a[..., None],
+ zero_over_zero_value = 0.0
)
self.pixel_array[..., :3] = out_rgb*self.rgb_max_val
diff --git a/helpers.py b/helpers.py
index b9c7f3878f..878eb4e929 100644
--- a/helpers.py
+++ b/helpers.py
@@ -694,8 +694,19 @@ def __init__(self, dict):
self.__dict__ = dict
# Just to have a less heavyweight name for this extremely common operation
-def fdiv(a, b):
- return np.true_divide(a,b)
+#
+# We may wish to have more fine-grained control over division by zero behavior
+# in future (separate specifiable default values for 0/0 and x/0 with x != 0),
+# but for now, we just allow the option to handle 0/0.
+def fdiv(a, b, zero_over_zero_value = None):
+ if zero_over_zero_value != None:
+ out = np.full_like(a, zero_over_zero_value)
+ where = np.logical_or (a != 0, b != 0)
+ else:
+ out = None
+ where = True
+
+ return np.true_divide(a, b, out = out, where = where)
def add_extension_if_not_present(file_name, extension):
# This could conceivably be smarter about handling existing differing extensions
| https://api.github.com/repos/3b1b/manim/pulls/133 | 2018-02-20T20:35:07Z | 2018-02-20T20:37:49Z | 2018-02-20T20:37:49Z | 2018-02-20T20:37:49Z | 492 | 3b1b/manim | 18,242 |
|
regularization and constraints for Convolution1D and Convolution2D | diff --git a/keras/layers/convolutional.py b/keras/layers/convolutional.py
index a250ed461b1..6d4db99ec74 100644
--- a/keras/layers/convolutional.py
+++ b/keras/layers/convolutional.py
@@ -13,7 +13,8 @@
class Convolution1D(Layer):
def __init__(self, nb_filter, stack_size, filter_length,
init='uniform', activation='linear', weights=None,
- image_shape=None, border_mode='valid', subsample_length=1):
+ image_shape=None, border_mode='valid', subsample_length=1,
+ W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None):
nb_row = 1
nb_col = filter_length
@@ -35,6 +36,9 @@ def __init__(self, nb_filter, stack_size, filter_length,
self.params = [self.W, self.b]
+ self.regularizers = [W_regularizer, b_regularizer]
+ self.constraints = [W_constraint, b_constraint]
+
if weights is not None:
self.set_weights(weights)
@@ -82,7 +86,8 @@ def get_config(self):
class Convolution2D(Layer):
def __init__(self, nb_filter, stack_size, nb_row, nb_col,
init='glorot_uniform', activation='linear', weights=None,
- image_shape=None, border_mode='valid', subsample=(1,1)):
+ image_shape=None, border_mode='valid', subsample=(1,1),
+ W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None):
super(Convolution2D,self).__init__()
self.init = initializations.get(init)
@@ -102,6 +107,9 @@ def __init__(self, nb_filter, stack_size, nb_row, nb_col,
self.params = [self.W, self.b]
+ self.regularizers = [W_regularizer, b_regularizer]
+ self.constraints = [W_constraint, b_constraint]
+
if weights is not None:
self.set_weights(weights)
| fix for #170, including constraints. Nothing fancy, just copied the code from the dense layer, but I suppose it should work the same.
| https://api.github.com/repos/keras-team/keras/pulls/218 | 2015-06-12T12:03:25Z | 2015-06-12T18:07:22Z | 2015-06-12T18:07:22Z | 2015-06-12T18:07:22Z | 479 | keras-team/keras | 47,047 |
[2.7] advance copyright years to 2018 (GH-5094). | diff --git a/Doc/copyright.rst b/Doc/copyright.rst
index 22d7705846ea93..540ff5ef0593af 100644
--- a/Doc/copyright.rst
+++ b/Doc/copyright.rst
@@ -4,7 +4,7 @@ Copyright
Python and this documentation is:
-Copyright © 2001-2016 Python Software Foundation. All rights reserved.
+Copyright © 2001-2018 Python Software Foundation. All rights reserved.
Copyright © 2000 BeOpen.com. All rights reserved.
diff --git a/Doc/license.rst b/Doc/license.rst
index 942ad20e47bfad..f33495ab1ef943 100644
--- a/Doc/license.rst
+++ b/Doc/license.rst
@@ -87,7 +87,7 @@ PSF LICENSE AGREEMENT FOR PYTHON |release|
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python |release| alone or in any derivative
version, provided, however, that PSF's License Agreement and PSF's notice of
- copyright, i.e., "Copyright © 2001-2017 Python Software Foundation; All Rights
+ copyright, i.e., "Copyright © 2001-2018 Python Software Foundation; All Rights
Reserved" are retained in Python |release| alone or in any derivative version
prepared by Licensee.
diff --git a/LICENSE b/LICENSE
index 529349e4b38c05..1afbedba92b33c 100644
--- a/LICENSE
+++ b/LICENSE
@@ -73,9 +73,9 @@ analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
-2011, 2012, 2013, 2014, 2015, 2016, 2017 Python Software Foundation; All Rights
-Reserved" are retained in Python alone or in any derivative version prepared by
-Licensee.
+2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018 Python Software Foundation; All
+Rights Reserved" are retained in Python alone or in any derivative version
+prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
diff --git a/Mac/PythonLauncher/Info.plist.in b/Mac/PythonLauncher/Info.plist.in
index 1a8e2b44553d5f..b84fffeec64a7d 100644
--- a/Mac/PythonLauncher/Info.plist.in
+++ b/Mac/PythonLauncher/Info.plist.in
@@ -40,7 +40,7 @@
<key>CFBundleExecutable</key>
<string>PythonLauncher</string>
<key>CFBundleGetInfoString</key>
- <string>%VERSION%, © 2001-2017 Python Software Foundation</string>
+ <string>%VERSION%, © 2001-2018 Python Software Foundation</string>
<key>CFBundleIconFile</key>
<string>PythonLauncher.icns</string>
<key>CFBundleIdentifier</key>
diff --git a/Mac/Resources/app/Info.plist.in b/Mac/Resources/app/Info.plist.in
index a23166e6d32d87..abe9ae23e341a7 100644
--- a/Mac/Resources/app/Info.plist.in
+++ b/Mac/Resources/app/Info.plist.in
@@ -37,7 +37,7 @@
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleLongVersionString</key>
- <string>%version%, (c) 2001-2017 Python Software Foundation.</string>
+ <string>%version%, (c) 2001-2018 Python Software Foundation.</string>
<key>CFBundleName</key>
<string>Python</string>
<key>CFBundlePackageType</key>
diff --git a/Mac/Resources/framework/Info.plist.in b/Mac/Resources/framework/Info.plist.in
index 7a64619e295f65..c1ea9f6889209b 100644
--- a/Mac/Resources/framework/Info.plist.in
+++ b/Mac/Resources/framework/Info.plist.in
@@ -17,9 +17,9 @@
<key>CFBundlePackageType</key>
<string>FMWK</string>
<key>CFBundleShortVersionString</key>
- <string>%VERSION%, (c) 2001-2017 Python Software Foundation.</string>
+ <string>%VERSION%, (c) 2001-2018 Python Software Foundation.</string>
<key>CFBundleLongVersionString</key>
- <string>%VERSION%, (c) 2001-2017 Python Software Foundation.</string>
+ <string>%VERSION%, (c) 2001-2018 Python Software Foundation.</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
diff --git a/Python/getcopyright.c b/Python/getcopyright.c
index c37f8fa81cbe2a..1b69012fbc1795 100644
--- a/Python/getcopyright.c
+++ b/Python/getcopyright.c
@@ -4,7 +4,7 @@
static char cprt[] =
"\
-Copyright (c) 2001-2017 Python Software Foundation.\n\
+Copyright (c) 2001-2018 Python Software Foundation.\n\
All Rights Reserved.\n\
\n\
Copyright (c) 2000 BeOpen.com.\n\
diff --git a/README b/README
index 00a6b3946550aa..387b437cc29f00 100644
--- a/README
+++ b/README
@@ -2,7 +2,7 @@ This is Python version 2.7.14
=============================
Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011,
-2012, 2013, 2014, 2015, 2016, 2017 Python Software Foundation. All rights
+2012, 2013, 2014, 2015, 2016, 2017, 2018 Python Software Foundation. All rights
reserved.
Copyright (c) 2000 BeOpen.com.
| https://api.github.com/repos/python/cpython/pulls/5105 | 2018-01-05T06:38:25Z | 2018-01-05T07:02:11Z | 2018-01-05T07:02:11Z | 2018-01-05T07:02:24Z | 1,544 | python/cpython | 4,657 |
|
the file has no function [self.vector_slope] | diff --git a/ppocr/data/imaug/fce_targets.py b/ppocr/data/imaug/fce_targets.py
index 1818480867..4d1903c0a7 100644
--- a/ppocr/data/imaug/fce_targets.py
+++ b/ppocr/data/imaug/fce_targets.py
@@ -22,6 +22,9 @@
from numpy.linalg import norm
import sys
+def vector_slope(vec):
+ assert len(vec) == 2
+ return abs(vec[1] / (vec[0] + 1e-8))
class FCENetTargets:
"""Generate the ground truth targets of FCENet: Fourier Contour Embedding
@@ -233,9 +236,9 @@ def find_head_tail(self, points, orientation_thr):
head_inds = [head_start, head_end]
tail_inds = [tail_start, tail_end]
else:
- if self.vector_slope(points[1] - points[0]) + self.vector_slope(
- points[3] - points[2]) < self.vector_slope(points[
- 2] - points[1]) + self.vector_slope(points[0] - points[
+ if vector_slope(points[1] - points[0]) + vector_slope(
+ points[3] - points[2]) < vector_slope(points[
+ 2] - points[1]) + vector_slope(points[0] - points[
3]):
horizontal_edge_inds = [[0, 1], [2, 3]]
vertical_edge_inds = [[3, 0], [1, 2]]
| The `self.vector_slope` method does not exist; it appears to have been left out. | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/6261 | 2022-05-12T04:01:50Z | 2022-05-12T11:46:15Z | 2022-05-12T11:46:15Z | 2022-05-12T11:46:15Z | 349 | PaddlePaddle/PaddleOCR | 42,523
Define Vertex examples as a Sequence | diff --git a/llama_index/llms/vertex.py b/llama_index/llms/vertex.py
index fbadb297c8d8f..7ef8c2e33134d 100644
--- a/llama_index/llms/vertex.py
+++ b/llama_index/llms/vertex.py
@@ -33,7 +33,7 @@ class Vertex(LLM):
model: str = Field(description="The vertex model to use.")
temperature: float = Field(description="The temperature to use for sampling.")
max_tokens: int = Field(description="The maximum number of tokens to generate.")
- examples: Optional[ChatMessage] = Field(
+ examples: Optional[Sequence[ChatMessage]] = Field(
description="Example messages for the chat model."
)
max_retries: int = Field(default=10, description="The maximum number of retries.")
@@ -53,7 +53,7 @@ def __init__(
project: Optional[str] = None,
location: Optional[str] = None,
credential: Optional[str] = None,
- examples: Optional[ChatMessage] = None,
+ examples: Optional[Sequence[ChatMessage]] = None,
temperature: float = 0.1,
max_tokens: int = 512,
max_retries: int = 10,
| # Description
The previous `examples` definition implied an optional `ChatMessage`, but it should in fact be an optional `Sequence` of `ChatMessage`s. [llama_index/llms/vertex_utils.py#L164](https://github.com/run-llama/llama_index/blob/main/llama_index/llms/vertex_utils.py#L164) clearly shows it must be a Sequence, because `len()` is called on it and it is later iterated over with `for`. A usage sketch follows.
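A minimal sketch of the now-valid call (the import paths are assumptions and may differ between llama_index versions):

```python
from llama_index.llms.vertex import Vertex
from llama_index.llms.base import ChatMessage

# A list satisfies Optional[Sequence[ChatMessage]]; the old
# Optional[ChatMessage] annotation rejected exactly this common case.
examples = [
    ChatMessage(role="user", content="What is 2 + 2?"),
    ChatMessage(role="assistant", content="4"),
]
llm = Vertex(model="chat-bison", examples=examples)
```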
## Type of Change
Please delete options that are not relevant.
- [x] Bug fix (non-breaking change which fixes an issue)
# How Has This Been Tested?
I've used the Vertex integration in my project, and passing a list to `examples` was yielding errors before this change.
- [x] I stared at the code and made sure it makes sense
# Suggested Checklist:
- [x] I have performed a self-review of my own code
- [x] My changes generate no new warnings
- [x] New and existing unit tests pass locally with my changes
- [ ] I ran `make format; make lint` to appease the lint gods
| https://api.github.com/repos/run-llama/llama_index/pulls/8916 | 2023-11-14T16:30:49Z | 2023-11-14T22:33:08Z | 2023-11-14T22:33:08Z | 2023-11-14T22:33:08Z | 292 | run-llama/llama_index | 6,145 |
Future proof octal number: 0755 --> 0o755 | diff --git a/code/default/launcher/autorun.py b/code/default/launcher/autorun.py
index 2939d04b7d..f32a20ddda 100644
--- a/code/default/launcher/autorun.py
+++ b/code/default/launcher/autorun.py
@@ -126,7 +126,7 @@ def add(name, cmd):
xlog.info("create file:%s", plist_file_path)
if not os.path.isdir(launch_path):
- os.mkdir(launch_path, 0755)
+ os.mkdir(launch_path, 0o755)
with open(plist_file_path, "w") as f:
f.write(file_content)
| https://api.github.com/repos/XX-net/XX-Net/pulls/6180 | 2017-08-03T15:10:13Z | 2017-09-01T03:28:55Z | 2017-09-01T03:28:55Z | 2017-09-01T04:05:57Z | 152 | XX-net/XX-Net | 17,387 |
|
Add tests for deprecation helpers | diff --git a/tests/helpers/test_deprecation.py b/tests/helpers/test_deprecation.py
new file mode 100644
index 000000000000..8064c2ea5d67
--- /dev/null
+++ b/tests/helpers/test_deprecation.py
@@ -0,0 +1,85 @@
+"""Test deprecation helpers."""
+from homeassistant.helpers.deprecation import (
+ deprecated_substitute, get_deprecated)
+
+from unittest.mock import patch, MagicMock
+
+
+class MockBaseClass():
+ """Mock base class for deprecated testing."""
+
+ @property
+ @deprecated_substitute('old_property')
+ def new_property(self):
+ """Test property to fetch."""
+ raise NotImplementedError()
+
+
+class MockDeprecatedClass(MockBaseClass):
+ """Mock deprecated class object."""
+
+ @property
+ def old_property(self):
+ """Test property to fetch."""
+ return True
+
+
+class MockUpdatedClass(MockBaseClass):
+ """Mock updated class object."""
+
+ @property
+ def new_property(self):
+ """Test property to fetch."""
+ return True
+
+
+@patch('logging.getLogger')
+def test_deprecated_substitute_old_class(mock_get_logger):
+ """Test deprecated class object."""
+ mock_logger = MagicMock()
+ mock_get_logger.return_value = mock_logger
+
+ mock_object = MockDeprecatedClass()
+ assert mock_object.new_property is True
+ assert mock_object.new_property is True
+ assert mock_logger.warning.called
+ assert len(mock_logger.warning.mock_calls) == 1
+
+
+@patch('logging.getLogger')
+def test_deprecated_substitute_new_class(mock_get_logger):
+ """Test deprecated class object."""
+ mock_logger = MagicMock()
+ mock_get_logger.return_value = mock_logger
+
+ mock_object = MockUpdatedClass()
+ assert mock_object.new_property is True
+ assert mock_object.new_property is True
+ assert not mock_logger.warning.called
+
+
+@patch('logging.getLogger')
+def test_config_get_deprecated_old(mock_get_logger):
+ """Test deprecated class object."""
+ mock_logger = MagicMock()
+ mock_get_logger.return_value = mock_logger
+
+ config = {
+ 'old_name': True,
+ }
+ assert get_deprecated(config, 'new_name', 'old_name') is True
+ assert mock_logger.warning.called
+ assert len(mock_logger.warning.mock_calls) == 1
+
+
+@patch('logging.getLogger')
+def test_config_get_deprecated_new(mock_get_logger):
+ """Test deprecated class object."""
+ mock_logger = MagicMock()
+ mock_get_logger.return_value = mock_logger
+
+ config = {
+ 'new_name': True,
+ }
+ assert get_deprecated(config, 'new_name', 'old_name') is True
+ assert not mock_logger.warning.called
| ## Description:
Add tests for deprecation helpers | https://api.github.com/repos/home-assistant/core/pulls/7452 | 2017-05-05T15:08:04Z | 2017-05-06T17:10:49Z | 2017-05-06T17:10:49Z | 2017-08-12T20:54:26Z | 630 | home-assistant/core | 39,416 |
Version 0.55.1 (just pointing to new docs) | diff --git a/docs/advanced_concepts.md b/docs/advanced_concepts.md
index 319d17d640c9..3f3f130cceb9 100644
--- a/docs/advanced_concepts.md
+++ b/docs/advanced_concepts.md
@@ -159,11 +159,3 @@ chart.add_rows(data2)
Coming soon! Ping us in the [community forum](https://discuss.streamlit.io/) if
you just can't wait and have to have this info immediately.
-
-## Advanced caching
-
-Coming soon! Ping us in the [community forum](https://discuss.streamlit.io/) if
-you just can't wait and have to have this info immediately.
-
-Meanwhile, for an intro to caching in Streamlit, see the [data explorer
-tutorial](tutorial/create_a_data_explorer_app.md).
diff --git a/docs/api.md b/docs/api.md
index ea2efd5f48ae..f5f8c3800eef 100644
--- a/docs/api.md
+++ b/docs/api.md
@@ -297,6 +297,9 @@ reads the output from the local cache and passes it on to the caller.
The main limitation is that Streamlit’s cache feature doesn’t know about
changes that take place outside the body of the annotated function.
+For more information about the Streamlit cache, its configuration parameters,
+and its limitations, see [Caching](caching.md).
+
```eval_rst
.. autofunction:: streamlit.cache
```
diff --git a/docs/troubleshooting/sanity-checks.md b/docs/troubleshooting/sanity-checks.md
index 114bdd5023d4..6329cb1b3250 100644
--- a/docs/troubleshooting/sanity-checks.md
+++ b/docs/troubleshooting/sanity-checks.md
@@ -26,7 +26,7 @@ $ pip install --upgrade streamlit
$ streamlit version
```
-...and then verify that the version number printed is `0.55.0`.
+...and then verify that the version number printed is `0.55.1`.
**Try reproducing the issue now.** If not fixed, keep reading on.
@@ -43,7 +43,7 @@ st.write(st.__version__)
...then call `streamlit run` on your script and make sure it says the same
version as above. If not the same version, check out [these
-instructions](clean-install.html) for some sure-fire ways to set up your
+instructions](clean-install.md) for some sure-fire ways to set up your
environment.
## Check #4: Is your browser caching your app too aggressively?
diff --git a/docs/tutorial/run_streamlit_remotely.md b/docs/tutorial/run_streamlit_remotely.md
index 85a80dff2f26..e0aeb66a5842 100644
--- a/docs/tutorial/run_streamlit_remotely.md
+++ b/docs/tutorial/run_streamlit_remotely.md
@@ -57,7 +57,7 @@ Ignore the URLs that print on your terminal. Instead, since you're using
port-forwarding you should open your browser at <http://localhost:8501>.
If you see the Streamlit Hello page, everything is working! Otherwise, check
-out the [Troubleshooting page](../troubleshooting).
+out the [Troubleshooting page](../troubleshooting/index.md).
## Run your own code remotely
diff --git a/frontend/package.json b/frontend/package.json
index 7f9daac288af..acc693f665de 100644
--- a/frontend/package.json
+++ b/frontend/package.json
@@ -1,6 +1,6 @@
{
"name": "streamlit-browser",
- "version": "0.55.0",
+ "version": "0.55.1",
"private": true,
"homepage": ".",
"scripts": {
diff --git a/lib/setup.py b/lib/setup.py
index ce073e99a773..a1dacedb1f22 100644
--- a/lib/setup.py
+++ b/lib/setup.py
@@ -37,7 +37,7 @@ def readme():
setuptools.setup(
name="streamlit",
- version="0.55.0", # PEP-440
+ version="0.55.1", # PEP-440
description="Frontend library for machine learning engineers",
long_description=readme(),
url="https://streamlit.io",
diff --git a/lib/streamlit/caching.py b/lib/streamlit/caching.py
index af146ff0e272..94d09f86f984 100644
--- a/lib/streamlit/caching.py
+++ b/lib/streamlit/caching.py
@@ -198,7 +198,7 @@ def _get_mutated_output_error_message():
By default, Streamlit’s cache is immutable. You received this warning
because Streamlit thinks you modified a cached object.
- [Click here to see how to fix this issue.](https://docs.streamlit.io/advanced_concepts.html#advanced-caching)
+ [Click here to see how to fix this issue.](https://docs.streamlit.io/advanced_caching.html)
"""
).strip("\n")
diff --git a/lib/streamlit/hashing.py b/lib/streamlit/hashing.py
index 97fd66e52f30..8f5b716cc7d0 100644
--- a/lib/streamlit/hashing.py
+++ b/lib/streamlit/hashing.py
@@ -200,9 +200,9 @@ def my_func(...):
...
```
- Please see the `hash_funcs` [documentation]
- (https://streamlit.io/docs/advanced_concepts.html#advanced-caching)
- for more details.
+ Please see the [`hash_funcs`
+ documentation](https://streamlit.io/docs/advanced_caching.html) for
+ more details.
"""
% {"bad_type": str(bad_type).split("'")[1]}
).strip("\n")
diff --git a/scripts/update_version.py b/scripts/update_version.py
index a74c427ea185..51ce9d19fbf8 100755
--- a/scripts/update_version.py
+++ b/scripts/update_version.py
@@ -59,7 +59,7 @@
# there.
PYTHON = {
"lib/setup.py": r"(?P<pre>.*version=\").*(?P<post>\", # PEP-440$)",
- "docs/troubleshooting/common-issues.md": r"(?P<pre>.*number printed is `).*(?P<post>`.$)",
+ "docs/troubleshooting/sanity-checks.md": r"(?P<pre>.*number printed is `).*(?P<post>`.$)",
}
NODE = {"frontend/package.json": r'(?P<pre>^ "version": ").*(?P<post>",$)'}
| https://api.github.com/repos/streamlit/streamlit/pulls/1089 | 2020-02-12T11:49:14Z | 2020-02-12T19:28:47Z | 2020-02-12T19:28:47Z | 2020-02-15T22:59:03Z | 1,504 | streamlit/streamlit | 22,106 |
|
install.sh: sudo apt-get install python3-pip | diff --git a/install.sh b/install.sh
index 2d243de1..3ef2ae9a 100644
--- a/install.sh
+++ b/install.sh
@@ -45,7 +45,7 @@ if [ $choice == 1 ] || [ $choice == 2 ]; then
echo -e ${BLUE}"[✔] Loading ... "
if [ $choice == 1 ]; then
sudo apt-get update && apt-get upgrade
- sudo apt-get install python-pip
+ sudo apt-get install python3-pip
elif [ $choice == 2 ]; then # added arch linux support because of feature request #231
sudo pacman -Suy
sudo pacman -S python-pip
| Replaces #219 | https://api.github.com/repos/Z4nzu/hackingtool/pulls/253 | 2022-06-12T20:07:04Z | 2022-06-13T10:24:52Z | 2022-06-13T10:24:52Z | 2022-06-13T10:25:30Z | 161 | Z4nzu/hackingtool | 9,876 |
[manyvids] Improve extraction | diff --git a/youtube_dl/extractor/manyvids.py b/youtube_dl/extractor/manyvids.py
index e8d7163e4ab..6805102ba3b 100644
--- a/youtube_dl/extractor/manyvids.py
+++ b/youtube_dl/extractor/manyvids.py
@@ -1,11 +1,16 @@
# coding: utf-8
from __future__ import unicode_literals
+import re
+
from .common import InfoExtractor
+from ..compat import compat_str
from ..utils import (
determine_ext,
+ extract_attributes,
int_or_none,
str_to_int,
+ url_or_none,
urlencode_postdata,
)
@@ -20,17 +25,20 @@ class ManyVidsIE(InfoExtractor):
'id': '133957',
'ext': 'mp4',
'title': 'everthing about me (Preview)',
+ 'uploader': 'ellyxxix',
'view_count': int,
'like_count': int,
},
}, {
# full video
'url': 'https://www.manyvids.com/Video/935718/MY-FACE-REVEAL/',
- 'md5': 'f3e8f7086409e9b470e2643edb96bdcc',
+ 'md5': 'bb47bab0e0802c2a60c24ef079dfe60f',
'info_dict': {
'id': '935718',
'ext': 'mp4',
'title': 'MY FACE REVEAL',
+ 'description': 'md5:ec5901d41808b3746fed90face161612',
+ 'uploader': 'Sarah Calanthe',
'view_count': int,
'like_count': int,
},
@@ -41,15 +49,43 @@ def _real_extract(self, url):
webpage = self._download_webpage(url, video_id)
- video_url = self._search_regex(
- r'data-(?:video-filepath|meta-video)\s*=s*(["\'])(?P<url>(?:(?!\1).)+)\1',
- webpage, 'video URL', group='url')
+ info = self._search_regex(
+ r'''(<div\b[^>]*\bid\s*=\s*(['"])pageMetaDetails\2[^>]*>)''',
+ webpage, 'meta details', default='')
+ info = extract_attributes(info)
+
+ player = self._search_regex(
+ r'''(<div\b[^>]*\bid\s*=\s*(['"])rmpPlayerStream\2[^>]*>)''',
+ webpage, 'player details', default='')
+ player = extract_attributes(player)
+
+ video_urls_and_ids = (
+ (info.get('data-meta-video'), 'video'),
+ (player.get('data-video-transcoded'), 'transcoded'),
+ (player.get('data-video-filepath'), 'filepath'),
+ (self._og_search_video_url(webpage, secure=False, default=None), 'og_video'),
+ )
+
+ def txt_or_none(s, default=None):
+ return (s.strip() or default) if isinstance(s, compat_str) else default
+
+ uploader = txt_or_none(info.get('data-meta-author'))
+
+ def mung_title(s):
+ if uploader:
+ s = re.sub(r'^\s*%s\s+[|-]' % (re.escape(uploader), ), '', s)
+ return txt_or_none(s)
- title = self._html_search_regex(
- (r'<span[^>]+class=["\']item-title[^>]+>([^<]+)',
- r'<h2[^>]+class=["\']h2 m-0["\'][^>]*>([^<]+)'),
- webpage, 'title', default=None) or self._html_search_meta(
- 'twitter:title', webpage, 'title', fatal=True)
+ title = (
+ mung_title(info.get('data-meta-title'))
+ or self._html_search_regex(
+ (r'<span[^>]+class=["\']item-title[^>]+>([^<]+)',
+ r'<h2[^>]+class=["\']h2 m-0["\'][^>]*>([^<]+)'),
+ webpage, 'title', default=None)
+ or self._html_search_meta(
+ 'twitter:title', webpage, 'title', fatal=True))
+
+ title = re.sub(r'\s*[|-]\s+ManyVids\s*$', '', title) or title
if any(p in webpage for p in ('preview_videos', '_preview.mp4')):
title += ' (Preview)'
@@ -70,23 +106,56 @@ def _real_extract(self, url):
'X-Requested-With': 'XMLHttpRequest'
})
- if determine_ext(video_url) == 'm3u8':
- formats = self._extract_m3u8_formats(
- video_url, video_id, 'mp4', entry_protocol='m3u8_native',
- m3u8_id='hls')
- else:
- formats = [{'url': video_url}]
+ formats = []
+ for v_url, fmt in video_urls_and_ids:
+ v_url = url_or_none(v_url)
+ if not v_url:
+ continue
+ if determine_ext(v_url) == 'm3u8':
+ formats.extend(self._extract_m3u8_formats(
+ v_url, video_id, 'mp4', entry_protocol='m3u8_native',
+ m3u8_id='hls'))
+ else:
+ formats.append({
+ 'url': v_url,
+ 'format_id': fmt,
+ })
+
+ self._remove_duplicate_formats(formats)
+
+ for f in formats:
+ if f.get('height') is None:
+ f['height'] = int_or_none(
+ self._search_regex(r'_(\d{2,3}[02468])_', f['url'], 'video height', default=None))
+ if '/preview/' in f['url']:
+ f['format_id'] = '_'.join(filter(None, (f.get('format_id'), 'preview')))
+ f['preference'] = -10
+ if 'transcoded' in f['format_id']:
+ f['preference'] = f.get('preference', -1) - 1
+
+ self._sort_formats(formats)
+
+ def get_likes():
+ likes = self._search_regex(
+ r'''(<a\b[^>]*\bdata-id\s*=\s*(['"])%s\2[^>]*>)''' % (video_id, ),
+ webpage, 'likes', default='')
+ likes = extract_attributes(likes)
+ return int_or_none(likes.get('data-likes'))
- like_count = int_or_none(self._search_regex(
- r'data-likes=["\'](\d+)', webpage, 'like count', default=None))
- view_count = str_to_int(self._html_search_regex(
- r'(?s)<span[^>]+class="views-wrapper"[^>]*>(.+?)</span', webpage,
- 'view count', default=None))
+ def get_views():
+ return str_to_int(self._html_search_regex(
+ r'''(?s)<span\b[^>]*\bclass\s*=["']views-wrapper\b[^>]+>.+?<span\b[^>]+>\s*(\d[\d,.]*)\s*</span>''',
+ webpage, 'view count', default=None))
return {
'id': video_id,
'title': title,
- 'view_count': view_count,
- 'like_count': like_count,
'formats': formats,
+ 'description': txt_or_none(info.get('data-meta-description')),
+ 'uploader': txt_or_none(info.get('data-meta-author')),
+ 'thumbnail': (
+ url_or_none(info.get('data-meta-image'))
+ or url_or_none(player.get('data-video-screenshot'))),
+ 'view_count': get_views(),
+ 'like_count': get_likes(),
}
| <details>
<summary>Boilerplate: own code, improvement</summary>
## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Read [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site)
- [x] Read [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) and adjusted the code to meet them
- [x] Covered the code with tests (note that PRs without tests will be REJECTED)
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature
</details>
---
### Description of your *pull request* and other information
This PR provides an improved version of the ManyVids extractor, which
* extracts all formats from the page
* extracts description (see https://github.com/yt-dlp/yt-dlp/issues/4634), uploader, views, and likes
* downrates previews.
Please comment if this fixes issue #28758, where a user could not download a paid video using cookies from a logged-in browser session. The extractor doesn't support username/password authentication.
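A quick way to smoke-test the new multi-format extraction (a hypothetical snippet, not part of this PR — it just drives youtube-dl's public Python API against the full-video URL from the extractor's test cases):

```python
# Sketch: list every format the updated ManyVids extractor finds for the
# PR's test video. With this change, formats whose URL contains '/preview/'
# get a '_preview' suffix on their format_id and preference -10, so default
# format selection should prefer the full video over the preview.
import youtube_dl

with youtube_dl.YoutubeDL({'listformats': True}) as ydl:
    ydl.download(['https://www.manyvids.com/Video/935718/MY-FACE-REVEAL/'])
```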
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/31172 | 2022-08-15T15:06:47Z | 2022-10-10T18:26:32Z | 2022-10-10T18:26:32Z | 2022-10-10T18:26:32Z | 1,811 | ytdl-org/youtube-dl | 49,914 |
⬆ Bump dawidd6/action-download-artifact from 2.24.3 to 2.26.0 | diff --git a/.github/workflows/preview-docs.yml b/.github/workflows/preview-docs.yml
index 2af56e2bcaff2..cf0db59ab998c 100644
--- a/.github/workflows/preview-docs.yml
+++ b/.github/workflows/preview-docs.yml
@@ -16,7 +16,7 @@ jobs:
rm -rf ./site
mkdir ./site
- name: Download Artifact Docs
- uses: dawidd6/action-download-artifact@v2.24.3
+ uses: dawidd6/action-download-artifact@v2.26.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
workflow: build-docs.yml
diff --git a/.github/workflows/smokeshow.yml b/.github/workflows/smokeshow.yml
index d6206d697b8bb..421720433fb4e 100644
--- a/.github/workflows/smokeshow.yml
+++ b/.github/workflows/smokeshow.yml
@@ -20,7 +20,7 @@ jobs:
- run: pip install smokeshow
- - uses: dawidd6/action-download-artifact@v2.24.3
+ - uses: dawidd6/action-download-artifact@v2.26.0
with:
workflow: test.yml
commit: ${{ github.event.workflow_run.head_sha }}
| Bumps [dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact) from 2.24.3 to 2.26.0.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/dawidd6/action-download-artifact/commit/5e780fc7bbd0cac69fc73271ed86edf5dcb72d67"><code>5e780fc</code></a> Use <code>commit</code> as <code>head_sha</code> to reduce number of API calls (<a href="https://github-redirect.dependabot.com/dawidd6/action-download-artifact/issues/227">#227</a>)</li>
<li><a href="https://github.com/dawidd6/action-download-artifact/commit/b59d8c6a6c5c6c6437954f470d963c0b20ea7415"><code>b59d8c6</code></a> Add pagination to appropriate listWorkflowRunArtifacts call (<a href="https://github-redirect.dependabot.com/dawidd6/action-download-artifact/issues/225">#225</a>)</li>
<li><a href="https://github.com/dawidd6/action-download-artifact/commit/5004d5476e64af71d021e6c98993829d8820969d"><code>5004d54</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/dawidd6/action-download-artifact/issues/219">#219</a> from dawidd6/dependabot-npm_and_yarn-actions-artifact...</li>
<li><a href="https://github.com/dawidd6/action-download-artifact/commit/b1a9c91d1facc7d48fbea64ebcde5200d03974da"><code>b1a9c91</code></a> build(deps): bump <code>@actions/artifact</code> from 1.1.0 to 1.1.1</li>
<li>See full diff in <a href="https://github.com/dawidd6/action-download-artifact/compare/v2.24.3...v2.26.0">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=dawidd6/action-download-artifact&package-manager=github_actions&previous-version=2.24.3&new-version=2.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | https://api.github.com/repos/tiangolo/fastapi/pulls/6034 | 2023-02-23T11:21:48Z | 2023-03-10T18:50:08Z | 2023-03-10T18:50:08Z | 2023-03-10T18:50:09Z | 320 | tiangolo/fastapi | 23,290 |