qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
69,057,991 | I want to parse a web page that is really a PDF file, using Python.
Below is the link to a sample PDF web page:
<http://www.jsu.edu/ire/factbook/JSUFactbook14-15.pdf>
But when I parse the content of the web page or print a part of its text, the result is in an unreadable format and contains some unfamiliar characters.
This is a part of my code:
```
from bs4 import BeautifulSoup
from requests_html import HTMLSession

def search(url):
    session = HTMLSession()
    response = session.get(url, verify=False, timeout=2)
    if response is None:
        print('response empty')
        return
    content = response.content
    content = BeautifulSoup(content, "html.parser")
    content = str(content)
    print(content[:2000])
    location = content.find("University")
    return location

url = 'http://www.jsu.edu/ire/factbook/JSUFactbook14-15.pdf'
print(search(url))
```
I'm sure that the word "University" is in this PDF file, but I can't find it; also, when I print a part of its text (2,000 characters), the output is unreadable, like this:
```
6885 0 obj
<>stream
h��W{PT�����ee�]���+"� �A���5(�6�#e[�d��� �iL��j#�4�6ffmǶ�%i�tZj5�֚iG'IM[��w��]���Lgڿ�p�������c���YH��P]�4�K
�]^&�_���錄
���=giʎV�����}�����I����$W���CBR)κ�����N�JN���T�
�2=����\z�R,�Z�)p9UZ����������3�}�d<`� ?_����%]�ű���I�~��>��d�(|<lx##rq�5��o�j�i�eo�'9�ǘryk��z���� ="" f���*����#�7�h8����0s��\u���]+.�="B��~{J�"��Ly�q93?'�" vy���0�����}�*��c��'="|y�ȝ!{��}�" wx�i�a8�=""
```
Can anyone help me print the output in a pretty format and find a specific word inside it? | 2021/09/04 | [
"https://Stackoverflow.com/questions/69057991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6911480/"
] | Neither C nor C++ supports named parameters.
The closest you can come to this is to define a `struct` with all of the parameters:
```
struct cal_params {
    int year;
    int month;
    int day;
};
```
Define the function to take an instance of that struct:
```
char *calendar(struct cal_params params)
{
    ...
}
```
Then call the function with a compound literal using designated initializers:
```
char *str = calendar((struct cal_params){ .year=2021, .month=9, .day=4});
``` | C++ does not have this feature. As shown in [this answer](https://stackoverflow.com/a/69058090/4117728) you can emulate it by naming members of a struct. Alternatively you can name the individual types of the arguments:
```
#include <string>
#include <iostream>
struct Year { int value; };
struct Month { int value; };
struct Day { int value; };
std::string calendar(Year y, Month m, Day d)
{
    return std::to_string(y.value) + "-" + std::to_string(m.value) + "-" + std::to_string(d.value);
}

int main ()
{
    std::cout << calendar(Year{2001}, Month{31}, Day{42});
}
``` |
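The paired answers in this row discuss C/C++ named parameters rather than the PDF question above. For the question itself, BeautifulSoup cannot parse PDF streams; a minimal sketch of the usual approach, assuming the third-party `pypdf` package is installed (`pip install pypdf`) — the helper name below is illustrative:

```python
import io

import requests
from pypdf import PdfReader

def find_in_pdf(url, word):
    # Download the raw PDF bytes and hand them to a real PDF parser.
    response = requests.get(url, timeout=10)
    reader = PdfReader(io.BytesIO(response.content))
    # Join the extracted text of every page, then search it like any string.
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return text.find(word)

print(find_in_pdf('http://www.jsu.edu/ire/factbook/JSUFactbook14-15.pdf', 'University'))
```

The binary garbage shown in the question is the PDF's compressed object streams printed verbatim; a PDF-aware parser decompresses them before extracting text.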
69,057,991 | I want to parse a web page that is really a PDF file, using Python.
Below is the link to a sample PDF web page:
<http://www.jsu.edu/ire/factbook/JSUFactbook14-15.pdf>
But when I parse the content of the web page or print a part of its text, the result is in an unreadable format and contains some unfamiliar characters.
This is a part of my code:
```
from bs4 import BeautifulSoup
from requests_html import HTMLSession

def search(url):
    session = HTMLSession()
    response = session.get(url, verify=False, timeout=2)
    if response is None:
        print('response empty')
        return
    content = response.content
    content = BeautifulSoup(content, "html.parser")
    content = str(content)
    print(content[:2000])
    location = content.find("University")
    return location

url = 'http://www.jsu.edu/ire/factbook/JSUFactbook14-15.pdf'
print(search(url))
```
I'm sure that the word "University" is in this PDF file, but I can't find it; also, when I print a part of its text (2,000 characters), the output is unreadable, like this:
```
6885 0 obj
<>stream
h��W{PT�����ee�]���+"� �A���5(�6�#e[�d��� �iL��j#�4�6ffmǶ�%i�tZj5�֚iG'IM[��w��]���Lgڿ�p�������c���YH��P]�4�K
�]^&�_���錄
���=giʎV�����}�����I����$W���CBR)κ�����N�JN���T�
�2=����\z�R,�Z�)p9UZ����������3�}�d<`� ?_����%]�ű���I�~��>��d�(|<lx##rq�5��o�j�i�eo�'9�ǘryk��z���� ="" f���*����#�7�h8����0s��\u���]+.�="B��~{J�"��Ly�q93?'�" vy���0�����}�*��c��'="|y�ȝ!{��}�" wx�i�a8�=""
```
Can anyone help me print the output in a pretty format and find a specific word inside it? | 2021/09/04 | [
"https://Stackoverflow.com/questions/69057991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6911480/"
] | Neither C nor C++ supports named parameters.
The closest you can come to this is to define a `struct` with all of the parameters:
```
struct cal_params {
    int year;
    int month;
    int day;
};
```
Define the function to take an instance of that struct:
```
char *calendar(struct cal_params params)
{
    ...
}
```
Then call the function with a compound literal using designated initializers:
```
char *str = calendar((struct cal_params){ .year=2021, .month=9, .day=4});
``` | In C++20, designated initializers were added, but the initialization **must be in order**, matching the order of declaration.
```cpp
struct date_t {
    int year, month, day;
};

std::string make_calendar(date_t date) {
    return std::format("{0}:{1}:{2}", date.year, date.month, date.day);
}
```
Then, you can call it like:
```cpp
make_calendar({.year = 2001, .month = 9, .day = 4})
``` |
67,949,441 | I am a beginner Python programmer, and I ran into a problem that I could not resolve, to do with a list index being out of range.
```
originalList = [['a','b'],['c','d'],['e','f']]
outputList = []
for i in range(len(originalList)):
    if i == len(originalList):
        outputList.append([originalList[i],originalList[0]])
    else:
        outputList.append([originalList[i], originalList[i+1]])
print(outputList)
```
I would need the output to be:
```
outputList = [[['a','b'],['c','d']],[['c','d'],['e','f']],[['e','f'],['a','b']]]
```
However, when I try to run this code, it gives me a "list index out of range" error at line 7.
I do not know why this error is happening, since Python shouldn't even run the code at line 7 if **i** is equal to the length of the list. | 2021/06/12 | [
"https://Stackoverflow.com/questions/67949441",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16207117/"
] | Use readarray.
```
$ variable='13423exa*lkco3nr*sw
kjenve*kejnv'
$ readarray -d '*' -t arr < <(printf "%s" "$variable")
$ declare -p arr
declare -a arr=([0]="13423exa" [1]="lkco3nr" [2]=$'sw\nkjenve' [3]="kejnv")
```
> mapfile: -d: invalid option
Update bash, then use `readarray`.
If not, replace the separator with a zero byte and read it element by element with `read -d ''`:
```
arr=()
while IFS= read -d '' -r e || [[ -n "$e" ]]; do
    arr+=("$e")
done < <(printf "%s" "$variable" | tr '*' '\0');
declare -p arr
declare -a arr=([0]="13423exa" [1]="lkco3nr" [2]=$'sw\nkjenve' [3]="kejnv")
``` | You can use the `readarray` command, as in the following example:
```sh
readarray -d ':' -t my_array <<< "a:b:c:d:"
for (( i = 0; i < ${#my_array[*]}; i++ )); do
    echo "${my_array[i]}"
done
```
Here the `-d` parameter defines the delimiter, and `-t` removes the trailing delimiter. |
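The paired answers in this row discuss bash `readarray` rather than the Python question above. For the question itself, `range(len(originalList))` stops at `len(originalList) - 1`, so the `if` branch never fires and `originalList[i+1]` overflows on the last iteration; a minimal sketch of a wrap-around fix using the modulo operator:

```python
# Pair each element with its successor, wrapping the last element to the start.
originalList = [['a', 'b'], ['c', 'd'], ['e', 'f']]
outputList = [
    [originalList[i], originalList[(i + 1) % len(originalList)]]
    for i in range(len(originalList))
]
print(outputList)
# [[['a', 'b'], ['c', 'd']], [['c', 'd'], ['e', 'f']], [['e', 'f'], ['a', 'b']]]
```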
67,949,441 | I am a beginner Python programmer, and I ran into a problem that I could not resolve, to do with a list index being out of range.
```
originalList = [['a','b'],['c','d'],['e','f']]
outputList = []
for i in range(len(originalList)):
    if i == len(originalList):
        outputList.append([originalList[i],originalList[0]])
    else:
        outputList.append([originalList[i], originalList[i+1]])
print(outputList)
```
I would need the output to be:
```
outputList = [[['a','b'],['c','d']],[['c','d'],['e','f']],[['e','f'],['a','b']]]
```
However, when I try to run this code, it gives me a "list index out of range" error at line 7.
I do not know why this error is happening, since Python shouldn't even run the code at line 7 if **i** is equal to the length of the list. | 2021/06/12 | [
"https://Stackoverflow.com/questions/67949441",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16207117/"
] | You can use the `readarray` command, as in the following example:
```sh
readarray -d ':' -t my_array <<< "a:b:c:d:"
for (( i = 0; i < ${#my_array[*]}; i++ )); do
    echo "${my_array[i]}"
done
```
Here the `-d` parameter defines the delimiter, and `-t` removes the trailing delimiter. | Use an ending character different from newline:
```
end=.
read -a array -d "$end" <<< "$v$end"
```
Of course, this solution supposes there is at least one character not used in your input variable. |
67,949,441 | I am a beginner Python programmer, and I ran into a problem that I could not resolve, to do with a list index being out of range.
```
originalList = [['a','b'],['c','d'],['e','f']]
outputList = []
for i in range(len(originalList)):
    if i == len(originalList):
        outputList.append([originalList[i],originalList[0]])
    else:
        outputList.append([originalList[i], originalList[i+1]])
print(outputList)
```
I would need the output to be:
```
outputList = [[['a','b'],['c','d']],[['c','d'],['e','f']],[['e','f'],['a','b']]]
```
However, when I try to run this code, it gives me a "list index out of range" error at line 7.
I do not know why this error is happening, since Python shouldn't even run the code at line 7 if **i** is equal to the length of the list. | 2021/06/12 | [
"https://Stackoverflow.com/questions/67949441",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16207117/"
] | Use readarray.
```
$ variable='13423exa*lkco3nr*sw
kjenve*kejnv'
$ readarray -d '*' -t arr < <(printf "%s" "$variable")
$ declare -p arr
declare -a arr=([0]="13423exa" [1]="lkco3nr" [2]=$'sw\nkjenve' [3]="kejnv")
```
> mapfile: -d: invalid option
Update bash, then use `readarray`.
If not, replace the separator with a zero byte and read it element by element with `read -d ''`:
```
arr=()
while IFS= read -d '' -r e || [[ -n "$e" ]]; do
    arr+=("$e")
done < <(printf "%s" "$variable" | tr '*' '\0');
declare -p arr
declare -a arr=([0]="13423exa" [1]="lkco3nr" [2]=$'sw\nkjenve' [3]="kejnv")
``` | Use an ending character different from newline:
```
end=.
read -a array -d "$end" <<< "$v$end"
```
Of course, this solution supposes there is at least one character not used in your input variable. |
52,340,497 | When using azure-cli in Python 3.5 and calling the commands from a script, I have no control over the output in the console.
When a command is executed, it prints the result to the console, but I'm struggling to just take the result and put it in a variable to analyze it.
```
from azure.cli.core import get_default_cli

class AzureCmd():
    def __init__(self, username, password):
        self.username = username
        self.password = password

    def login(self, tenant):
        login_successfull = get_default_cli().invoke(['login',
                                                      '--tenant', tenant,
                                                      '--username', self.username,
                                                      '--password', self.password]) == 0
        return login_successfull

    def list_vm(self, tenant):
        list_vm = get_default_cli().invoke(['vm', 'list', '--output', 'json'])
        print(list_vm)

tenant = 'mytenant.onmicrosoft.com'
cmd = AzureCmd('login', 'mypassword')
cmd.login(tenant)
cmd.list_vm(tenant)
```
Here is my script attempt.
What I want to achieve: no output when cmd.login(tenant) is executed.
Instead of getting 0 (success) or 1 (failure) in my variables login\_successfull and list\_vm, I want to save the output of get\_default\_cli().invoke() in them. | 2018/09/15 | [
"https://Stackoverflow.com/questions/52340497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6051042/"
] | I ran into the same problem and found a solution. I also found that many people offered the standard solution that normally works in most cases, but they didn't verify it works for this scenario, and it turns out az cli is an edge case.
I think the issue has something to do with az cli being based on Python.
`Win10CommandPrompt:\> where az
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin\az.cmd`
If you look in that file, you'll see something like this and discover that Azure CLI is just Python:
```
python.exe -IBm azure.cli
```
So to do what you want to do, try this (it works for me):
```
import subprocess
out = subprocess.run(['python', '-IBm', 'azure.cli', '-h'], stdout=subprocess.PIPE).stdout.decode('utf-8')
print(out)
# this is equivalent to 'az -h'
```
The above syntax won't work unless every single arg is an element of a comma-separated list of strings; I found a syntax I like a lot more after reading how to do multiple args with Python Popen:
```
import subprocess
azcmd = "az ad sp create-for-rbac --name " + SPName + " --scopes /subscriptions/" + subscriptionid
out = subprocess.run(azcmd, shell=True, stdout=subprocess.PIPE).stdout.decode('utf-8')
print(out)
``` | I think you can use subprocess and call the az cli to get the output instead of using get\_default\_cli.
```
import subprocess
import json
process = subprocess.Popen(['az','network', 'ddos-protection', 'list'], stdout=subprocess.PIPE)
out, err = process.communicate()
d = json.loads(out)
print(d)
``` |
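Another hedged option for the question above: `invoke()` on recent azure-cli releases can write to a file-like object via its `out_file` parameter (treat this as an assumption and verify it against your installed version), which avoids shelling out entirely. The helper name below is illustrative:

```python
# Sketch: capture the CLI's JSON output in an in-memory buffer.
# Assumption: this azure-cli version's invoke() accepts out_file.
import io
import json
from azure.cli.core import get_default_cli

def az_cli(args):
    cli = get_default_cli()
    buf = io.StringIO()
    exit_code = cli.invoke(args, out_file=buf)  # 0 on success
    output = buf.getvalue()
    return exit_code, json.loads(output) if output.strip() else None

code, vms = az_cli(['vm', 'list', '--output', 'json'])
print(code, vms)
```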
52,340,497 | When using azure-cli in Python 3.5 and calling the commands from a script, I have no control over the output in the console.
When a command is executed, it prints the result to the console, but I'm struggling to just take the result and put it in a variable to analyze it.
```
from azure.cli.core import get_default_cli

class AzureCmd():
    def __init__(self, username, password):
        self.username = username
        self.password = password

    def login(self, tenant):
        login_successfull = get_default_cli().invoke(['login',
                                                      '--tenant', tenant,
                                                      '--username', self.username,
                                                      '--password', self.password]) == 0
        return login_successfull

    def list_vm(self, tenant):
        list_vm = get_default_cli().invoke(['vm', 'list', '--output', 'json'])
        print(list_vm)

tenant = 'mytenant.onmicrosoft.com'
cmd = AzureCmd('login', 'mypassword')
cmd.login(tenant)
cmd.list_vm(tenant)
```
Here is my script attempt.
What I want to achieve: no output when cmd.login(tenant) is executed.
Instead of getting 0 (success) or 1 (failure) in my variables login\_successfull and list\_vm, I want to save the output of get\_default\_cli().invoke() in them. | 2018/09/15 | [
"https://Stackoverflow.com/questions/52340497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6051042/"
] | I faced the same problem while trying to save the log of an Azure Container Instance. None of the above solutions worked exactly as they are. After debugging the azure cli Python code
(file: \Python39\Lib\site-packages\azure\cli\command\_modules\container\custom.py, function: container\_logs()), I see that the container logs are just printed to the console but not returned. If you want to save the logs to a variable, add the return line (not exactly a great solution, but it works for now). Hoping MS Azure updates their azure cli in upcoming versions.
```
def container_logs(cmd, resource_group_name, name, container_name=None, follow=False):
    """Tail a container instance log. """
    container_client = cf_container(cmd.cli_ctx)
    container_group_client = cf_container_groups(cmd.cli_ctx)
    container_group = container_group_client.get(resource_group_name, name)
    # If container name is not present, use the first container.
    if container_name is None:
        container_name = container_group.containers[0].name
    if not follow:
        log = container_client.list_logs(resource_group_name, name, container_name)
        print(log.content)
        # Return the log
        return log.content
    else:
        _start_streaming(
            terminate_condition=_is_container_terminated,
            terminate_condition_args=(container_group_client, resource_group_name, name, container_name),
            shutdown_grace_period=5,
            stream_target=_stream_logs,
            stream_args=(container_client, resource_group_name, name, container_name, container_group.restart_policy))
```
With this modification, along with the solutions above (using get\_default\_cli), we can store the log of the Azure container instance in a variable.
```
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    res = cli.invoke(args)  # res holds the exit code, not the output
    if cli.result.result:
        jsondata = cli.result.result
        return jsondata
    elif cli.result.error:
        print(cli.result.error)
``` | I think you can use subprocess and call the az cli to get the output instead of using get\_default\_cli.
```
import subprocess
import json
process = subprocess.Popen(['az','network', 'ddos-protection', 'list'], stdout=subprocess.PIPE)
out, err = process.communicate()
d = json.loads(out)
print(d)
``` |
52,340,497 | When using azure-cli in Python 3.5 and calling the commands from a script, I have no control over the output in the console.
When a command is executed, it prints the result to the console, but I'm struggling to just take the result and put it in a variable to analyze it.
```
from azure.cli.core import get_default_cli

class AzureCmd():
    def __init__(self, username, password):
        self.username = username
        self.password = password

    def login(self, tenant):
        login_successfull = get_default_cli().invoke(['login',
                                                      '--tenant', tenant,
                                                      '--username', self.username,
                                                      '--password', self.password]) == 0
        return login_successfull

    def list_vm(self, tenant):
        list_vm = get_default_cli().invoke(['vm', 'list', '--output', 'json'])
        print(list_vm)

tenant = 'mytenant.onmicrosoft.com'
cmd = AzureCmd('login', 'mypassword')
cmd.login(tenant)
cmd.list_vm(tenant)
```
Here is my script attempt.
What I want to achieve: no output when cmd.login(tenant) is executed.
Instead of getting 0 (success) or 1 (failure) in my variables login\_successfull and list\_vm, I want to save the output of get\_default\_cli().invoke() in them. | 2018/09/15 | [
"https://Stackoverflow.com/questions/52340497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6051042/"
] | I ran into the same problem and found a solution. I also found that many people offered the standard solution that normally works in most cases, but they didn't verify it works for this scenario, and it turns out az cli is an edge case.
I think the issue has something to do with az cli being based on Python.
`Win10CommandPrompt:\> where az
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin\az.cmd`
If you look in that file, you'll see something like this and discover that Azure CLI is just Python:
```
python.exe -IBm azure.cli
```
So to do what you want to do, try this (it works for me):
```
import subprocess
out = subprocess.run(['python', '-IBm', 'azure.cli', '-h'], stdout=subprocess.PIPE).stdout.decode('utf-8')
print(out)
# this is equivalent to 'az -h'
```
The above syntax won't work unless every single arg is an element of a comma-separated list of strings; I found a syntax I like a lot more after reading how to do multiple args with Python Popen:
```
import subprocess
azcmd = "az ad sp create-for-rbac --name " + SPName + " --scopes /subscriptions/" + subscriptionid
out = subprocess.run(azcmd, shell=True, stdout=subprocess.PIPE).stdout.decode('utf-8')
print(out)
``` | Well, we can execute the Azure CLI commands in Python as shown below.
Here, the *res* variable stores a value of integer type (the exit code), so we cannot get the JSON response from it. To store the response in a variable, we need to use cli.result.result.
```
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    res = cli.invoke(args)  # res holds the exit code, not the output
    if cli.result.result:
        jsondata = cli.result.result
        return jsondata
    elif cli.result.error:
        print(cli.result.error)
``` |
52,340,497 | When using azure-cli in Python 3.5 and calling the commands from a script, I have no control over the output in the console.
When a command is executed, it prints the result to the console, but I'm struggling to just take the result and put it in a variable to analyze it.
```
from azure.cli.core import get_default_cli

class AzureCmd():
    def __init__(self, username, password):
        self.username = username
        self.password = password

    def login(self, tenant):
        login_successfull = get_default_cli().invoke(['login',
                                                      '--tenant', tenant,
                                                      '--username', self.username,
                                                      '--password', self.password]) == 0
        return login_successfull

    def list_vm(self, tenant):
        list_vm = get_default_cli().invoke(['vm', 'list', '--output', 'json'])
        print(list_vm)

tenant = 'mytenant.onmicrosoft.com'
cmd = AzureCmd('login', 'mypassword')
cmd.login(tenant)
cmd.list_vm(tenant)
```
Here is my script attempt.
What I want to achieve: no output when cmd.login(tenant) is executed.
Instead of getting 0 (success) or 1 (failure) in my variables login\_successfull and list\_vm, I want to save the output of get\_default\_cli().invoke() in them. | 2018/09/15 | [
"https://Stackoverflow.com/questions/52340497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6051042/"
] | I faced the same problem while trying to save the log of an Azure Container Instance. None of the above solutions worked exactly as they are. After debugging the azure cli Python code
(file: \Python39\Lib\site-packages\azure\cli\command\_modules\container\custom.py, function: container\_logs()), I see that the container logs are just printed to the console but not returned. If you want to save the logs to a variable, add the return line (not exactly a great solution, but it works for now). Hoping MS Azure updates their azure cli in upcoming versions.
```
def container_logs(cmd, resource_group_name, name, container_name=None, follow=False):
    """Tail a container instance log. """
    container_client = cf_container(cmd.cli_ctx)
    container_group_client = cf_container_groups(cmd.cli_ctx)
    container_group = container_group_client.get(resource_group_name, name)
    # If container name is not present, use the first container.
    if container_name is None:
        container_name = container_group.containers[0].name
    if not follow:
        log = container_client.list_logs(resource_group_name, name, container_name)
        print(log.content)
        # Return the log
        return log.content
    else:
        _start_streaming(
            terminate_condition=_is_container_terminated,
            terminate_condition_args=(container_group_client, resource_group_name, name, container_name),
            shutdown_grace_period=5,
            stream_target=_stream_logs,
            stream_args=(container_client, resource_group_name, name, container_name, container_group.restart_policy))
```
With this modification, along with the solutions above (using get\_default\_cli), we can store the log of the Azure container instance in a variable.
```
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    res = cli.invoke(args)  # res holds the exit code, not the output
    if cli.result.result:
        jsondata = cli.result.result
        return jsondata
    elif cli.result.error:
        print(cli.result.error)
``` | Well, we can execute the Azure CLI commands in Python as shown below.
Here, the *res* variable stores a value of integer type (the exit code), so we cannot get the JSON response from it. To store the response in a variable, we need to use cli.result.result.
```
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    res = cli.invoke(args)  # res holds the exit code, not the output
    if cli.result.result:
        jsondata = cli.result.result
        return jsondata
    elif cli.result.error:
        print(cli.result.error)
``` |
63,136,066 | I am working on a codebase using Django 1.9, and I am busy getting everything ready to upgrade to 1.10.
I have run into an issue after migrating from python-social-auth to social-auth-app-django. I have used the steps found [here](https://github.com/omab/python-social-auth/blob/master/MIGRATING_TO_SOCIAL.md)
After updating my settings and url files, I ran into the below error. Does anyone know how I can get around this?
```
Running migrations:
Rendering model states... DONE
Applying social_django.0006_partial... OK
Applying social_django.0007_code_timestamp... OK
Applying social_django.0008_partial_timestamp... OK
Applying social_django.0009_auto_20191118_0520...Traceback (most recent call last):
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.IntegrityError: column "modified" contains null values
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/commands/migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/operations/fields.py", line 62, in database_forwards
field,
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/base/schema.py", line 396, in add_field
self.execute(sql, params)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/base/schema.py", line 110, in execute
cursor.execute(sql, params)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: column "modified" contains null values
``` | 2020/07/28 | [
"https://Stackoverflow.com/questions/63136066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11305355/"
] | This usually happens when you upgrade the module and the newer module's models have new constraints; in your case:
> column "modified" contains null values
One way is to delete just the migrations that are causing the error. For this, go to the migrations folder and manually delete the files with names like `0009_auto_20191118_0520`; you can delete probably all of them except the `0001_initial.py` file. After that run
`python ./manage.py makemigrations social_django`; it should update your database.
You can also consider clearing the migration history for the social\_django app, with the command:
```
$ python manage.py migrate --fake social_django zero
```
Follow the tutorial [here](https://simpleisbetterthancomplex.com/tutorial/2016/07/26/how-to-reset-migrations.html) on how you can do that.
**Edit**:
You can overcome this by downgrading to a lower version of social-auth-app-django. The latest version you can get to on Django 1.9 is social-auth-app-django==3.1.0 | I was able to overcome this by downgrading to a lower version of social-auth-app-django. The latest version I was able to get to on Django 1.9 was social-auth-app-django==3.1.0 |
63,136,066 | I am working on a codebase using Django 1.9, and I am busy getting everything ready to upgrade to 1.10.
I have run into an issue after migrating from python-social-auth to social-auth-app-django. I have used the steps found [here](https://github.com/omab/python-social-auth/blob/master/MIGRATING_TO_SOCIAL.md)
After updating my settings and url files, I ran into the below error. Does anyone know how I can get around this?
```
Running migrations:
Rendering model states... DONE
Applying social_django.0006_partial... OK
Applying social_django.0007_code_timestamp... OK
Applying social_django.0008_partial_timestamp... OK
Applying social_django.0009_auto_20191118_0520...Traceback (most recent call last):
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.IntegrityError: column "modified" contains null values
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/core/management/commands/migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/migrations/operations/fields.py", line 62, in database_forwards
field,
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/base/schema.py", line 396, in add_field
self.execute(sql, params)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/base/schema.py", line 110, in execute
cursor.execute(sql, params)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/brendan/venvs/social/lib/python3.6/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: column "modified" contains null values
``` | 2020/07/28 | [
"https://Stackoverflow.com/questions/63136066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11305355/"
] | This usually happens when you upgrade the module and the newer module's models have new constraints; in your case:
> column "modified" contains null values
One way is to delete just the migrations that are causing the error. For this, go to the migrations folder and manually delete the files with names like `0009_auto_20191118_0520`; you can delete probably all of them except the `0001_initial.py` file. After that run
`python ./manage.py makemigrations social_django`; it should update your database.
You can also consider clearing the migration history for the social\_django app, with the command:
```
$ python manage.py migrate --fake social_django zero
```
Follow the tutorial [here](https://simpleisbetterthancomplex.com/tutorial/2016/07/26/how-to-reset-migrations.html) on how you can do that.
**Edit**:
You can overcome this by downgrading to a lower version of social-auth-app-django. The latest version you can get to on Django 1.9 is social-auth-app-django==3.1.0 | I hit the exact same issue. (`pip install "python-social-auth==0.2.21"` -> set `social.apps.django_app.default` in `INSTALLED_APPS` -> `python manage.py migrate` -> `pip install python-social-auth[django]` -> set `social_django` in `INSTALLED_APPS` -> `python manage.py migrate` = `column "modified" contains null values`).
At first I thought this was a conflict between the model and the changes in the migrations file. E.g. from here:
[django.db.utils.IntegrityError: column "venue\_city" contains null values](https://stackoverflow.com/questions/37241819/django-db-utils-integrityerror-column-venue-city-contains-null-values)
>
> Looks like you added null=True after created migration file
However, that doesn't look right because `/usr/local/lib/python2.7/site-packages/social_django/models.py` does not include `null=True`:
```
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
```
Instead, I suspect the issue is that the default value used to update existing rows when creating the column is null, which immediately conflicts. This is discussed here:
[Django Migrations Add Field with Default as Function of Model](https://stackoverflow.com/questions/29787853/django-migrations-add-field-with-default-as-function-of-model)
I got past this error by adding `null=True` to the `modified` field in `/usr/local/lib/python2.7/site-packages/social_django/migrations/0009_auto_20191118_0520.py` (or wherever social\_django is installed) and re-running migrations.
```
    migrations.AddField(
        model_name='usersocialauth',
        name='created',
        field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now),
        preserve_default=False,
    ),
    migrations.AddField(
        model_name='usersocialauth',
        name='modified',
-       field=models.DateTimeField(auto_now=True),
+       field=models.DateTimeField(auto_now=True, null=True),
    ),
```
A better way might be to set `default=django.utils.timezone.now` as is already done for the `created` field. |
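Building on the `null=True` workaround above, a hedged sketch of a follow-up data migration that backfills the column afterwards; the migration dependency and file placement are illustrative, not part of the package:

```python
# Illustrative data migration: fill null "modified" values so the NOT NULL
# constraint can be restored later. Adjust app/migration names to your project.
from django.db import migrations
from django.utils import timezone

def backfill_modified(apps, schema_editor):
    UserSocialAuth = apps.get_model('social_django', 'UserSocialAuth')
    UserSocialAuth.objects.filter(modified__isnull=True).update(modified=timezone.now())

class Migration(migrations.Migration):
    dependencies = [('social_django', '0009_auto_20191118_0520')]
    operations = [migrations.RunPython(backfill_modified, migrations.RunPython.noop)]
```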
66,922,359 | I have a simple graph and need to draw it on my screen; here is my code:
```
def gera_grafo(matriz):
    grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
    nx.draw(grafo)
    plt.show()
    return grafo
```
Where matrix is an adjacency list containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance. | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | As stated for this question: [networkx shows random\_state\_index is incorrect](https://stackoverflow.com/questions/66920533/networkx-shows-random-state-index-is-incorrect)
There was a problem with `decorator=5.0.0`. As discussed in the related issue on GitHub (<https://github.com/networkx/networkx/issues/4718>), a fixed `decorator`>=5.0.x should be available soon. So either wait a little bit to upgrade, or downgrade to an old version as suggested in the SO question above.
### Edit: `decorator`==5.0.5 or >=5.0.7 fixes the error
As discussed in the issue linked above, decorator has now been updated and fixed. | If you are using MacOS - BigSur, networkx won't work the way you want it to. I needed to go and open my project in Ubuntu. |
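Before and after upgrading, it can help to confirm which `decorator` version the failing environment actually resolves; a minimal sketch, assuming Python 3.8+ for `importlib.metadata`:

```python
# Per the answers above, decorator 5.0.5 and >=5.0.7 are fixed, while other
# early 5.0.x releases trigger the networkx random_state_index error.
from importlib.metadata import version
print(version("decorator"))
```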
66,922,359 | I have a simple graph and need to draw it on my screen; here is my code:
```
def gera_grafo(matriz):
    grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
    nx.draw(grafo)
    plt.show()
    return grafo
```
Where matrix is an adjacency list containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance. | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | As stated for this question: [networkx shows random\_state\_index is incorrect](https://stackoverflow.com/questions/66920533/networkx-shows-random-state-index-is-incorrect)
There was a problem with `decorator=5.0.0`. As discussed in the related issue on GitHub (<https://github.com/networkx/networkx/issues/4718>), a fixed `decorator`>=5.0.x should be available soon. So either wait a little bit to upgrade, or downgrade to an old version as suggested in the SO question above.
### Edit: `decorator`==5.0.5 or >=5.0.7 fixes the error
As discussed in the issue linked above, decorator has now been updated and fixed. | I was using `decorator==5.0.6`, and got the same error.
However, upgrading to 5.0.7 solves my problem. |
66,922,359 | I have a simple graph and need to draw it on my screen; here is my code:
```
def gera_grafo(matriz):
    grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
    nx.draw(grafo)
    plt.show()
    return grafo
```
Where matrix is an adjacency list containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance. | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | As stated for this question: [networkx shows random\_state\_index is incorrect](https://stackoverflow.com/questions/66920533/networkx-shows-random-state-index-is-incorrect)
There was a problem with `decorator=5.0.0`. As discussed in the related issue on GitHub (<https://github.com/networkx/networkx/issues/4718>), a fixed `decorator`>=5.0.x should be available soon. So either wait a little bit to upgrade, or downgrade to an old version as suggested in the SO question above.
### Edit: `decorator`==5.0.5 or >=5.0.7 fixes the error
As discussed in the issue linked above, decorator has now been updated and fixed. | It worked for me after downgrading networkx and decorator:
```
networkx 2.3 pypi_0 pypi
decorator 4.3.0 pypi_0 pypi
```
OS: Mac OS BigSur |
66,922,359 | I have a simple graph and need to draw it on my screen; here is my code:
```
def gera_grafo(matriz):
    grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
    nx.draw(grafo)
    plt.show()
    return grafo
```
Where matrix is an adjacency list containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance. | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | As stated for this question: [networkx shows random\_state\_index is incorrect](https://stackoverflow.com/questions/66920533/networkx-shows-random-state-index-is-incorrect)
There was a problem with `decorator=5.0.0`. As discussed in the related issue on GitHub (<https://github.com/networkx/networkx/issues/4718>), a fixed `decorator`>=5.0.x should be available soon. So either wait a little bit to upgrade, or downgrade to an old version as suggested in the SO question above.
### Edit: `decorator`==5.0.5 or >=5.0.7 fixes the error
As discussed in the issue linked above, decorator has now been updated and fixed. | Since I am using Windows 10, the suggestion given by Sparky05 worked for me: updating the version of the decorator module in Python with `pip install decorator==5.0.7`.
You can also update the version of the networkx library. I will share the link for updating it later, but it will not work if you are using the Spyder IDE. It will work in VSCode. |
66,922,359 | I have a simple graph and need to draw it on my screen; here is my code:
```
def gera_grafo(matriz):
    grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
    nx.draw(grafo)
    plt.show()
    return grafo
```
Where matrix is an adjacency list containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance. | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | I was using `decorator==5.0.6`, and got the same error.
However, upgrading to 5.0.7 solves my problem. | If you are using MacOS - BigSur, networkx won't work the way you want it to. I needed to go and open my project in Ubuntu. |
66,922,359 | I have a simple graph and need to draw it on my screen; here is my code:
```
def gera_grafo(matriz):
    grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
    nx.draw(grafo)
    plt.show()
    return grafo
```
Where matrix is an adjacency list containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance. | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | If you are using MacOS - BigSur, networkx won't work the way you want it to. I needed to go and open my project in Ubuntu. | It worked for me after downgrading networkx and decorator:
```
networkx 2.3 pypi_0 pypi
decorator 4.3.0 pypi_0 pypi
```
OS: Mac OS BigSur |
66,922,359 | I have a simple graph and need to draw it on my screen; here is my code:
```
def gera_grafo(matriz):
    grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
    nx.draw(grafo)
    plt.show()
    return grafo
```
Where matrix is an adjacency list containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance. | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | If you are using MacOS - BigSur, networkx won't work the way you want it to. I needed to go and open my project in Ubuntu. | Since I am using Windows 10, the suggestion given by Sparky05 worked for me: updating the version of the decorator module in Python with `pip install decorator==5.0.7`.
You can also update the version of the networkx library. I will share the link for updating it later, but it will not work if you are using the Spyder IDE. It will work in VSCode. |
66,922,359 | I have a simple graph and need to draw it on my screen, here is my code:
```
def gera_grafo(matriz):
grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
nx.draw(grafo)
plt.show()
return grafo
```
Where `matriz` is an adjacency matrix containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | I was using `decorator==5.0.6`, and got the same error.
However, upgrading to 5.0.7 solved my problem. | It worked for me after downgrading networkx and decorator:
```
networkx 2.3 pypi_0 pypi
decorator 4.3.0 pypi_0 pypi
```
OS: Mac OS BigSur |
66,922,359 | I have a simple graph and need to draw it on my screen, here is my code:
```
def gera_grafo(matriz):
grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
nx.draw(grafo)
plt.show()
return grafo
```
Where `matriz` is an adjacency matrix containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the `nx.draw()` call. The error I got is:
```
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
```
Is this an error in my code? If so, what can I do? Thanks in advance | 2021/04/02 | [
"https://Stackoverflow.com/questions/66922359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13215403/"
] | I was using `decorator==5.0.6`, and got the same error.
However, upgrading to 5.0.7 solved my problem. | Since I am using Windows 10, the suggestion given by Sparky05 worked for me: updating the version of the decorator module in Python with `pip install decorator==5.0.7`.
You can also update the version of the networkx library. I will share the link for updating it later, but it will not work if you are using the Spyder IDE. It will work in VSCode. |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | Use `y = list(x)` to create a new list. Otherwise you just get a new reference to the same list object. | You are copying the reference of x.
You can try [deepcopy](https://docs.python.org/2/library/copy.html) to have some fun
```
import copy
x = [3,2,1,0]
y = copy.deepcopy(x)  # y is an independent copy, not another name for x
y.sort()
print(x)  # x is unchanged: [3, 2, 1, 0]
``` |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | Use `y = list(x)` to create a new list. Otherwise you just get a new reference to the same list object. | You should think of these statements
```
x = [3,2,1,0]
y = x
```
as copying a reference to the list object pointed to by x into y.
As both variables refer to the same object, sorting y also sorts the object that x and y point to.
To achieve the behaviour you want, it is easiest to use a function that returns a sorted copy of your list
```
y = sorted(x)
```
or, as the documentation suggests, first create a copy and sort the copy in place
```
y = x[:]      # option 1: copy via slicing
y.sort()
y = list(x)   # option 2: copy via the list constructor
y.sort()
``` |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | Use `y = list(x)` to create a new list. Otherwise you just get a new reference to the same list object. | When you say `y = x`, you are not merely copying the data in `x` to `y`; you are assigning the name `y` to the same underlying data that `x` points to. That is, you have merely produced another name, `y`, by which to reference the data already referenced by `x`. So when you call `y.sort()`, you also inadvertently sort `x` because they reference the same underlying data.
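A small sketch of the difference (illustrative values):
```
x = [3, 2, 1, 0]
y = x          # y is just another name for the same list
print(y is x)  # True: both names reference one object

y = list(x)    # a genuine copy
y.sort()
print(x)       # [3, 2, 1, 0] -- the original is preserved
print(y)       # [0, 1, 2, 3]
```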
You can copy the data referenced by `x` and reference the copy with the name `y` with `y = list(x)` or `y = x[:]`, and then sorting `y` will not affect `x` because they are truly different objects. Alternatively, you can use the non-mutating sort function: `x_sorted = sorted(x)`. |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | Use `y = list(x)` to create a new list. Otherwise you just get a new reference to the same list object. | You should look at what `shallow copy` and `deep copy` are. That should explain why your way of copying is not working.
Ref: <https://docs.python.org/2/library/copy.html>
So, when you say `y = x`, you are copying the reference, not creating another list with a different reference.
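To make the difference visible, a minimal sketch using a nested list:
```
import copy

x = [[3, 2], [1, 0]]
shallow = copy.copy(x)    # new outer list, same inner lists
deep = copy.deepcopy(x)   # fully independent copy

x[0].sort()
print(shallow[0])  # [2, 3] -- the shallow copy sees the change
print(deep[0])     # [3, 2] -- the deep copy is unaffected
```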
`y = copy.deepcopy(x)` is a deep copy, which creates an independent object for `y`. So modifying `y` will not change any values in `x`. |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | You should think of these statements
```
x = [3,2,1,0]
y = x
```
as copying a reference to the list object pointed to by x into y.
As both variables refer to the same object, sorting y also sorts the object that x and y point to.
To achieve the behaviour you want, it is easiest to use a function that returns a sorted copy of your list
```
y = sorted(x)
```
or, as the documentation suggests, first create a copy and sort the copy in place
```
y = x[:]      # option 1: copy via slicing
y.sort()
y = list(x)   # option 2: copy via the list constructor
y.sort()
``` | You are copying the reference of x.
You can try [deepcopy](https://docs.python.org/2/library/copy.html) to have some fun
```
import copy
x = [3,2,1,0]
y = copy.deepcopy(x)  # y is an independent copy, not another name for x
y.sort()
print(x)  # x is unchanged: [3, 2, 1, 0]
``` |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | You are copying the reference of x.
You can try [deepcopy](https://docs.python.org/2/library/copy.html) to have some fun
```
import copy
x = [3,2,1,0]
y = copy.deepcopy(x)  # y is an independent copy, not another name for x
y.sort()
print(x)  # x is unchanged: [3, 2, 1, 0]
``` | You should look at what `shallow copy` and `deep copy` are. That should explain why your way of copying is not working.
Ref: <https://docs.python.org/2/library/copy.html>
So, when you say `y = x`, you are copying the reference, not creating another list with a different reference.
`y = copy.deepcopy(x)` is a deep copy, which creates an independent object for `y`. So modifying `y` will not change any values in `x`. |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | You should think of these statements
```
x = [3,2,1,0]
y = x
```
as copying a reference to the list object pointed to by x into y.
As both variables refer to the same object, sorting y also sorts the object that x and y point to.
To achieve the behaviour you want, it is easiest to use a function that returns a sorted copy of your list
```
y = sorted(x)
```
or, as the documentation suggests, first create a copy and sort the copy in place
```
y = x[:]      # option 1: copy via slicing
y.sort()
y = list(x)   # option 2: copy via the list constructor
y.sort()
``` | When you say `y = x`, you are not merely copying the data in `x` to `y`; you are assigning the name `y` to the same underlying data that `x` points to. That is, you have merely produced another name, `y`, by which to reference the data already referenced by `x`. So when you call `y.sort()`, you also inadvertently sort `x` because they reference the same underlying data.
You can copy the data referenced by `x` and reference the copy with the name `y` with `y = list(x)` or `y = x[:]`, and then sorting `y` will not affect `x` because they are truly different objects. Alternatively, you can use the non-mutating sort function: `x_sorted = sorted(x)`. |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | You should think of these statements
```
x = [3,2,1,0]
y = x
```
as copying a reference to the list object pointed to by x into y.
As both variables refer to the same object, sorting y also sorts the object that x and y point to.
To achieve the behaviour you want, it is easiest to use a function that returns a sorted copy of your list
```
y = sorted(x)
```
or, as the documentation suggests, first create a copy and sort the copy in place
```
y = x[:]      # option 1: copy via slicing
y.sort()
y = list(x)   # option 2: copy via the list constructor
y.sort()
``` | You should look at what `shallow copy` and `deep copy` are. That should explain why your way of copying is not working.
Ref: <https://docs.python.org/2/library/copy.html>
So, when you say `y = x`, you are copying the reference, not creating another list with a different reference.
`y = copy.deepcopy(x)` is a deep copy, which creates an independent object for `y`. So modifying `y` will not change any values in `x`. |
42,723,994 | I am trying to get a bit more intuition for how mutability of lists works in Python. In particular, I am confused by a statement in [this source](https://linuxconfig.org/python-list-methods):
>
> If you need to sort a list, either alphabetically or numerically, Python has you covered there as well. Just call the sort() method on the list, and it will be saved in the new order. This will overwrite the previous list, **so create a copy to sort, if you need to preserve the original.**
>
>
>
I'm not sure I understand how to implement the part in bold. Here is an example of what I'm talking about:
```
>>> x = [3,2,1,0]
>>> y = x
>>> x
[3,2,1,0]
>>> y
[3,2,1,0]
>>> y.sort()
>>> y
[0,1,2,3]
>>> x
[0,1,2,3]
```
So I created a copy to sort, then sorted the copy, but this has changed the original list. Therefore, I am confused about what the quote above is meant to mean. I don't get how this "preserves the original". Is there a special way to copy? Thanks. | 2017/03/10 | [
"https://Stackoverflow.com/questions/42723994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7691567/"
] | When you say `y = x`, you are not merely copying the data in `x` to `y`; you are assigning the name `y` to the same underlying data that `x` points to. That is, you have merely produced another name, `y`, by which to reference the data already referenced by `x`. So when you call `y.sort()`, you also inadvertently sort `x` because they reference the same underlying data.
You can copy the data referenced by `x` and reference the copy with the name `y` with `y = list(x)` or `y = x[:]`, and then sorting `y` will not affect `x` because they are truly different objects. Alternatively, you can use the non-mutating sort function: `x_sorted = sorted(x)`. | You should look at what `shallow copy` and `deep copy` are. That should explain why your way of copying is not working.
Ref: <https://docs.python.org/2/library/copy.html>
So, when you say `y = x`, you are copying the reference, not creating another list with a different reference.
`y = copy.deepcopy(x)` is a deep copy, which creates an independent object for `y`. So modifying `y` will not change any values in `x`. |
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | If you really need a new list, I would simply return it. Return value optimization will take care of no needless copies in most cases, and your code stays very clear.
That being said, taking lists and returning other lists is indeed Python programming in C++.
A paradigm more suitable for C++ would be to create functions that take a range of iterators and alter the underlying collection.
e.g.
```
void DoSomething(iterator const & from, iterator const & to);
```
(with iterator possibly being a template, depending on your needs)
Chaining operations is then a matter of calling consecutive methods on begin(), end().
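For illustration, a rough sketch of one such in-place "pipe" (`transform_one` is a hypothetical per-element step; `SomeType` is the element type from the question):
```
SomeType transform_one(SomeType const & s);  // hypothetical per-element step

template <typename Iterator>
void DoSomething(Iterator from, Iterator to)
{
    for (; from != to; ++from) {
        *from = transform_one(*from);
    }
}

// usage: DoSomething(the_list.begin(), the_list.end());
```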
If you don't want to alter the input, you'd make a copy yourself first.
```
std::vector<SomeType> theOutput(inputVector);
```
This all comes from the C++ "don't pay for what you don't need" philosophy: you'd only create copies where you actually want to keep the originals. | Another problem with returning a list of objects (as opposed to working on one or two lists in place, as BigSandwich pointed out) is that if your objects have complex copy constructors, those will be called for each element in the container.
If you have 1000 objects, each referencing a chunk of memory, and they copy that memory on `Object a, b; a = b;`, that's 1000 memcopies just for returning them in a container. If you still want to return a container directly, think about pointers in this case, as in the sketch below.
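A rough sketch of the pointer variant (using `std::shared_ptr` so ownership stays automatic; `Object` is a placeholder for the expensive-to-copy type):
```
#include <memory>
#include <vector>

struct Object { /* large members */ };

std::vector<std::shared_ptr<Object>> make_objects()
{
    std::vector<std::shared_ptr<Object>> v;
    v.push_back(std::make_shared<Object>());
    // copying or returning v copies only the pointers, never the Objects
    return v;
}
``` |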
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | I'd use the generic approach:
```
template <typename InIt, typename OutIt>
void DoMagic(InIt first, InIt last, OutIt out)
{
for(; first != last; ++first) {
if(IsCorrectIngredient(*first)) {
*out = DoMoreMagic(*first);
++out;
}
}
}
```
Now you can call it
```
std::vector<MagicIngredients> ingredients;
std::vector<MagicResults> results;
DoMagic(ingredients.begin(), ingredients.end(), std::back_inserter(results));
```
You can easily change the containers used without changing the algorithm, and it is efficient: there is no overhead from returning containers. | Using a std::vector is the preferable way in many situations. It's guaranteed to use contiguous memory and is therefore friendly to the L1 cache.
You should be aware of what happens when your return type is std::vector. Under the hood the std::vector is recursively copied, so if SomeType's copy constructor is expensive, the return statement may be a lengthy and time-consuming operation.
If you are searching and inserting a lot in your list, you could look at std::set to get logarithmic instead of linear time complexity. (std::vector's insert at the end is amortized constant, though a reallocation occurs when its capacity is exceeded.)
You are saying that you have many "pipe functions"... sounds like an excellent scenario for std::transform; a rough sketch follows.
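A sketch of a value-returning pipe built on `std::transform` (`transform_one` is a hypothetical per-element step):
```
#include <algorithm>
#include <iterator>
#include <vector>

SomeType transform_one(SomeType const & s);  // hypothetical per-element step

std::vector<SomeType> pipe(std::vector<SomeType> const & in)
{
    std::vector<SomeType> out;
    out.reserve(in.size());
    std::transform(in.begin(), in.end(), std::back_inserter(out), transform_one);
    return out;
}
``` |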
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | If you want to be really hardcore, you could use [boost::tuple](http://www.boost.org/doc/libs/1_37_0/libs/tuple/doc/tuple_users_guide.html).
```
tuple<int, int, double> add_multiply_divide(int a, int b) {
return make_tuple(a+b, a*b, double(a)/double(b));
}
```
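For completeness, a small usage sketch (assuming the boost tuple headers are included, as the snippet above does implicitly; `boost::tie` unpacks the result):
```
int sum, product;
double quotient;
boost::tie(sum, product, quotient) = add_multiply_divide(6, 3);
// sum == 9, product == 18, quotient == 2.0
```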
But since it seems all your objects are of a single, non-polymorphic type, the std::vector is all well and fine.
If your types were polymorphic (classes inheriting from a base class) then you'd need a vector of pointers, and you'd need to remember to delete all the allocated objects before throwing away your vector. | It works very simply.
```
list<int> foo(void)
{
list<int> l;
// do something
return l;
}
```
Now receiving data:
```
list<int> lst=foo();
```
This is fully optimal because the compiler knows how to optimize the construction of lst
and will not cause copies.
Other method, more portable:
```
list<int> lst;
// do anything you want with list
foo().swap(lst);  // a temporary cannot bind to swap's non-const reference, so call swap on the temporary
```
What happens: foo is already optimized, so there is no problem returning the value. When
you call swap, lst takes the new contents without copying them; the old contents of lst
are swapped into the temporary and destructed.
This is the efficient way to do the job. |
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | If you really need a new list, I would simply return it. Return value optimization will take care of no needless copies in most cases, and your code stays very clear.
That being said, taking lists and returning other lists is indeed Python programming in C++.
A paradigm more suitable for C++ would be to create functions that take a range of iterators and alter the underlying collection.
e.g.
```
void DoSomething(iterator const & from, iterator const & to);
```
(with iterator possibly being a template, depending on your needs)
Chaining operations is then a matter of calling consecutive methods on begin(), end().
If you don't want to alter the input, you'd make a copy yourself first.
```
std::vector<SomeType> theOutput(inputVector);
```
This all comes from the C++ "don't pay for what you don't need" philosophy: you'd only create copies where you actually want to keep the originals. | It works very simply.
```
list<int> foo(void)
{
list<int> l;
// do something
return l;
}
```
Now receiving data:
```
list<int> lst=foo();
```
This is fully optimal because the compiler knows how to optimize the construction of lst
and will not cause copies.
Other method, more portable:
```
list<int> lst;
// do anything you want with list
foo().swap(lst);  // a temporary cannot bind to swap's non-const reference, so call swap on the temporary
```
What happens: foo is already optimized, so there is no problem returning the value. When
you call swap, lst takes the new contents without copying them; the old contents of lst
are swapped into the temporary and destructed.
This is the efficient way to do the job. |
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | Using a std::vector is the preferable way in many situations. It's guaranteed to use contiguous memory and is therefore friendly to the L1 cache.
You should be aware of what happens when your return type is std::vector. Under the hood the std::vector is recursively copied, so if SomeType's copy constructor is expensive, the return statement may be a lengthy and time-consuming operation.
If you are searching and inserting a lot in your list, you could look at std::set to get logarithmic instead of linear time complexity. (std::vector's insert at the end is amortized constant, though a reallocation occurs when its capacity is exceeded.)
You are saying that you have many "pipe functions"... sounds like an excellent scenario for std::transform. | It works very simply.
```
list<int> foo(void)
{
list<int> l;
// do something
return l;
}
```
Now receiving data:
```
list<int> lst=foo();
```
This is fully optimal because the compiler knows how to optimize the construction of lst
and will not cause copies.
Other method, more portable:
```
list<int> lst;
// do anything you want with list
foo().swap(lst);  // a temporary cannot bind to swap's non-const reference, so call swap on the temporary
```
What happens: foo is already optimized, so there is no problem returning the value. When
you call swap, lst takes the new contents without copying them; the old contents of lst
are swapped into the temporary and destructed.
This is the efficient way to do the job. |
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | If you really need a new list, I would simply return it. Return value optimization will take care of no needless copies in most cases, and your code stays very clear.
That being said, taking lists and returning other lists is indeed Python programming in C++.
A paradigm more suitable for C++ would be to create functions that take a range of iterators and alter the underlying collection.
e.g.
```
void DoSomething(iterator const & from, iterator const & to);
```
(with iterator possibly being a template, depending on your needs)
Chaining operations is then a matter of calling consecutive methods on begin(), end().
If you don't want to alter the input, you'd make a copy yourself first.
```
std::vector<SomeType> theOutput(inputVector);
```
This all comes from the C++ "don't pay for what you don't need" philosophy: you'd only create copies where you actually want to keep the originals. | If you want to be really hardcore, you could use [boost::tuple](http://www.boost.org/doc/libs/1_37_0/libs/tuple/doc/tuple_users_guide.html).
```
tuple<int, int, double> add_multiply_divide(int a, int b) {
return make_tuple(a+b, a*b, double(a)/double(b));
}
```
But since it seems all your objects are of a single, non-polymorphic type, the std::vector is all well and fine.
If your types were polymorphic (classes inheriting from a base class) then you'd need a vector of pointers, and you'd need to remember to delete all the allocated objects before throwing away your vector. |
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | The only thing I can see is that you're forcing a copy of the list you return. It would be more efficient to do something like:
```
void DoSomething(const std::vector<SomeType>& in, std::vector<SomeType>& out)
{
...
// no need to return anything, just modify out
}
```
Because you pass in the list you want to return, you avoid the extra copy.
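A usage sketch of this out-parameter style (`make_input` is a hypothetical source of data):
```
std::vector<SomeType> input = make_input();  // hypothetical source
std::vector<SomeType> output;
DoSomething(input, output);  // fills output; no extra copy of the result
```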
Edit: This is an old reply. If you can use a modern C++ compiler with move semantics, you don't need to worry about this. Of course, this answer still applies if the object you are returning DOES NOT have move semantics. | I'd use the generic approach:
```
template <typename InIt, typename OutIt>
void DoMagic(InIt first, InIt last, OutIt out)
{
for(; first != last; ++first) {
if(IsCorrectIngredient(*first)) {
*out = DoMoreMagic(*first);
++out;
}
}
}
```
Now you can call it
```
std::vector<MagicIngredients> ingredients;
std::vector<MagicResults> results;
DoMagic(ingredients.begin(), ingredients.end(), std::back_inserter(results));
```
You can easily change the containers used without changing the algorithm, and it is efficient: there is no overhead from returning containers. |
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | I'd use the generic approach:
```
template <typename InIt, typename OutIt>
void DoMagic(InIt first, InIt last, OutIt out)
{
for(; first != last; ++first) {
if(IsCorrectIngredient(*first)) {
*out = DoMoreMagic(*first);
++out;
}
}
}
```
Now you can call it
```
std::vector<MagicIngredients> ingredients;
std::vector<MagicResults> results;
DoMagic(ingredients.begin(), ingredients.end(), std::back_inserter(results));
```
You can easily change the containers used without changing the algorithm, and it is efficient: there is no overhead from returning containers. | Another problem with returning a list of objects (as opposed to working on one or two lists in place, as BigSandwich pointed out) is that if your objects have complex copy constructors, those will be called for each element in the container.
If you have 1000 objects, each referencing a chunk of memory, and they copy that memory on `Object a, b; a = b;`, that's 1000 memcopies just for returning them in a container. If you still want to return a container directly, think about pointers in this case. |
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | The only thing I can see is that you're forcing a copy of the list you return. It would be more efficient to do something like:
```
void DoSomething(const std::vector<SomeType>& in, std::vector<SomeType>& out)
{
...
// no need to return anything, just modify out
}
```
Because you pass in the list you want to return, you avoid the extra copy.
Edit: This is an old reply. If you can use a modern C++ compiler with move semantics, you don't need to worry about this. Of course, this answer still applies if the object you are returning DOES NOT have move semantics. | Another problem with returning a list of objects (as opposed to working on one or two lists in place, as BigSandwich pointed out) is that if your objects have complex copy constructors, those will be called for each element in the container.
If you have 1000 objects, each referencing a chunk of memory, and they copy that memory on `Object a, b; a = b;`, that's 1000 memcopies just for returning them in a container. If you still want to return a container directly, think about pointers in this case. |
514,908 | It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant.
I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input),
that is the concept; in practice, I'm using `std::vector` to represent the list. Is that acceptable?
Furthermore, I'm not using any pointers, so I'm using `std::vector<SomeType> the_list(some_size);` as the variable, and returning it directly, i.e. `return the_list;`
P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++. | 2009/02/05 | [
"https://Stackoverflow.com/questions/514908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35364/"
] | I'd use the generic approach:
```
template <typename InIt, typename OutIt>
void DoMagic(InIt first, InIt last, OutIt out)
{
for(; first != last; ++first) {
if(IsCorrectIngredient(*first)) {
*out = DoMoreMagic(*first);
++out;
}
}
}
```
Now you can call it
```
std::vector<MagicIngredients> ingredients;
std::vector<MagicResults> results;
DoMagic(ingredients.begin(), ingredients.end(), std::back_inserter(results));
```
You can easily change the containers used without changing the algorithm, and it is efficient: there is no overhead from returning containers. | It works very simply.
```
list<int> foo(void)
{
list<int> l;
// do something
return l;
}
```
Now receiving data:
```
list<int> lst=foo();
```
This is fully optimal because the compiler knows how to optimize the construction of lst
and will not cause copies.
Other method, more portable:
```
list<int> lst;
// do anything you want with list
foo().swap(lst);  // a temporary cannot bind to swap's non-const reference, so call swap on the temporary
```
What happens: foo is already optimized, so there is no problem returning the value. When
you call swap, lst takes the new contents without copying them; the old contents of lst
are swapped into the temporary and destructed.
This is the efficient way to do the job. |
46,039,850 | I have 2 use cases:
* Extract, Transform and Load from Oracle / PostgreSQL / Redshift / S3 / CSV to my own Redshift cluster
* Schedule the job so it runs daily/weekly (INSERT + TABLE or INSERT + NONE options preferable).
I am currently using:
1. SQLAlchemy for extracts (works well generally).
2. PETL for transforms and loads (works well on smaller data sets, but for ~50m+ rows it is slow and the connection to the database(s) times out).
3. An internal tool for the scheduling component (which stores the transform in XML and then loads from the XML, and seems rather long and complicated).
I have been looking through [this link](https://github.com/pawl/awesome-etl#python) but would welcome additional suggestions. Exporting to Spark or similar is also welcome if there is an "easier" process where I can just do everything through Python (I'm only using Redshift because it seems like the best option). | 2017/09/04 | [
"https://Stackoverflow.com/questions/46039850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2183820/"
] | You can try [pyetl](https://github.com/taogeYT/pyetl), an ETL framework written in Python 3
```py
from pyetl import Task, DatabaseReader, DatabaseWriter
reader = DatabaseReader("sqlite:///db.sqlite3", table_name="source")
writer = DatabaseWriter("sqlite:///db.sqlite3", table_name="target")
columns = {"id": "uuid", "name": "full_name"}
functions={"id": str, "name": lambda x: x.strip()}
Task(reader, writer, columns=columns, functions=functions).start()
``` | How about
* Python
* Pandas
This is what we use for our ETL processing; a rough sketch of the pattern follows.
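A minimal extract-transform-load sketch with pandas (file and column names are illustrative):
```
import pandas as pd

# extract
df = pd.read_csv("source.csv")

# transform: illustrative cleanup rules
df["amount"] = df["amount"].fillna(0)
df = df[df["amount"] > 0]

# load: write the cleaned data out (df.to_sql(...) would target a database)
df.to_csv("clean.csv", index=False)
``` |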
46,039,850 | I have 2 use cases:
* Extract, Transform and Load from Oracle / PostgreSQL / Redshift / S3 / CSV to my own Redshift cluster
* Schedule the job so it runs daily/weekly (INSERT + TABLE or INSERT + NONE options preferable).
I am currently using:
1. SQLAlchemy for extracts (works well generally).
2. PETL for transforms and loads (works well on smaller data sets, but for ~50m+ rows it is slow and the connection to the database(s) times out).
3. An internal tool for the scheduling component (which stores the transform in XML and then loads from the XML, and seems rather long and complicated).
I have been looking through [this link](https://github.com/pawl/awesome-etl#python) but would welcome additional suggestions. Exporting to Spark or similar is also welcome if there is an "easier" process where I can just do everything through Python (I'm only using Redshift because it seems like the best option). | 2017/09/04 | [
"https://Stackoverflow.com/questions/46039850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2183820/"
] | You can try [pyetl](https://github.com/taogeYT/pyetl), an ETL framework written in Python 3
```py
from pyetl import Task, DatabaseReader, DatabaseWriter
reader = DatabaseReader("sqlite:///db.sqlite3", table_name="source")
writer = DatabaseWriter("sqlite:///db.sqlite3", table_name="target")
columns = {"id": "uuid", "name": "full_name"}
functions={"id": str, "name": lambda x: x.strip()}
Task(reader, writer, columns=columns, functions=functions).start()
``` | I'm using Pandas to access my ETL files; try doing something like this:
* Create a class with all your queries there.
* Create another class that processes the actual data warehouse, using Pandas for the data and Matplotlib for the graph, along the lines of the sketch below.
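A rough sketch of that two-class split, with all names hypothetical:
```
import matplotlib.pyplot as plt
import pandas as pd


class Queries:
    """Holds the SQL used by the ETL (illustrative)."""
    DAILY_SALES = "SELECT day, SUM(amount) AS total FROM sales GROUP BY day"


class Warehouse:
    """Runs queries against the warehouse and plots the results."""

    def __init__(self, connection):
        self.connection = connection

    def daily_sales(self):
        return pd.read_sql(Queries.DAILY_SALES, self.connection)

    def plot_daily_sales(self):
        df = self.daily_sales()
        df.plot(x="day", y="total")
        plt.show()
``` |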
46,039,850 | I have 2 use cases:
* Extract, Transform and Load from Oracle / PostgreSQL / Redshift / S3 / CSV to my own Redshift cluster
* Schedule the job so it runs daily/weekly (INSERT + TABLE or INSERT + NONE options preferable).
I am currently using:
1. SQLAlchemy for extracts (works well generally).
2. PETL for transforms and loads (works well on smaller data sets, but for ~50m+ rows it is slow and the connection to the database(s) times out).
3. An internal tool for the scheduling component (which stores the transform in XML and then loads from the XML, and seems rather long and complicated).
I have been looking through [this link](https://github.com/pawl/awesome-etl#python) but would welcome additional suggestions. Exporting to Spark or similar is also welcome if there is an "easier" process where I can just do everything through Python (I'm only using Redshift because it seems like the best option). | 2017/09/04 | [
"https://Stackoverflow.com/questions/46039850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2183820/"
] | You can try [pyetl](https://github.com/taogeYT/pyetl), an ETL framework written in Python 3
```py
from pyetl import Task, DatabaseReader, DatabaseWriter
reader = DatabaseReader("sqlite:///db.sqlite3", table_name="source")
writer = DatabaseWriter("sqlite:///db.sqlite3", table_name="target")
columns = {"id": "uuid", "name": "full_name"}
functions={"id": str, "name": lambda x: x.strip()}
Task(reader, writer, columns=columns, functions=functions).start()
``` | Consider having a look at the [convtools](https://github.com/westandskif/convtools) library; it provides lots of data-processing primitives, is pure Python, and has zero dependencies.
Since it generates ad hoc Python code under the hood, it sometimes outperforms pandas/polars, so it can fill some gaps in your workflows, especially if those have a dynamic nature. |
46,074,841 | I want to run a service that requests URLs using coroutines and multithreading. However, I cannot pass coroutines to the workers in the executor. See the code below for a minimal example of this issue:
```
import time
import asyncio
import concurrent.futures
EXECUTOR = concurrent.futures.ThreadPoolExecutor(max_workers=5)
async def async_request(loop):
await asyncio.sleep(3)
def sync_request(_):
time.sleep(3)
async def main(loop):
futures = [loop.run_in_executor(EXECUTOR, async_request,loop)
for x in range(10)]
await asyncio.wait(futures)
loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
```
Resulting in the following error:
```
Traceback (most recent call last):
File "co_test.py", line 17, in <module>
loop.run_until_complete(main(loop))
File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "co_test.py", line 10, in main
futures = [loop.run_in_executor(EXECUTOR, req,loop) for x in range(10)]
File "co_test.py", line 10, in <listcomp>
futures = [loop.run_in_executor(EXECUTOR, req,loop) for x in range(10)]
File "/usr/lib/python3.5/asyncio/base_events.py", line 541, in run_in_executor
raise TypeError("coroutines cannot be used with run_in_executor()")
TypeError: coroutines cannot be used with run_in_executor()
```
I know that I could use the `sync_request` function instead of `async_request`; in that case I would get awaitable futures by sending the blocking function to another thread.
I also know I could call `async_request` ten times in the event loop. Something like in the code below:
```
loop = asyncio.get_event_loop()
futures = [async_request(loop) for i in range(10)]
loop.run_until_complete(asyncio.wait(futures))
```
But in this case I would be using a single thread.
How could I combine both scenarios, with the coroutines working across multiple threads? As you can see from the code, I am passing (and not using) the `pool` to the `async_request` in the hopes I can code something that tells the worker to make a future, send it to the pool and asynchronously (freeing the worker) wait for the result.
The reason I want to do that is to make the application scalable. Is it an unnecessary step? Should I simply have a thread per url and that is it? Something like:
```
LEN = len(list_of_urls)
EXECUTOR = concurrent.futures.ThreadPoolExecutor(max_workers=LEN)
```
is good enough? | 2017/09/06 | [
"https://Stackoverflow.com/questions/46074841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1494511/"
] | You have to create and set a new event loop in the thread context in order to run coroutines:
```
import asyncio
from concurrent.futures import ThreadPoolExecutor
def run(corofn, *args):
loop = asyncio.new_event_loop()
try:
coro = corofn(*args)
asyncio.set_event_loop(loop)
return loop.run_until_complete(coro)
finally:
loop.close()
async def main():
loop = asyncio.get_event_loop()
executor = ThreadPoolExecutor(max_workers=5)
futures = [
loop.run_in_executor(executor, run, asyncio.sleep, 1, x)
for x in range(10)]
print(await asyncio.gather(*futures))
# Prints: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
``` | From what I understood from the question, you are trying to use each thread to:
* trigger a coroutine execution
* be free to receive more coroutines to trigger
* wait for everything to end in an asynchronous way
However, as soon as you call the loop (be it the main loop or a new one) to wait for results, it blocks the thread while waiting.
And, by using run_in_executor with a bunch of sync functions, the thread doesn't actually know if there are more coroutines to dispatch in one go before reaching the point where it waits on the loop.
I think that if you want to dispatch a bunch of coroutines in such a way that each thread manages its own group of coroutines in its own event loop, the following code achieves a total time of 1 second for a multithreaded wait on 10 async sleeps of 1 second.
```
import asyncio
import threading
from asyncio import AbstractEventLoop
from concurrent.futures import ThreadPoolExecutor
from time import perf_counter
from typing import Dict, Set
import _asyncio
event_loops_for_each_thread: Dict[int, AbstractEventLoop] = {}
def run(corofn, *args):
curr_thread_id = threading.current_thread().ident
if curr_thread_id not in event_loops_for_each_thread:
event_loops_for_each_thread[curr_thread_id] = asyncio.new_event_loop()
thread_loop = event_loops_for_each_thread[curr_thread_id]
coro = corofn(*args)
return thread_loop.create_task(coro)
async def async_gather_tasks(all_tasks: Set[_asyncio.Task]):
return await asyncio.gather(*all_tasks)
def wait_loops():
# each thread will block waiting for all async calls of its specific async loop
curr_thread_id = threading.current_thread().ident
threads_event_loop = event_loops_for_each_thread[curr_thread_id]
# I print the following to prove that each thread is waiting its loop
print(f'Thread {curr_thread_id} will wait its tasks.')
return threads_event_loop.run_until_complete(async_gather_tasks(asyncio.all_tasks(threads_event_loop)))
async def main():
loop = asyncio.get_event_loop()
max_workers = 5
executor = ThreadPoolExecutor(max_workers=max_workers)
# dispatching async tasks for each thread.
futures = [
loop.run_in_executor(executor, run, asyncio.sleep, 1, x)
for x in range(10)]
# wait for the threads to finish dispatching the async executions to their own event loops
await asyncio.wait(futures)
# at this point the async events were dispatched to each thread event loop
# in the lines below, you tell each worker thread to wait for all its async tasks to complete.
futures = [
loop.run_in_executor(executor, wait_loops)
for _ in range(max_workers)
]
print(await asyncio.gather(*futures))
# it will print something like:
# [[1, 8], [0], [6, 3, 9, 7], [4], [2, 5]]
# each sub-set is the result of the tasks of a thread
# it is non-deterministic, so it will return a different array of arrays each time you run.
if __name__ == '__main__':
loop = asyncio.get_event_loop()
start = perf_counter()
loop.run_until_complete(main())
end = perf_counter()
duration_s = end - start
# the print below proves that all threads are waiting for their tasks asynchronously
print(f'duration_s={duration_s:.3f}')
``` |
46,074,841 | I want to run a service that requests URLs using coroutines and multithreading. However, I cannot pass coroutines to the workers in the executor. See the code below for a minimal example of this issue:
```
import time
import asyncio
import concurrent.futures
EXECUTOR = concurrent.futures.ThreadPoolExecutor(max_workers=5)
async def async_request(loop):
await asyncio.sleep(3)
def sync_request(_):
time.sleep(3)
async def main(loop):
futures = [loop.run_in_executor(EXECUTOR, async_request,loop)
for x in range(10)]
await asyncio.wait(futures)
loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
```
Resulting in the following error:
```
Traceback (most recent call last):
File "co_test.py", line 17, in <module>
loop.run_until_complete(main(loop))
File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "co_test.py", line 10, in main
futures = [loop.run_in_executor(EXECUTOR, req,loop) for x in range(10)]
File "co_test.py", line 10, in <listcomp>
futures = [loop.run_in_executor(EXECUTOR, req,loop) for x in range(10)]
File "/usr/lib/python3.5/asyncio/base_events.py", line 541, in run_in_executor
raise TypeError("coroutines cannot be used with run_in_executor()")
TypeError: coroutines cannot be used with run_in_executor()
```
I know that I could use the `sync_request` function instead of `async_request`; in this case I would have coroutines by means of sending the blocking function to another thread.
I also know I could call `async_request` ten times in the event loop. Something like in the code below:
```
loop = asyncio.get_event_loop()
futures = [async_request(loop) for i in range(10)]
loop.run_until_complete(asyncio.wait(futures))
```
But in this case I would be using a single thread.
How could I use both scenarios, with the coroutines working across multiple threads? As you can see from the code, I am passing (and not using) the `pool` to `async_request` in the hope that I can code something that tells the worker to make a future, send it to the pool, and asynchronously (freeing the worker) wait for the result.
The reason I want to do that is to make the application scalable. Is it an unnecessary step? Should I simply have a thread per url and that is it? Something like:
```
LEN = len(list_of_urls)
EXECUTOR = concurrent.futures.ThreadPoolExecutor(max_workers=LEN)
```
is good enough? | 2017/09/06 | [
"https://Stackoverflow.com/questions/46074841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1494511/"
] | You have to create and set a new event loop in the thread context in order to run coroutines:
```
import asyncio
from concurrent.futures import ThreadPoolExecutor
def run(corofn, *args):
loop = asyncio.new_event_loop()
try:
coro = corofn(*args)
asyncio.set_event_loop(loop)
return loop.run_until_complete(coro)
finally:
loop.close()
async def main():
loop = asyncio.get_event_loop()
executor = ThreadPoolExecutor(max_workers=5)
futures = [
loop.run_in_executor(executor, run, asyncio.sleep, 1, x)
for x in range(10)]
print(await asyncio.gather(*futures))
# Prints: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
``` | I just wanted to write a similar answer to [Tonsic's answer](https://stackoverflow.com/a/63106889) on how `asyncio` should actually be used in this situation, but much more succinctly (using some newer `asyncio` features as well).
What you're really looking for in this case is `asyncio.gather`, which lets you run many coroutines concurrently.
From your example, it should thus become:
```py
async def async_request():
await asyncio.sleep(3)
async def main():
await asyncio.gather(*[async_request() for _ in range(10)])
```
Now when we time it, it takes about 3 seconds, as desired, instead of 30 seconds:
```py
>>> from time import time
>>> start = time()
>>> asyncio.run(main())
>>> time() - start
3.00907039642334
```
Furthermore, on using `concurrent.futures` alongside `asyncio`, you should identify what blocking code needs an executor and only apply it there to turn it into asynchronous code.
```py
async def async_request():
# The default executor is a `ThreadPoolExecutor`.
# In python >= 3.9, this can be shortened to `asyncio.to_thread(sync_request)`.
await asyncio.get_running_loop().run_in_executor(None, sync_request)
```
From that point, you can then manage your executors by treating these as coroutines with `asyncio`, using things like `asyncio.gather`, as originally shown. |
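To make that combination concrete, here is a minimal end-to-end sketch that offloads the blocking call to an explicit thread pool and gathers the resulting coroutines. It assumes Python 3.7+, and the pool size and task count are only illustrative:

```py
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def sync_request():
    time.sleep(3)  # stand-in for a blocking HTTP call

async def async_request(executor):
    # wrap the blocking function in a future and await it without blocking the event loop
    await asyncio.get_running_loop().run_in_executor(executor, sync_request)

async def main():
    with ThreadPoolExecutor(max_workers=10) as executor:
        await asyncio.gather(*[async_request(executor) for _ in range(10)])

asyncio.run(main())  # takes ~3 seconds: the 10 blocking sleeps run in 10 threads
```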
46,074,841 | I want to run a service that requests URLs using coroutines and multiple threads. However, I cannot pass coroutines to the workers in the executor. See the code below for a minimal example of this issue:
```
import time
import asyncio
import concurrent.futures
EXECUTOR = concurrent.futures.ThreadPoolExecutor(max_workers=5)
async def async_request(loop):
await asyncio.sleep(3)
def sync_request(_):
time.sleep(3)
async def main(loop):
futures = [loop.run_in_executor(EXECUTOR, async_request,loop)
for x in range(10)]
await asyncio.wait(futures)
loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
```
Resulting in the following error:
```
Traceback (most recent call last):
File "co_test.py", line 17, in <module>
loop.run_until_complete(main(loop))
File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "co_test.py", line 10, in main
futures = [loop.run_in_executor(EXECUTOR, req,loop) for x in range(10)]
File "co_test.py", line 10, in <listcomp>
futures = [loop.run_in_executor(EXECUTOR, req,loop) for x in range(10)]
File "/usr/lib/python3.5/asyncio/base_events.py", line 541, in run_in_executor
raise TypeError("coroutines cannot be used with run_in_executor()")
TypeError: coroutines cannot be used with run_in_executor()
```
I know that I could use the `sync_request` function instead of `async_request`; in this case I would have coroutines by means of sending the blocking function to another thread.
I also know I could call `async_request` ten times in the event loop. Something like in the code below:
```
loop = asyncio.get_event_loop()
futures = [async_request(loop) for i in range(10)]
loop.run_until_complete(asyncio.wait(futures))
```
But in this case I would be using a single thread.
How could I use both scenarios, with the coroutines working across multiple threads? As you can see from the code, I am passing (and not using) the `pool` to `async_request` in the hope that I can code something that tells the worker to make a future, send it to the pool, and asynchronously (freeing the worker) wait for the result.
The reason I want to do that is to make the application scalable. Is it an unnecessary step? Should I simply have a thread per url and that is it? Something like:
```
LEN = len(list_of_urls)
EXECUTOR = concurrent.futures.ThreadPoolExecutor(max_workers=LEN)
```
is good enough? | 2017/09/06 | [
"https://Stackoverflow.com/questions/46074841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1494511/"
] | From what I understood from the question, you are trying to use each thread to:
* trigger a coroutine execution
* be free to receive more coroutines to trigger
* wait for everything to end in an asynchronous way
However, as soon as you call the loop (be it the main loop or a new one) to wait for results, it blocks the calling thread.
And, by using run\_in\_executor with a bunch of sync functions, the thread doesn't actually know whether there are more coroutines to dispatch in one go before it reaches the point where it blocks waiting on the loop.
I think that if you want to dispatch a bunch of coroutines in such a way that each thread manages its own group of coroutines in its own event loop, the following code achieves a total time of 1 second for a multithreaded wait on 10 async sleeps of 1 second each.
```
import asyncio
import threading
from asyncio import AbstractEventLoop
from concurrent.futures import ThreadPoolExecutor
from time import perf_counter
from typing import Dict, Set
import _asyncio
event_loops_for_each_thread: Dict[int, AbstractEventLoop] = {}
def run(corofn, *args):
curr_thread_id = threading.current_thread().ident
if curr_thread_id not in event_loops_for_each_thread:
event_loops_for_each_thread[curr_thread_id] = asyncio.new_event_loop()
thread_loop = event_loops_for_each_thread[curr_thread_id]
coro = corofn(*args)
return thread_loop.create_task(coro)
async def async_gather_tasks(all_tasks: Set[_asyncio.Task]):
return await asyncio.gather(*all_tasks)
def wait_loops():
    # each thread will block waiting for all async calls of its specific async loop
curr_thread_id = threading.current_thread().ident
threads_event_loop = event_loops_for_each_thread[curr_thread_id]
    # I print the following to prove that each thread is waiting on its loop
print(f'Thread {curr_thread_id} will wait its tasks.')
return threads_event_loop.run_until_complete(async_gather_tasks(asyncio.all_tasks(threads_event_loop)))
async def main():
loop = asyncio.get_event_loop()
max_workers = 5
executor = ThreadPoolExecutor(max_workers=max_workers)
# dispatching async tasks for each thread.
futures = [
loop.run_in_executor(executor, run, asyncio.sleep, 1, x)
for x in range(10)]
    # wait for the threads to finish dispatching the async executions to their own event loops
await asyncio.wait(futures)
# at this point the async events were dispatched to each thread event loop
    # in the lines below, you tell each worker thread to wait for all of its async tasks to complete.
futures = [
loop.run_in_executor(executor, wait_loops)
for _ in range(max_workers)
]
print(await asyncio.gather(*futures))
# it will print something like:
# [[1, 8], [0], [6, 3, 9, 7], [4], [2, 5]]
# each sub-set is the result of the tasks of a thread
    # it is non-deterministic, so it will return a different array of arrays each time you run.
if __name__ == '__main__':
loop = asyncio.get_event_loop()
start = perf_counter()
loop.run_until_complete(main())
end = perf_counter()
duration_s = end - start
    # the print below proves that all threads are waiting on their tasks asynchronously
print(f'duration_s={duration_s:.3f}')
``` | I just wanted to write a similar answer to [Tonsic's answer](https://stackoverflow.com/a/63106889) on how `asyncio` should actually be used in this situation, but much more succinctly (using some newer `asyncio` features as well).
What you're really looking for in this case is `asyncio.gather`, which lets you run many coroutines concurrently.
From your example, it should thus become:
```py
async def async_request():
await asyncio.sleep(3)
async def main():
await asyncio.gather(*[async_request() for _ in range(10)])
```
Now when we time it, it takes about 3 seconds, as desired, instead of 30 seconds:
```py
>>> from time import time
>>> start = time()
>>> asyncio.run(main())
>>> time() - start
3.00907039642334
```
Furthermore, on using `concurrent.futures` alongside `asyncio`, you should identify what blocking code needs an executor and only apply it there to turn it into asynchronous code.
```py
async def async_request():
# The default executor is a `ThreadPoolExecutor`.
# In python >= 3.9, this can be shortened to `asyncio.to_thread(sync_request)`.
await asyncio.get_running_loop().run_in_executor(None, sync_request)
```
From that point, you can then manage your executors by treating these as coroutines with `asyncio`, using things like `asyncio.gather`, as originally shown. |
49,484,847 | I have a remote machine at my workplace, where we developers run servers and Docker containers. Everything was working fine, but a while back something went wrong.
If I run the Python Flask app
```
from app import app
app.run(host='0.0.0.0', port=5050)
```
I get the message
```
* Running on http://0.0.0.0:5050/
```
and I am able to access the above from my local machine using the remote server machine's `ip:5050`. But if I run a Docker container with `docker run -itd <conta_image_name> -p 80:90 --add-host=localdomain.com:machine_ip_address`, I get an error message saying `IPv4 forwarding is disabled. Networking will not work.`
Now this issue is in production, so I really need someone to throw some light on what might be wrong, or let me know what more info I need to provide. | 2018/03/26 | [
"https://Stackoverflow.com/questions/49484847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3311276/"
] | I have fixed this issue myself following this: <https://success.docker.com/article/ipv4-forwarding> | Another solution is..
Try adding `--net=host` to the `docker run` command.
<https://medium.com/@gchandra/docker-ipv4-forwarding-is-disabled-8499ce59231e> |
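Both links above amount to enabling kernel IP forwarding on the Docker host. A minimal sketch of that fix, assuming a systemd-based Linux host and root privileges (the Docker restart is only needed so the daemon picks the setting up):

```sh
# check the current state; 0 means forwarding is disabled
sysctl net.ipv4.ip_forward

# enable it now and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf

# restart Docker so new containers get working networking
systemctl restart docker
```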
44,782,476 | What is the most pythonic way of splitting a NumPy matrix (a 2-D array) into equal chunks both vertically and horizontally?
For example :
```
aa = np.reshape(np.arange(270),(18,15)) # a 18x15 matrix
```
then a "function" like
```
ab = np.split2d(aa,(2,3))
```
would result in a list of 6 matrices shaped (9,5) each. The first guess is to combine hsplit, map and vsplit, but how does the map have to be applied if there are two parameters to define for it, like:
```
map(np.vsplit(@,3),np.hsplit(aa,2))
``` | 2017/06/27 | [
"https://Stackoverflow.com/questions/44782476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8044432/"
] | Here's one approach staying within NumPy environment -
```
def view_as_blocks(arr, BSZ):
# arr is input array, BSZ is block-size
m,n = arr.shape
M,N = BSZ
return arr.reshape(m//M, M, n//N, N).swapaxes(1,2).reshape(-1,M,N)
```
**Sample runs**
1) Actual big case to verify shapes :
```
In [41]: aa = np.reshape(np.arange(270),(18,15))
In [42]: view_as_blocks(aa, (9,5)).shape
Out[42]: (6, 9, 5)
```
2) Small case to manually verify values:
```
In [43]: aa = np.reshape(np.arange(36),(6,6))
In [44]: aa
Out[44]:
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
In [45]: view_as_blocks(aa, (2,3)) # Blocks of shape (2,3)
Out[45]:
array([[[ 0, 1, 2],
[ 6, 7, 8]],
[[ 3, 4, 5],
[ 9, 10, 11]],
[[12, 13, 14],
[18, 19, 20]],
[[15, 16, 17],
[21, 22, 23]],
[[24, 25, 26],
[30, 31, 32]],
[[27, 28, 29],
[33, 34, 35]]])
```
If you are willing to work with other libraries, [`scikit-image`](http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_blocks) could be of use here, like so -
```
from skimage.util import view_as_blocks as viewB
out = viewB(aa, tuple(BSZ)).reshape(-1,*BSZ)
```
---
Runtime test -
```
In [103]: aa = np.reshape(np.arange(270),(18,15))
# @EFT's soln
In [99]: %timeit split_2d(aa, (2,3))
10000 loops, best of 3: 23.3 µs per loop
# @glegoux's soln-1
In [100]: %timeit list(get_chunks(aa, 2,3))
100000 loops, best of 3: 3.7 µs per loop
# @glegoux's soln-2
In [111]: %timeit list(get_chunks2(aa, 9, 5))
100000 loops, best of 3: 3.39 µs per loop
# Proposed in this post
In [101]: %timeit view_as_blocks(aa, (9,5))
1000000 loops, best of 3: 1.86 µs per loop
```
Please note that I have used `(2,3)` for `split_2d` and `get_chunks` as by their definitions, they are using that as the number of blocks. In my case with `view_as_blocks`, I have the parameter `BSZ` indicating the block size. So, I have `(9,5)` there. `get_chunks2` follows the same format as `view_as_blocks`. The outputs should represent the same there. | To cut, this matrix (18,15) :
```
+-+-+-+
+ +
+-+-+-+
```
in 2x3 blocks (9,5) like it :
```
+-+-+-+
+-+-+-+
+-+-+-+
```
Do:
```
from pprint import pprint
import numpy as np
M = np.reshape(np.arange(18*15),(18,15))
def get_chunks(M, n, p):
n = len(M)//n
p = len(M[0])//p
for i in range(0, len(M), n):
for j in range(0, len(M[0]), p):
yield M[i:i+n,j:j+p]
def get_chunks2(M, n, p):
for i in range(0, len(M), n):
for j in range(0, len(M[0]), p):
yield M[i:i+n,j:j+p]
# list(get_chunks2(M, 9, 5)) same result more faster
chunks = list(get_chunks(M, 2, 3))
pprint(chunks)
```
Output:
```
[array([[ 0, 1, 2, 3, 4],
[ 15, 16, 17, 18, 19],
[ 30, 31, 32, 33, 34],
[ 45, 46, 47, 48, 49],
[ 60, 61, 62, 63, 64],
[ 75, 76, 77, 78, 79],
[ 90, 91, 92, 93, 94],
[105, 106, 107, 108, 109],
[120, 121, 122, 123, 124]]),
array([[ 5, 6, 7, 8, 9],
[ 20, 21, 22, 23, 24],
[ 35, 36, 37, 38, 39],
[ 50, 51, 52, 53, 54],
[ 65, 66, 67, 68, 69],
[ 80, 81, 82, 83, 84],
[ 95, 96, 97, 98, 99],
[110, 111, 112, 113, 114],
[125, 126, 127, 128, 129]]),
array([[ 10, 11, 12, 13, 14],
[ 25, 26, 27, 28, 29],
[ 40, 41, 42, 43, 44],
[ 55, 56, 57, 58, 59],
[ 70, 71, 72, 73, 74],
[ 85, 86, 87, 88, 89],
[100, 101, 102, 103, 104],
[115, 116, 117, 118, 119],
[130, 131, 132, 133, 134]]),
array([[135, 136, 137, 138, 139],
[150, 151, 152, 153, 154],
[165, 166, 167, 168, 169],
[180, 181, 182, 183, 184],
[195, 196, 197, 198, 199],
[210, 211, 212, 213, 214],
[225, 226, 227, 228, 229],
[240, 241, 242, 243, 244],
[255, 256, 257, 258, 259]]),
array([[140, 141, 142, 143, 144],
[155, 156, 157, 158, 159],
[170, 171, 172, 173, 174],
[185, 186, 187, 188, 189],
[200, 201, 202, 203, 204],
[215, 216, 217, 218, 219],
[230, 231, 232, 233, 234],
[245, 246, 247, 248, 249],
[260, 261, 262, 263, 264]]),
array([[145, 146, 147, 148, 149],
[160, 161, 162, 163, 164],
[175, 176, 177, 178, 179],
[190, 191, 192, 193, 194],
[205, 206, 207, 208, 209],
[220, 221, 222, 223, 224],
[235, 236, 237, 238, 239],
[250, 251, 252, 253, 254],
[265, 266, 267, 268, 269]])]
``` |
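As a quick sanity check on the generator above (reusing the `M` and `get_chunks` names defined in that answer), the 2x3 split of the 18x15 matrix should yield six (9, 5) blocks:

```
chunks = list(get_chunks(M, 2, 3))
assert len(chunks) == 6
assert all(c.shape == (9, 5) for c in chunks)
```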
44,782,476 | What is the most pythonic way of splitting a NumPy matrix (a 2-D array) into equal chunks both vertically and horizontally?
For example :
```
aa = np.reshape(np.arange(270),(18,15)) # a 18x15 matrix
```
then a "function" like
```
ab = np.split2d(aa,(2,3))
```
would result in a list of 6 matrices shaped (9,5) each. The first guess is to combine hsplit, map and vsplit, but how does the map have to be applied if there are two parameters to define for it, like:
```
map(np.vsplit(@,3),np.hsplit(aa,2))
``` | 2017/06/27 | [
"https://Stackoverflow.com/questions/44782476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8044432/"
] | Here's one approach staying within NumPy environment -
```
def view_as_blocks(arr, BSZ):
# arr is input array, BSZ is block-size
m,n = arr.shape
M,N = BSZ
return arr.reshape(m//M, M, n//N, N).swapaxes(1,2).reshape(-1,M,N)
```
**Sample runs**
1) Actual big case to verify shapes :
```
In [41]: aa = np.reshape(np.arange(270),(18,15))
In [42]: view_as_blocks(aa, (9,5)).shape
Out[42]: (6, 9, 5)
```
2) Small case to manually verify values:
```
In [43]: aa = np.reshape(np.arange(36),(6,6))
In [44]: aa
Out[44]:
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
In [45]: view_as_blocks(aa, (2,3)) # Blocks of shape (2,3)
Out[45]:
array([[[ 0, 1, 2],
[ 6, 7, 8]],
[[ 3, 4, 5],
[ 9, 10, 11]],
[[12, 13, 14],
[18, 19, 20]],
[[15, 16, 17],
[21, 22, 23]],
[[24, 25, 26],
[30, 31, 32]],
[[27, 28, 29],
[33, 34, 35]]])
```
If you are willing to work with other libraries, [`scikit-image`](http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_blocks) could be of use here, like so -
```
from skimage.util import view_as_blocks as viewB
out = viewB(aa, tuple(BSZ)).reshape(-1,*BSZ)
```
---
Runtime test -
```
In [103]: aa = np.reshape(np.arange(270),(18,15))
# @EFT's soln
In [99]: %timeit split_2d(aa, (2,3))
10000 loops, best of 3: 23.3 µs per loop
# @glegoux's soln-1
In [100]: %timeit list(get_chunks(aa, 2,3))
100000 loops, best of 3: 3.7 µs per loop
# @glegoux's soln-2
In [111]: %timeit list(get_chunks2(aa, 9, 5))
100000 loops, best of 3: 3.39 µs per loop
# Proposed in this post
In [101]: %timeit view_as_blocks(aa, (9,5))
1000000 loops, best of 3: 1.86 µs per loop
```
Please note that I have used `(2,3)` for `split_2d` and `get_chunks` as by their definitions, they are using that as the number of blocks. In my case with `view_as_blocks`, I have the parameter `BSZ` indicating the block size. So, I have `(9,5)` there. `get_chunks2` follows the same format as `view_as_blocks`. The outputs should represent the same there. | For a simpler solution, I used `np.array_split` together with transforming the matrices. So let's say that I want it split into 3 equal chunks vertically and 2 equal chunks horizontally, then:
```
# Create your matrix
matrix = np.reshape(np.arange(270),(18,15)) # a 18x15 matrix
# Container for your final matrices
final_matrices = []
# Then split into 3 equal chunks vertically
vertically_split_matrices = np.array_split(matrix, 3)
for v_m in vertically_split_matrices:
# Then split the transformed matrices equally
m1, m2 = np.array_split(v_m.T, 2)
# And transform the matrices back
final_matrices.append(m1.T)
final_matrices.append(m2.T)
```
So I end up with 6 chunks, all of which are the same height and the same width. |
44,782,476 | What is the most pythonic way of splitting a NumPy matrix (a 2-D array) into equal chunks both vertically and horizontally?
For example :
```
aa = np.reshape(np.arange(270),(18,15)) # a 18x15 matrix
```
then a "function" like
```
ab = np.split2d(aa,(2,3))
```
would result in a list of 6 matrices shaped (9,5) each. The first guess is to combine hsplit, map and vsplit, but how does the map have to be applied if there are two parameters to define for it, like:
```
map(np.vsplit(@,3),np.hsplit(aa,2))
``` | 2017/06/27 | [
"https://Stackoverflow.com/questions/44782476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8044432/"
] | You could use `np.split` & `np.concatenate`, the latter to allow the second split to be conducted in a single step:
```
def split_2d(array, splits):
x, y = splits
return np.split(np.concatenate(np.split(array, y, axis=1)), x*y)
ab = split_2d(aa,(2,3))
ab[0].shape
Out[95]: (9, 5)
len(ab)
Out[96]: 6
```
This also seems like it should be relatively straightforward to generalize to the n-dim case, though I haven't followed that thought all the way through just yet.
**Edit:**
For a single array as output, just add `np.stack`:
```
np.stack(ab).shape
Out[99]: (6, 9, 5)
``` | To cut, this matrix (18,15) :
```
+-+-+-+
+ +
+-+-+-+
```
in 2x3 blocks (9,5) like it :
```
+-+-+-+
+-+-+-+
+-+-+-+
```
Do:
```
from pprint import pprint
import numpy as np
M = np.reshape(np.arange(18*15),(18,15))
def get_chunks(M, n, p):
n = len(M)//n
p = len(M[0])//p
for i in range(0, len(M), n):
for j in range(0, len(M[0]), p):
yield M[i:i+n,j:j+p]
def get_chunks2(M, n, p):
for i in range(0, len(M), n):
for j in range(0, len(M[0]), p):
yield M[i:i+n,j:j+p]
# list(get_chunks2(M, 9, 5)) same result more faster
chunks = list(get_chunks(M, 2, 3))
pprint(chunks)
```
Output:
```
[array([[ 0, 1, 2, 3, 4],
[ 15, 16, 17, 18, 19],
[ 30, 31, 32, 33, 34],
[ 45, 46, 47, 48, 49],
[ 60, 61, 62, 63, 64],
[ 75, 76, 77, 78, 79],
[ 90, 91, 92, 93, 94],
[105, 106, 107, 108, 109],
[120, 121, 122, 123, 124]]),
array([[ 5, 6, 7, 8, 9],
[ 20, 21, 22, 23, 24],
[ 35, 36, 37, 38, 39],
[ 50, 51, 52, 53, 54],
[ 65, 66, 67, 68, 69],
[ 80, 81, 82, 83, 84],
[ 95, 96, 97, 98, 99],
[110, 111, 112, 113, 114],
[125, 126, 127, 128, 129]]),
array([[ 10, 11, 12, 13, 14],
[ 25, 26, 27, 28, 29],
[ 40, 41, 42, 43, 44],
[ 55, 56, 57, 58, 59],
[ 70, 71, 72, 73, 74],
[ 85, 86, 87, 88, 89],
[100, 101, 102, 103, 104],
[115, 116, 117, 118, 119],
[130, 131, 132, 133, 134]]),
array([[135, 136, 137, 138, 139],
[150, 151, 152, 153, 154],
[165, 166, 167, 168, 169],
[180, 181, 182, 183, 184],
[195, 196, 197, 198, 199],
[210, 211, 212, 213, 214],
[225, 226, 227, 228, 229],
[240, 241, 242, 243, 244],
[255, 256, 257, 258, 259]]),
array([[140, 141, 142, 143, 144],
[155, 156, 157, 158, 159],
[170, 171, 172, 173, 174],
[185, 186, 187, 188, 189],
[200, 201, 202, 203, 204],
[215, 216, 217, 218, 219],
[230, 231, 232, 233, 234],
[245, 246, 247, 248, 249],
[260, 261, 262, 263, 264]]),
array([[145, 146, 147, 148, 149],
[160, 161, 162, 163, 164],
[175, 176, 177, 178, 179],
[190, 191, 192, 193, 194],
[205, 206, 207, 208, 209],
[220, 221, 222, 223, 224],
[235, 236, 237, 238, 239],
[250, 251, 252, 253, 254],
[265, 266, 267, 268, 269]])]
``` |
44,782,476 | What is the most pythonic way of splitting a NumPy matrix (a 2-D array) into equal chunks both vertically and horizontally?
For example :
```
aa = np.reshape(np.arange(270),(18,15)) # a 18x15 matrix
```
then a "function" like
```
ab = np.split2d(aa,(2,3))
```
would result in a list of 6 matrices shaped (9,5) each. The first guess is to combine hsplit, map and vsplit, but how does the map have to be applied if there are two parameters to define for it, like:
```
map(np.vsplit(@,3),np.hsplit(aa,2))
``` | 2017/06/27 | [
"https://Stackoverflow.com/questions/44782476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8044432/"
] | To cut, this matrix (18,15) :
```
+-+-+-+
+ +
+-+-+-+
```
in 2x3 blocks (9,5) like it :
```
+-+-+-+
+-+-+-+
+-+-+-+
```
Do:
```
from pprint import pprint
import numpy as np
M = np.reshape(np.arange(18*15),(18,15))
def get_chunks(M, n, p):
n = len(M)//n
p = len(M[0])//p
for i in range(0, len(M), n):
for j in range(0, len(M[0]), p):
yield M[i:i+n,j:j+p]
def get_chunks2(M, n, p):
for i in range(0, len(M), n):
for j in range(0, len(M[0]), p):
yield M[i:i+n,j:j+p]
# list(get_chunks2(M, 9, 5)) same result more faster
chunks = list(get_chunks(M, 2, 3))
pprint(chunks)
```
Output:
```
[array([[ 0, 1, 2, 3, 4],
[ 15, 16, 17, 18, 19],
[ 30, 31, 32, 33, 34],
[ 45, 46, 47, 48, 49],
[ 60, 61, 62, 63, 64],
[ 75, 76, 77, 78, 79],
[ 90, 91, 92, 93, 94],
[105, 106, 107, 108, 109],
[120, 121, 122, 123, 124]]),
array([[ 5, 6, 7, 8, 9],
[ 20, 21, 22, 23, 24],
[ 35, 36, 37, 38, 39],
[ 50, 51, 52, 53, 54],
[ 65, 66, 67, 68, 69],
[ 80, 81, 82, 83, 84],
[ 95, 96, 97, 98, 99],
[110, 111, 112, 113, 114],
[125, 126, 127, 128, 129]]),
array([[ 10, 11, 12, 13, 14],
[ 25, 26, 27, 28, 29],
[ 40, 41, 42, 43, 44],
[ 55, 56, 57, 58, 59],
[ 70, 71, 72, 73, 74],
[ 85, 86, 87, 88, 89],
[100, 101, 102, 103, 104],
[115, 116, 117, 118, 119],
[130, 131, 132, 133, 134]]),
array([[135, 136, 137, 138, 139],
[150, 151, 152, 153, 154],
[165, 166, 167, 168, 169],
[180, 181, 182, 183, 184],
[195, 196, 197, 198, 199],
[210, 211, 212, 213, 214],
[225, 226, 227, 228, 229],
[240, 241, 242, 243, 244],
[255, 256, 257, 258, 259]]),
array([[140, 141, 142, 143, 144],
[155, 156, 157, 158, 159],
[170, 171, 172, 173, 174],
[185, 186, 187, 188, 189],
[200, 201, 202, 203, 204],
[215, 216, 217, 218, 219],
[230, 231, 232, 233, 234],
[245, 246, 247, 248, 249],
[260, 261, 262, 263, 264]]),
array([[145, 146, 147, 148, 149],
[160, 161, 162, 163, 164],
[175, 176, 177, 178, 179],
[190, 191, 192, 193, 194],
[205, 206, 207, 208, 209],
[220, 221, 222, 223, 224],
[235, 236, 237, 238, 239],
[250, 251, 252, 253, 254],
[265, 266, 267, 268, 269]])]
``` | For a simpler solution, I used `np.array_split` together with transforming the matrices. So let's say that I want it split into 3 equal chunks vertically and 2 equal chunks horizontally, then:
```
# Create your matrix
matrix = np.reshape(np.arange(270),(18,15)) # a 18x15 matrix
# Container for your final matrices
final_matrices = []
# Then split into 3 equal chunks vertically
vertically_split_matrices = np.array_split(matrix, 3)
for v_m in vertically_split_matrices:
# Then split the transformed matrices equally
m1, m2 = np.array_split(v_m.T, 2)
# And transform the matrices back
final_matrices.append(m1.T)
final_matrices.append(m2.T)
```
So I end up with 6 chunks, all of which are the same height and the same width. |
44,782,476 | What is the most pythonic way of splitting a NumPy matrix (a 2-D array) into equal chunks both vertically and horizontally?
For example :
```
aa = np.reshape(np.arange(270),(18,15)) # a 18x15 matrix
```
then a "function" like
```
ab = np.split2d(aa,(2,3))
```
would result in a list of 6 matrices shaped (9,5) each. The first guess is to combine hsplit, map and vsplit, but how does the map have to be applied if there are two parameters to define for it, like:
```
map(np.vsplit(@,3),np.hsplit(aa,2))
``` | 2017/06/27 | [
"https://Stackoverflow.com/questions/44782476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8044432/"
] | You could use `np.split` & `np.concatenate`, the latter to allow the second split to be conducted in a single step:
```
def split_2d(array, splits):
x, y = splits
return np.split(np.concatenate(np.split(array, y, axis=1)), x*y)
ab = split_2d(aa,(2,3))
ab[0].shape
Out[95]: (9, 5)
len(ab)
Out[96]: 6
```
This also seems like it should be relatively straightforward to generalize to the n-dim case, though I haven't followed that thought all the way through just yet.
**Edit:**
For a single array as output, just add `np.stack`:
```
np.stack(ab).shape
Out[99]: (6, 9, 5)
``` | For a simpler solution, I used `np.array_split` together with transforming the matrices. So let's say that I want it split into 3 equal chunks vertically and 2 equal chunks horizontally, then:
```
# Create your matrix
matrix = np.reshape(np.arange(270),(18,15)) # a 18x15 matrix
# Container for your final matrices
final_matrices = []
# Then split into 3 equal chunks vertically
vertically_split_matrices = np.array_split(matrix, 3)
for v_m in vertically_split_matrices:
# Then split the transformed matrices equally
m1, m2 = np.array_split(v_m.T, 2)
# And transform the matrices back
final_matrices.append(m1.T)
final_matrices.append(m2.T)
```
So I end up with 6 chunks, all of which are the same height and the same width. |
13,170,258 | I am using the [Python Requests Module](http://docs.python-requests.org/en/latest/user/quickstart/) to datamine a website. As part of the datamining, I have to HTTP POST a form and check if it succeeded by checking the resulting URL. My question is, after the POST, is it possible to request the server to not send the entire page? I only need to check the URL, yet my program downloads the entire page and consumes unnecessary bandwidth. The code is very simple
```
import requests
r = requests.post(URL, payload)
if 'keyword' in r.url:
success
fail
``` | 2012/11/01 | [
"https://Stackoverflow.com/questions/13170258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1726304/"
] | An easy solution, if it's implementable for you, is to go low-level and use the socket library.
For example, say you need to send a POST with some data in its body. I used this in my crawler for one site.
```
import socket
from urllib import quote # POST body is escaped. use quote
req_header = "POST /{0} HTTP/1.1\r\nHost: www.yourtarget.com\r\nUser-Agent: For the lulz..\r\nContent-Type: application/x-www-form-urlencoded; charset=UTF-8\r\nContent-Length: {1}"
req_body = quote("data1=yourtestdata&data2=foo&data3=bar=")
req_url = "test.php"
header = req_header.format(req_url,str(len(req_body))) #plug in req_url as {0}
#and length of req_body as Content-length
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM) #create a socket
s.connect(("www.yourtarget.com",80)) #connect it
s.send(header+"\r\n\r\n"+req_body+"\r\n\r\n") # send header + two CRLFs + req_body + two CRLFs to complete the request
page = ""
while True:
    buf = s.recv(1024) # receive up to 1024 bytes; this should be enough to get the whole header in one read
    if not buf:
        break
    page += buf
    if "\r\n\r\n" in page: # if we have received the whole header (ending with 2x CRLF), stop
        break
s.close() # close the socket here. which should close the TCP connection even if data is still flowing in
# this should leave you with the response header, in which you should find a 302 redirect and your target URL in the "Location:" header
``` | It would help if you gave some more data, for example, a sample URL that you're trying to request. That being said, it seems to me that generally you're checking if you had the correct URL after your POST request using the following algorithm relying on redirection or HTTP 404 errors:
```
if original_url == returned request url:
correct url to a correctly made request
else:
wrong url and a wrongly made request
```
If this is the case, what you can do here is use the HTTP HEAD request (another type of HTTP request like GET, POST, etc.) in Python's `requests` library to get only the header and not also the page body. Then, you'd check the response code and redirection url (if present) to see if you made a request to a valid URL.
For example:
```py
def attempt_url(url):
'''Checks the url to see if it is valid, or returns a redirect or error.
Returns True if valid, False otherwise.'''
r = requests.head(url)
if r.status_code == 200:
return True
elif r.status_code in (301, 302):
if r.headers['location'] == url:
return True
else:
return False
elif r.status_code == 404:
return False
else:
        raise Exception("A status code we haven't prepared for has arisen!")
```
If this isn't quite what you're looking for, additional detail on your requirements would help. At the very least, this gets you the status code and headers without pulling all of the page data. |
13,170,258 | I am using the [Python Requests Module](http://docs.python-requests.org/en/latest/user/quickstart/) to datamine a website. As part of the datamining, I have to HTTP POST a form and check if it succeeded by checking the resulting URL. My question is, after the POST, is it possible to request the server to not send the entire page? I only need to check the URL, yet my program downloads the entire page and consumes unnecessary bandwidth. The code is very simple
```
import requests
r = requests.post(URL, payload)
if 'keyword' in r.url:
success
fail
``` | 2012/11/01 | [
"https://Stackoverflow.com/questions/13170258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1726304/"
] | An easy solution, if it's implementable for you, is to go low-level and use the socket library.
For example, say you need to send a POST with some data in its body. I used this in my crawler for one site.
```
import socket
from urllib import quote # POST body is escaped. use quote
req_header = "POST /{0} HTTP/1.1\r\nHost: www.yourtarget.com\r\nUser-Agent: For the lulz..\r\nContent-Type: application/x-www-form-urlencoded; charset=UTF-8\r\nContent-Length: {1}"
req_body = quote("data1=yourtestdata&data2=foo&data3=bar=")
req_url = "test.php"
header = req_header.format(req_url,str(len(req_body))) #plug in req_url as {0}
#and length of req_body as Content-length
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM) #create a socket
s.connect(("www.yourtarget.com",80)) #connect it
s.send(header+"\r\n\r\n"+req_body+"\r\n\r\n") # send header + two CRLFs + req_body + two CRLFs to complete the request
page = ""
while True:
    buf = s.recv(1024) # receive up to 1024 bytes; this should be enough to get the whole header in one read
    if not buf:
        break
    page += buf
    if "\r\n\r\n" in page: # if we have received the whole header (ending with 2x CRLF), stop
        break
s.close() # close the socket here. which should close the TCP connection even if data is still flowing in
# this should leave you with the response header, in which you should find a 302 redirect and your target URL in the "Location:" header
``` | There's a chance the site uses the [Post/Redirect/Get (PRG)](http://en.wikipedia.org/wiki/Post/Redirect/Get) pattern. If so, it's enough not to follow the redirect and to read the `Location` header from the response.
Example
```
>>> import requests
>>> response = requests.get('http://httpbin.org/redirect/1', allow_redirects=False)
>>> response.status_code
302
>>> response.headers['location']
'http://httpbin.org/get'
```
If you need more information on what would you get if you had followed redirection then you can use `HEAD` on the url given in `Location` header.
Example
```
>>> import requests
>>> response = requests.get('http://httpbin.org/redirect/1', allow_redirects=False)
>>> response.status_code
302
>>> response.headers['location']
'http://httpbin.org/get'
>>> response2 = requests.head(response.headers['location'])
>>> response2.status_code
200
>>> response2.headers
{'date': 'Wed, 07 Nov 2012 20:04:16 GMT', 'content-length': '352', 'content-type':
'application/json', 'connection': 'keep-alive', 'server': 'gunicorn/0.13.4'}
``` |
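Applied to the original POST, a sketch of the same idea might look like the following, assuming the server actually follows the PRG pattern (`URL` and `payload` are the placeholders from the question):

```
import requests

r = requests.post(URL, data=payload, allow_redirects=False)
# on a PRG response the target page is only named in the Location header, not transferred
if r.is_redirect and 'keyword' in r.headers.get('location', ''):
    print('success')
else:
    print('fail')
```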
51,442,437 | When I try to JPEG-decompress (old-style JPEG compression, not JPEG-LS and not JPEG2000) the RAW data, I get the following error:
```
Traceback (most recent call last):
File "raw-reader.py", line 766, in <module>
raw_image_data = imageio.imread(io.BytesIO(raw_packed_image_data))
File "/home/ian/.local/lib/python3.6/site-packages/imageio/core/functions.py", line 206, in imread
reader = read(uri, format, 'i', **kwargs)
File "/home/ian/.local/lib/python3.6/site-packages/imageio/core/functions.py", line 129, in get_reader
return format.get_reader(request)
File "/home/ian/.local/lib/python3.6/site-packages/imageio/core/format.py", line 168, in get_reader
return self.Reader(self, request)
File "/home/ian/.local/lib/python3.6/site-packages/imageio/core/format.py", line 217, in __init__
self._open(**self.request.kwargs.copy())
File "/home/ian/.local/lib/python3.6/site-packages/imageio/plugins/pillow.py", line 398, in _open
pilmode=pilmode, as_gray=as_gray)
File "/home/ian/.local/lib/python3.6/site-packages/imageio/plugins/pillow.py", line 122, in _open
self._im = factory(self._fp, '')
File "/home/ian/.local/lib/python3.6/site-packages/PIL/JpegImagePlugin.py", line 780, in jpeg_factory
im = JpegImageFile(fp, filename)
File "/home/ian/.local/lib/python3.6/site-packages/PIL/ImageFile.py", line 102, in __init__
self._open()
File "/home/ian/.local/lib/python3.6/site-packages/PIL/JpegImagePlugin.py", line 339, in _open
handler(self, i)
File "/home/ian/.local/lib/python3.6/site-packages/PIL/JpegImagePlugin.py", line 166, in SOF
raise SyntaxError("cannot handle %d-bit layers" % self.bits)
SyntaxError: cannot handle 14-bit layers
```
The RAW data in the image is 14-bit JPEG data, and imageio isn't able to read it. When I tried using pillow, it didn't even recognize the data as JPEG. My question now is: How can I decompress the data without writing my own JPEG decompressor, while keeping in mind that the data is 14 bits?
My code:
```
import io
import imageio
allbytes = open("raw_data.dat", "rb").read()
raw_packed_image_data = allbytes
raw_image_data = imageio.imread(io.BytesIO(raw_packed_image_data))
```
The file `raw_data.dat` is a file containing purely the RAW-Image data compressed with JPEG. Link to [raw\_data.dat](https://drive.google.com/uc?authuser=0&id=1DXhrXU2IAxldhvxwp3otX43JEnl0ofQH&export=download) | 2018/07/20 | [
"https://Stackoverflow.com/questions/51442437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8098236/"
] | `raw_data.dat` is a `JPEG Lossless, Nonhierarchical` file with 2 frames and a precision > 8-bit, a very rare format.
The [imagecodecs](https://pypi.org/project/imagecodecs/) package can read the file (assuming that the `_imagecodecs` Cython extension is present):
```
>>> from imagecodecs import jpegsof3_decode
>>> data = open('raw_data.dat', 'rb').read()
>>> image = jpegsof3_decode(data)
>>> image.shape
(3528, 2640, 2)
>>> image.dtype
dtype('uint16')
```
The LEADTOOLS SDK should also be able to read the file (not tested). | I believe the issue can be solved by using a different library to load the image. A similar underlying issue has been posted here: [How to combine 3 high range JPEG2000 images into single RGB one?](https://stackoverflow.com/questions/44157201/how-to-combine-3-high-range-jpeg2000-images-into-single-rgb-one). You can use something along these lines:
```
import matplotlib.image as mpimg
img_red = mpimg.imread('raw_data.dat')
```
Then you can use the read bitmap for further manipulation as if it was loaded via PIL/Pillow/imageio. |
73,432,052 | I am writing Python code to print all permutations of a list of numbers.
Below is my code:
```
a=[1,2,3,4]
for i in range(len(a)):
temp=a[:]
temp[0],temp[i]=temp[i],temp[0]
def p(temp,i):
k=i+1
if k ==len(a)-1:
print(temp)
return
temp[k],temp[k+1]=temp[k+1],temp[k]
p(temp,k)
temp[k],temp[k+1]=temp[k+1],temp[k]
p(temp,k)
p(temp,i=0)
```
The idea is to place every integer in the first position and permute the remaining elements.
That's what this for loop is doing:
```
for i in range(len(a)):
temp=a[:]
temp[0],temp[i]=temp[i],temp[0]
```
But, for every permutation starting with i, it only prints 4 permutations.
For example:
Starting with 1, the permutations should be:
```
[1,2,3,4]
[1,2,4,3]
[1,3,2,4]
[1,3,4,2]
[1,4,3,2]
[1,4,2,3]
```
But,its only printing
```
[1,2,3,4]
[1,2,4,3]
[1,3,2,4]
[1,3,4,2]
```
4 in the second place is not getting printed.
Missing:
```
[1,4,3,2]
[1,4,2,3]
```
Can anyone tell me what am I doing wrong? | 2022/08/21 | [
"https://Stackoverflow.com/questions/73432052",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19723986/"
] | Edited code:
```py
def p(temp, k=0):
if k == len(temp):
print(temp)
return
for i in range(k, len(temp)):
temp[k], temp[i] = temp[i], temp[k]
p(temp, k+1)
temp[k], temp[i] = temp[i], temp[k]
p([1,2,3,4])
```
result:
```py
[1, 2, 3, 4]
[1, 2, 4, 3]
[1, 3, 2, 4]
[1, 3, 4, 2]
[1, 4, 3, 2]
[1, 4, 2, 3]
[2, 1, 3, 4]
[2, 1, 4, 3]
[2, 3, 1, 4]
[2, 3, 4, 1]
[2, 4, 3, 1]
[2, 4, 1, 3]
[3, 2, 1, 4]
[3, 2, 4, 1]
[3, 1, 2, 4]
[3, 1, 4, 2]
[3, 4, 1, 2]
[3, 4, 2, 1]
[4, 2, 3, 1]
[4, 2, 1, 3]
[4, 3, 2, 1]
[4, 3, 1, 2]
[4, 1, 3, 2]
[4, 1, 2, 3]
``` | Here's a correction for your process:
```py
a=[1,2,3,4]
def p(temp,i):
k=i+1
if k ==len(a)-1:
print(temp)
return
temp[k],temp[k+1]=temp[k+1],temp[k]
prev = temp.copy()
p(temp,k)
temp[k],temp[k+1]=temp[k+1],temp[k]
p(temp,k)
temp[k],temp[k+1]=temp[k+1],temp[k]
if temp != prev:
p(temp,k)
for i in range(len(a)):
temp=a[:]
temp[0],temp[i]=temp[i],temp[0]
p(temp,i=0)
```
The problem was that you were only rearranging the temp list 2 times in each call of the function p from the for loop. Calling it three times allows '4' to appear in the second place of the permutation. The if statement I added checks that the previously rearranged temp is not the same as the current rearranged temp; this removes the duplicate results that occurred when calling the function p three times. Although I do recommend Hamid's answer. |
73,432,052 | I am writing Python code to print all permutations of a list of numbers.
Below is my code:
```
a=[1,2,3,4]
for i in range(len(a)):
temp=a[:]
temp[0],temp[i]=temp[i],temp[0]
def p(temp,i):
k=i+1
if k ==len(a)-1:
print(temp)
return
temp[k],temp[k+1]=temp[k+1],temp[k]
p(temp,k)
temp[k],temp[k+1]=temp[k+1],temp[k]
p(temp,k)
p(temp,i=0)
```
The idea is to place every integer in the first position and permute the remaining elements.
That's what this for loop is doing:
```
for i in range(len(a)):
temp=a[:]
temp[0],temp[i]=temp[i],temp[0]
```
But, for every permutation starting with i, it only prints 4 permutations.
For example:
Starting with 1, the permutations should be:
```
[1,2,3,4]
[1,2,4,3]
[1,3,2,4]
[1,3,4,2]
[1,4,3,2]
[1,4,2,3]
```
But,its only printing
```
[1,2,3,4]
[1,2,4,3]
[1,3,2,4]
[1,3,4,2]
```
4 in the second place is not getting printed.
Missing:
```
[1,4,3,2]
[1,4,2,3]
```
Can anyone tell me what am I doing wrong? | 2022/08/21 | [
"https://Stackoverflow.com/questions/73432052",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19723986/"
] | Edited code:
```py
def p(temp, k=0):
if k == len(temp):
print(temp)
return
for i in range(k, len(temp)):
temp[k], temp[i] = temp[i], temp[k]
p(temp, k+1)
temp[k], temp[i] = temp[i], temp[k]
p([1,2,3,4])
```
result:
```py
[1, 2, 3, 4]
[1, 2, 4, 3]
[1, 3, 2, 4]
[1, 3, 4, 2]
[1, 4, 3, 2]
[1, 4, 2, 3]
[2, 1, 3, 4]
[2, 1, 4, 3]
[2, 3, 1, 4]
[2, 3, 4, 1]
[2, 4, 3, 1]
[2, 4, 1, 3]
[3, 2, 1, 4]
[3, 2, 4, 1]
[3, 1, 2, 4]
[3, 1, 4, 2]
[3, 4, 1, 2]
[3, 4, 2, 1]
[4, 2, 3, 1]
[4, 2, 1, 3]
[4, 3, 2, 1]
[4, 3, 1, 2]
[4, 1, 3, 2]
[4, 1, 2, 3]
``` | The logic seems to be wrong. You are swapping the neighboring elements. Because of that, you are unable to generate all the permutations. Ideally, you want to recursively repeat the logic of fixing the first element. |
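A minimal sketch of that recursion (fix each element in the first position, then permute the rest), written independently of the swap-based version above:

```py
def perms(a):
    # base case: a list of length 0 or 1 has exactly one permutation
    if len(a) <= 1:
        return [a[:]]
    result = []
    for i in range(len(a)):
        rest = a[:i] + a[i + 1:]         # everything except the fixed element
        for p in perms(rest):
            result.append([a[i]] + p)    # a[i] fixed in the first position
    return result

print(len(perms([1, 2, 3, 4])))  # 24, including [1, 4, 3, 2] and [1, 4, 2, 3]
```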
30,697,197 | I have a text file of the following format:
```
Run#1 Step#1 > Connecting to server
Run#1 Step#2 > Connected OK
Run#1 Step#3 > Sending request: {
"path": "/testpage",
"time": "2015-06-07T00:00:00.000Z"
}
Run#1 Step#4 > Request sent OK
```
What I need to do is to process this file. It would be easier if each step was printed on a separate line:
```
Run#1 Step#1 > Connecting to server
Run#1 Step#2 > Connected OK
Run#1 Step#3 > Sending request: { "path": "/testpage", "time": "2015-06-07T00:00:00.000Z" }
Run#1 Step#4 > Request sent OK
```
How can I do this (either in bash or ruby/python/... script) ? | 2015/06/07 | [
"https://Stackoverflow.com/questions/30697197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1802425/"
] | 1) split("\n")
2) replace("Run#", "\nRun#")
3) delete first line ("\n") (see the sketch after the next answer's code) | If all your files look exactly like this one, you can solve your problem with this code:
```
file=open(filename,"r+")
lines = file.readlines()
for line in lines:
if (line.startswith("Run") and not "{" in line) or "}" in line:
print(line,end='')
else:
print(line.replace("\n",""), end='')
``` |
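A minimal sketch of the approach from the first answer (the split-then-join step is collapsed into a single replace); the filename is illustrative:

```
text = open("log.txt").read()
joined = text.replace("\n", "")            # undo the original line breaks
split = joined.replace("Run#", "\nRun#")   # start a new line before each Run#
print(split.split("\n", 1)[1])             # drop the empty first line
```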
30,697,197 | I have a text file of the following format:
```
Run#1 Step#1 > Connecting to server
Run#1 Step#2 > Connected OK
Run#1 Step#3 > Sending request: {
"path": "/testpage",
"time": "2015-06-07T00:00:00.000Z"
}
Run#1 Step#4 > Request sent OK
```
What I need to do is to process this file. It would be easier if each step was printed on a separate line:
```
Run#1 Step#1 > Connecting to server
Run#1 Step#2 > Connected OK
Run#1 Step#3 > Sending request: { "path": "/testpage", "time": "2015-06-07T00:00:00.000Z" }
Run#1 Step#4 > Request sent OK
```
How can I do this (either in bash or ruby/python/... script) ? | 2015/06/07 | [
"https://Stackoverflow.com/questions/30697197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1802425/"
] | GNU sed solution:
```
cat file | sed ':a; N; $! ba; s/\n//g; s/Run#/\nRun#/g;' | sed '1d;' > outputfile
``` | If all your files look exactly like this one, you can solve your problem with this code:
```
file=open(filename,"r+")
lines = file.readlines()
for line in lines:
if (line.startswith("Run") and not "{" in line) or "}" in line:
print(line,end='')
else:
print(line.replace("\n",""), end='')
``` |
30,697,197 | I have a text file of the following format:
```
Run#1 Step#1 > Connecting to server
Run#1 Step#2 > Connected OK
Run#1 Step#3 > Sending request: {
"path": "/testpage",
"time": "2015-06-07T00:00:00.000Z"
}
Run#1 Step#4 > Request sent OK
```
What I need to do is to process this file. It would be easier if each step was printed on a separate line:
```
Run#1 Step#1 > Connecting to server
Run#1 Step#2 > Connected OK
Run#1 Step#3 > Sending request: { "path": "/testpage", "time": "2015-06-07T00:00:00.000Z" }
Run#1 Step#4 > Request sent OK
```
How can I do this (either in bash or ruby/python/... script) ? | 2015/06/07 | [
"https://Stackoverflow.com/questions/30697197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1802425/"
] | Using Python, group the lines based on whether they start with `Run#`, and join any section of lines that doesn't start with `Run#` onto the previous `Run#` line, regardless of content. This also replaces the original file, and you don't need to read the whole file into memory:
```
from itertools import groupby
from tempfile import NamedTemporaryFile
from shutil import move
with open("file.txt") as f, NamedTemporaryFile("w",dir=".",delete=False) as out:
grouped = groupby(f, key=lambda x: not x.startswith("Run#"))
for k, v in grouped:
if not k:
v, nxt = "".join(v), next(grouped, " ")[1]
out.write("{}{}\n".format(v.rstrip(), "".join(map(str.strip, nxt))))
else:
out.writelines(v)
move(out.name,"file.txt")
```
Output:
```
Run#1 Step#1 > Connecting to server
Run#1 Step#2 > Connected OK
Run#1 Step#3 > Sending request: {"path": "/testpage","time": "2015-06-07T00:00:00.000Z"}
Run#1 Step#4 > Request sent OK
``` | If all your files look exactly like this one, you can solve your problem with this code:
```
file=open(filename,"r+")
lines = file.readlines()
for line in lines:
if (line.startswith("Run") and not "{" in line) or "}" in line:
print(line,end='')
else:
print(line.replace("\n",""), end='')
``` |
5,873,072 | I'm using the graphics library [Pyglet](http://en.wikipedia.org/wiki/Pyglet) to do some drawing and want to get the resulting image out as a Python list (so I can convert it to a [NumPy](http://en.wikipedia.org/wiki/NumPy) array).
Pyglet gives me a string of hex characters, like this: '\xff' (indicating a value of 255 at one pixel). How can I convert such a string to an int?
I've tried int('\xff', 16), but that doesn't work. Note that according to [the documentation](http://docs.python.org/reference/lexical_analysis.html#string-literals), '\xnn' is escaped and encoded as a hexedecimal char, but it doesn't tell me how to convert that to an int. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5873072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/736578/"
] | To get a NumPy array straight from a Python string, you can use
```
s = "\xff\x03"
a = numpy.frombuffer(s, numpy.uint8)
```
To get a list you can use
```
a = map(ord, s)
```
An alternative to a list in Python 2.6 or above is to use `bytearray(s)`. | Try something like this:
```
a = '\xff'
print int(a.encode('hex'), 16)
255
```
Edit: sorry, the previous version had a mistake - decode instead of encode. This works.
Edit 2: I actually misread the question, as commenters noted. This may be already obvious but in case someone finds it helpful the regular python list solution would be:
```
>>> a = '\xff\xfe'
>>> [str(ord(char)) for char in a]
['255', '254']
>>> ' '.join([str(ord(char)) for char in a])
'255 254'
``` |
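Both answers above are Python 2, where the pixel buffer is a `str`. A sketch of the same conversions in Python 3, where the buffer is `bytes` (the two byte values are illustrative):

```
import numpy as np

s = b"\xff\x03"
a = np.frombuffer(s, dtype=np.uint8)  # array([255, 3], dtype=uint8)
lst = list(s)                         # [255, 3]; bytes iterate as ints in Python 3
```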
5,873,072 | I'm using the graphics library [Pyglet](http://en.wikipedia.org/wiki/Pyglet) to do some drawing and want to get the resulting image out as a Python list (so I can convert it to a [NumPy](http://en.wikipedia.org/wiki/NumPy) array).
Pyglet gives me a string of hex characters, like this: '\xff' (indicating a value of 255 at one pixel). How can I convert such a string to an int?
I've tried int('\xff', 16), but that doesn't work. Note that according to [the documentation](http://docs.python.org/reference/lexical_analysis.html#string-literals), '\xnn' is escaped and encoded as a hexedecimal char, but it doesn't tell me how to convert that to an int. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5873072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/736578/"
] | To get a NumPy array straight from a Python string, you can use
```
s = "\xff\x03"
a = numpy.frombuffer(s, numpy.uint8)
```
To get a list you can use
```
a = map(ord, s)
```
An alternative to a list in Python 2.6 or above is to use `bytearray(s)`. | Yes, `\xff` is the print representation of a hex value in a byte. But `int()` doesn't work on the hex representation of a byte; it works on the string representation of a number. A number in the base-16 world is '13' or 'ab' or 'ff'. Consequently (but still funnily), `int('ff',16)` works fine. If you want to go that route, you need to get rid of the '\x' :-). |
5,873,072 | I'm using the graphics library [Pyglet](http://en.wikipedia.org/wiki/Pyglet) to do some drawing and want to get the resulting image out as a Python list (so I can convert it to a [NumPy](http://en.wikipedia.org/wiki/NumPy) array).
Pyglet gives me a string of hex characters, like this: '\xff' (indicating a value of 255 at one pixel). How can I convert such a string to an int?
I've tried int('\xff', 16), but that doesn't work. Note that according to [the documentation](http://docs.python.org/reference/lexical_analysis.html#string-literals), '\xnn' is escaped and encoded as a hexedecimal char, but it doesn't tell me how to convert that to an int. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5873072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/736578/"
] | To get a NumPy array straight from a Python string, you can use
```
s = "\xff\x03"
a = numpy.frombuffer(s, numpy.uint8)
```
To get a list you can use
```
a = map(ord, s)
```
An alternative to a list in Python 2.6 or above is to use `bytearray(s)`. | Here's a generalised approach that handles a hex string with variable length substrings, e.g.:
```
s = '5b1\n5\n3ad44'
```
The following code transforms a string with 3 million variable length hex substrings to a numpy integer array in 2 seconds (on a MacBook) through vectorisation:
```
import numpy as np, pandas as pd, cStringIO
s = ('5b1\n5\n3ad44\n' * 1000000)[:-1] # 3m item hex string (variable element length)
# change hex to 2 digit decimal
for i in range(0,10): s = s.replace(str(i),'0' + str(i)) # pad every digit 0-9 to two characters (range(0,9) would miss the digit 9)
for i in [['a','10'],['b','11'],['c','12'],['d','13'],['e','14'],['f','15']]:
s = s.replace(i[0],i[1])
# read string into numpy
n = np.array(pd.read_csv(cStringIO.StringIO(s), header=None)[[0]]).astype('int64')
# fix base
n = (n % 100) + 16 * ((n % 10000)/100) + 256 * ((n % 1000000)/10000) + 4096 * ((n % 100000000)/1000000) + 65536 * ((n % 10000000000)/100000000)
n[0:3] # confirm correct transformation to [1457, 5, 240964]
``` |
5,873,072 | I'm using the graphics library [Pyglet](http://en.wikipedia.org/wiki/Pyglet) to do some drawing and want to get the resulting image out as a Python list (so I can convert it to a [NumPy](http://en.wikipedia.org/wiki/NumPy) array).
Pyglet gives me a string of hex characters, like this: '\xff' (indicating a value of 255 at one pixel). How can I convert such a string to an int?
I've tried int('\xff', 16), but that doesn't work. Note that according to [the documentation](http://docs.python.org/reference/lexical_analysis.html#string-literals), '\xnn' is escaped and encoded as a hexedecimal char, but it doesn't tell me how to convert that to an int. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5873072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/736578/"
] | Try something like this:
```
a = '\xff'
print int(a.encode('hex'), 16)
255
```
Edit: sorry, the previous version had a mistake - decode instead of encode. This works.
Edit 2: I actually misread the question, as commenters noted. This may be already obvious but in case someone finds it helpful the regular python list solution would be:
```
>>> a = '\xff\xfe'
>>> [str(ord(char)) for char in a]
['255', '254']
>>> ' '.join([str(ord(char)) for char in a])
'255 254'
``` | Yes, `\xff` is the print representation of a hex value in a byte. But `int()` doesn't work on the hex representation of a byte; it works on the string representation of a number. A number in the base-16 world is '13' or 'ab' or 'ff'. Consequently (but still funnily), `int('ff',16)` works fine. If you want to go that route, you need to get rid of the '\x' :-). |
5,873,072 | I'm using the graphics library [Pyglet](http://en.wikipedia.org/wiki/Pyglet) to do some drawing and want to get the resulting image out as a Python list (so I can convert it to a [NumPy](http://en.wikipedia.org/wiki/NumPy) array).
Pyglet gives me a string of hex characters, like this: '\xff' (indicating a value of 255 at one pixel). How can I convert such a string to an int?
I've tried int('\xff', 16), but that doesn't work. Note that according to [the documentation](http://docs.python.org/reference/lexical_analysis.html#string-literals), '\xnn' is escaped and encoded as a hexedecimal char, but it doesn't tell me how to convert that to an int. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5873072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/736578/"
] | Try something like this:
```
a = '\xff'
print int(a.encode('hex'), 16)
255
```
Edit: sorry, the previous version had a mistake - decode instead of encode. This works.
Edit 2: I actually misread the question, as commenters noted. This may be already obvious but in case someone finds it helpful the regular python list solution would be:
```
>>> a = '\xff\xfe'
>>> [str(ord(char)) for char in a]
['255', '254']
>>> ' '.join([str(ord(char)) for char in a])
'255 254'
``` | Here's a generalised approach that handles a hex string with variable length substrings, e.g.:
```
s = '5b1\n5\n3ad44'
```
The following code transforms a string with 3 million variable length hex substrings to a numpy integer array in 2 seconds (on a MacBook) through vectorisation:
```
import numpy as np, pandas as pd, cStringIO
s = ('5b1\n5\n3ad44\n' * 1000000)[:-1] # 3m item hex string (variable element length)
# change hex to 2 digit decimal
for i in range(0,10): s = s.replace(str(i),'0' + str(i)) # pad every digit 0-9 to two characters (range(0,9) would miss the digit 9)
for i in [['a','10'],['b','11'],['c','12'],['d','13'],['e','14'],['f','15']]:
s = s.replace(i[0],i[1])
# read string into numpy
n = np.array(pd.read_csv(cStringIO.StringIO(s), header=None)[[0]]).astype('int64')
# fix base
n = (n % 100) + 16 * ((n % 10000)/100) + 256 * ((n % 1000000)/10000) + 4096 * ((n % 100000000)/1000000) + 65536 * ((n % 10000000000)/100000000)
n[0:3] # confirm correct transformation to [1457, 5, 240964]
``` |
5,873,072 | I'm using the graphics library [Pyglet](http://en.wikipedia.org/wiki/Pyglet) to do some drawing and want to get the resulting image out as a Python list (so I can convert it to a [NumPy](http://en.wikipedia.org/wiki/NumPy) array).
Pyglet gives me a string of hex characters, like this: '\xff' (indicating a value of 255 at one pixel). How can I convert such a string to an int?
I've tried int('\xff', 16), but that doesn't work. Note that according to [the documentation](http://docs.python.org/reference/lexical_analysis.html#string-literals), '\xnn' is escaped and encoded as a hexedecimal char, but it doesn't tell me how to convert that to an int. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5873072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/736578/"
] | Here's a generalised approach that handles a hex string with variable length substrings, e.g.:
```
s = '5b1\n5\n3ad44'
```
The following code transforms a string with 3 million variable length hex substrings to a numpy integer array in 2 seconds (on a MacBook) through vectorisation:
```
import io
import numpy as np, pandas as pd
s = ('5b1\n5\n3ad44\n' * 1000000)[:-1] # 3m item hex string (variable element length)
# change hex to 2 digit decimal (range must include the digit 9)
for i in range(0,10): s = s.replace(str(i),'0' + str(i))
for i in [['a','10'],['b','11'],['c','12'],['d','13'],['e','14'],['f','15']]:
    s = s.replace(i[0],i[1])
# read string into numpy
n = np.array(pd.read_csv(io.StringIO(s), header=None)[[0]]).astype('int64')
# fix base (integer division keeps the dtype integral on Python 3)
n = (n % 100) + 16 * ((n % 10000)//100) + 256 * ((n % 1000000)//10000) + 4096 * ((n % 100000000)//1000000) + 65536 * ((n % 10000000000)//100000000)
n[0:3] # confirm correct transformation to [1457, 5, 240964]
``` | Yes, `\xff` is the print representation of a hex value in a byte. But `int()` doesn't work on hex representations of bytes, but on string representation of numbers. A number in the base-16 world is '13' or 'ab' or 'ff'. Consequently (but still funnily), `int('ff',16)` works fine. If you want to go that route, you need to get rid of the '\x' :-). |
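Since the question's end goal is a NumPy array, a minimal Python 3 sketch (the byte string stands in for pyglet's buffer) that skips per-character conversion entirely:
```
import numpy as np

data = b'\xff\x00\x7f'                     # stand-in for the raw image bytes
arr = np.frombuffer(data, dtype=np.uint8)  # array([255,   0, 127], dtype=uint8)
print(arr)
```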
64,452,847 | I have an .h5 file which contains only the weights of a model that has been trained using Mask-RCNN and the Keras framework to perform object detection. I would like to load this model with Tensorflow + Keras in Python and run object detection on custom images.
I am working with Python 3.8.5, and I have tried to load this model using the `keras.models.load_model()` function, but when trying to do so I receive the error:
>
> ValueError: No model found in config file
>
>
>
I understand this means the .h5 file was saved with only the model weights, and that to make use of those weights I must load them into a model with the same architecture.
I have used the h5py Python module to view the groups inside the .h5 file in an attempt to understand the architecture, but I see only keys such as the ones below (there are many, many more like this, but this is a general overview of the keys I see):
>
> "activation\_1", "add\_32", "bn2a\_branch1", "res5c\_out" etc.
>
>
>
These groups sometimes have dataset members, which, when accessed, provide information similar to the following:
```
<HDF5 group "/bn4v_branch2a/bn4v_branch2a" (4 members)>
<HDF5 dataset "beta:0": shape (256,), type "<f4">
<HDF5 dataset "gamma:0": shape (256,), type "<f4">
<HDF5 dataset "moving_mean:0": shape (256,), type "<f4">
<HDF5 dataset "moving_variance:0": shape (256,), type "<f4">
```
I can assume some things, like "activation\_1" probably correspond to activation layers, but the rest of the layers don't seem related to things like Dense, LeakyReLU, BatchNormalization layers, etc. that I'm used to seeing/using when building a model.
What do I need to do here? How can I determine the architecture of a Keras + Mask-RCNN model from an .h5 file with only model weights?
Thank you, any help is greatly appreciated! | 2020/10/20 | [
"https://Stackoverflow.com/questions/64452847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7337087/"
] | You can find it in 'model\_config' of the attributes of the root group. Simply check:
```
f = h5py.File(your_model_name, 'r')
f.attrs.get('model_config')
```
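If the attribute is present it is a JSON string, so it can be parsed; a minimal sketch (the file name is a placeholder, and the attribute only exists when the full model, not just the weights, was saved):
```
import json
import h5py

with h5py.File('model.h5', 'r') as f:   # placeholder file name
    raw = f.attrs.get('model_config')
    if raw is not None:                 # absent in weights-only files
        cfg = json.loads(raw)
        print([layer['class_name'] for layer in cfg['config']['layers']])
```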
There you'll find all the layer classes with name, shape, activation function, etc. | Try loading the model into another variable then call the model.summary() function
```
model_2 = load_model('old_model.h5')
model_2.summary()
```
This should show the layers of the model like so:
[Summary of model](https://i.stack.imgur.com/asEyF.png) |
64,452,847 | I have an .h5 file which contains only the weights of a model that has been trained using Mask-RCNN and the Keras framework to perform object detection. I would like to load this model with Tensorflow + Keras in Python and run object detection on custom images.
I am working with Python 3.8.5, and I have tried to load this model using the `keras.models.load_model()` function, but when trying to do so I receive the error:
>
> ValueError: No model found in config file
>
>
>
I understand this means the .h5 file was saved with only the model weights, and that to make use of those weights I must load them into a model with the same architecture.
I have used the h5py Python module to view the groups inside the .h5 file in an attempt to understand the architecture, but I see only keys such as the ones below (there are many, many more like this, but this is a general overview of the keys I see):
>
> "activation\_1", "add\_32", "bn2a\_branch1", "res5c\_out" etc.
>
>
>
These groups sometimes have dataset members, which, when accessed, provide information similar to the following:
```
<HDF5 group "/bn4v_branch2a/bn4v_branch2a" (4 members)>
<HDF5 dataset "beta:0": shape (256,), type "<f4">
<HDF5 dataset "gamma:0": shape (256,), type "<f4">
<HDF5 dataset "moving_mean:0": shape (256,), type "<f4">
<HDF5 dataset "moving_variance:0": shape (256,), type "<f4">
```
I can assume some things, like "activation\_1" probably correspond to activation layers, but the rest of the layers don't seem related to things like Dense, LeakyReLU, BatchNormalization layers, etc. that I'm used to seeing/using when building a model.
What do I need to do here? How can I determine the architecture of a Keras + Mask-RCNN model from an .h5 file with only model weights?
Thank you, any help is greatly appreciated! | 2020/10/20 | [
"https://Stackoverflow.com/questions/64452847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7337087/"
] | You can find it in 'model\_config' of the attributes of the root group. Simply check:
```
f = h5py.File(your_model_name, 'r')
f.attrs.get('model_config')
```
There you'll find all the layer classes with name, shape, activation function, etc. | You might be able to glean some information from the output you are getting. E.g.
```
<HDF5 group "/bn4v_branch2a/bn4v_branch2a" (4 members)>
<HDF5 dataset "beta:0": shape (256,), type "<f4">
<HDF5 dataset "gamma:0": shape (256,), type "<f4">
<HDF5 dataset "moving_mean:0": shape (256,), type "<f4">
<HDF5 dataset "moving_variance:0": shape (256,), type "<f4">
```
is a batch normalisation layer. I know this from writing manual conversion code to convert from pytorch to tensorflow.
Maybe you could build a test network of your own, extract the weights, and read the structure to see what the entries look like for layers you know, and then compare with your unknown model structure? |
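A minimal inspection sketch along those lines (the file name is a placeholder); `visititems` walks every group and dataset, so the known and unknown files can be compared name by name:
```
import h5py

with h5py.File('weights.h5', 'r') as f:  # placeholder path to the .h5 file
    # print each node's path plus its shape (datasets) or a group marker
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '(group)')))
```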
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | Let's say you have:
```
>>> d = np.linspace(0, 360, 10, endpoint=False)
>>> c = np.cos(np.radians(d))
```
If you don't mind having some brackets and such on the side, then you can simply concatenate column-wise using [`np.c_`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.c_.html), and display:
```
>>> print(np.c_[d, c])
[[ 0.00000000e+00 1.00000000e+00]
[ 3.60000000e+01 8.09016994e-01]
[ 7.20000000e+01 3.09016994e-01]
[ 1.08000000e+02 -3.09016994e-01]
[ 1.44000000e+02 -8.09016994e-01]
[ 1.80000000e+02 -1.00000000e+00]
[ 2.16000000e+02 -8.09016994e-01]
[ 2.52000000e+02 -3.09016994e-01]
[ 2.88000000e+02 3.09016994e-01]
[ 3.24000000e+02 8.09016994e-01]]
```
But if you care about removing them, one possibility is to use a simple regex:
```
>>> import re
>>> print(re.sub(r' *\n *', '\n',
np.array_str(np.c_[d, c]).replace('[', '').replace(']', '').strip()))
0.00000000e+00 1.00000000e+00
3.60000000e+01 8.09016994e-01
7.20000000e+01 3.09016994e-01
1.08000000e+02 -3.09016994e-01
1.44000000e+02 -8.09016994e-01
1.80000000e+02 -1.00000000e+00
2.16000000e+02 -8.09016994e-01
2.52000000e+02 -3.09016994e-01
2.88000000e+02 3.09016994e-01
3.24000000e+02 8.09016994e-01
```
I'm removing the brackets, and then passing it to the regex to remove the spaces on either side in each line.
[`np.array_str`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array_str.html) also lets you set the precision. For more control, you can use [`np.array2string`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array2string.html) instead. | Just in numpy with some format ideas, to use @MaxU 's syntax
```
a = np.array([[i, np.cos(np.deg2rad(i)), np.sin(np.deg2rad(i))]
for i in range(0,361,30)])
args = ["Angle", "Cos", "Sin"]
frmt = ("{:>8.0f}"+"{:>8.3f}"*2)
print(("{:^8}"*3).format(*args))
for i in a:
print(frmt.format(*i))
Angle Cos Sin
0 1.000 0.000
30 0.866 0.500
60 0.500 0.866
90 0.000 1.000
120 -0.500 0.866
150 -0.866 0.500
180 -1.000 0.000
210 -0.866 -0.500
240 -0.500 -0.866
270 -0.000 -1.000
300 0.500 -0.866
330 0.866 -0.500
360 1.000 -0.000
``` |
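For completeness, NumPy's own print options can give fixed-point output without any regex or manual `format` calls; a small sketch (the `np.printoptions` context manager needs NumPy 1.15+; older versions can use `np.set_printoptions`):
```
import numpy as np

d = np.linspace(0, 360, 10, endpoint=False)
c = np.cos(np.radians(d))
with np.printoptions(precision=3, suppress=True):  # fixed-point, 3 decimals
    print(np.c_[d, c])
```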
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | You can use python's `zip` function to go through the elements of both lists simultaneously.
```
import numpy as np
degreesVector = np.linspace(0.0, 360.0, 73)
cosinesVector = np.cos(np.radians(degreesVector))
for d, c in zip(degreesVector, cosinesVector):
    print(d, c)
```
And if you want to make a numpy array out of the degrees and cosine values, you can modify the `for` loop in this way:
```
table = []
for d, c in zip(degreesVector, cosinesVector):
table.append([d, c])
table = np.array(table)
```
And now on one line!
```
np.array([[d, c] for d, c in zip(degreesVector, cosinesVector)])
``` | Just in numpy with some format ideas, to use @MaxU 's syntax
```
a = np.array([[i, np.cos(np.deg2rad(i)), np.sin(np.deg2rad(i))]
for i in range(0,361,30)])
args = ["Angle", "Cos", "Sin"]
frmt = ("{:>8.0f}"+"{:>8.3f}"*2)
print(("{:^8}"*3).format(*args))
for i in a:
print(frmt.format(*i))
Angle Cos Sin
0 1.000 0.000
30 0.866 0.500
60 0.500 0.866
90 0.000 1.000
120 -0.500 0.866
150 -0.866 0.500
180 -1.000 0.000
210 -0.866 -0.500
240 -0.500 -0.866
270 -0.000 -1.000
300 0.500 -0.866
330 0.866 -0.500
360 1.000 -0.000
``` |
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | Let's say you have:
```
>>> d = np.linspace(0, 360, 10, endpoint=False)
>>> c = np.cos(np.radians(d))
```
If you don't mind having some brackets and such on the side, then you can simply concatenate column-wise using [`np.c_`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.c_.html), and display:
```
>>> print(np.c_[d, c])
[[ 0.00000000e+00 1.00000000e+00]
[ 3.60000000e+01 8.09016994e-01]
[ 7.20000000e+01 3.09016994e-01]
[ 1.08000000e+02 -3.09016994e-01]
[ 1.44000000e+02 -8.09016994e-01]
[ 1.80000000e+02 -1.00000000e+00]
[ 2.16000000e+02 -8.09016994e-01]
[ 2.52000000e+02 -3.09016994e-01]
[ 2.88000000e+02 3.09016994e-01]
[ 3.24000000e+02 8.09016994e-01]]
```
But if you care about removing them, one possibility is to use a simple regex:
```
>>> import re
>>> print(re.sub(r' *\n *', '\n',
np.array_str(np.c_[d, c]).replace('[', '').replace(']', '').strip()))
0.00000000e+00 1.00000000e+00
3.60000000e+01 8.09016994e-01
7.20000000e+01 3.09016994e-01
1.08000000e+02 -3.09016994e-01
1.44000000e+02 -8.09016994e-01
1.80000000e+02 -1.00000000e+00
2.16000000e+02 -8.09016994e-01
2.52000000e+02 -3.09016994e-01
2.88000000e+02 3.09016994e-01
3.24000000e+02 8.09016994e-01
```
I'm removing the brackets, and then passing it to the regex to remove the spaces on either side in each line.
[`np.array_str`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array_str.html) also lets you set the precision. For more control, you can use [`np.array2string`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array2string.html) instead. | You can use python's `zip` function to go through the elements of both lists simultaneously.
```
import numpy as np
degreesVector = np.linspace(0.0, 360.0, 73)
cosinesVector = np.cos(np.radians(degreesVector))
for d, c in zip(degreesVector, cosinesVector):
    print(d, c)
```
And if you want to make a numpy array out of the degrees and cosine values, you can modify the `for` loop in this way:
```
table = []
for d, c in zip(degreesVector, cosinesVector):
table.append([d, c])
table = np.array(table)
```
And now on one line!
```
np.array([[d, c] for d, c in zip(degreesVector, cosinesVector)])
``` |
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | You were close - but if you iterate over angles, just generate the `cosine` for that angle:
```
In [293]: for angle in range(0,60,10):
...: print('{0:8}{1:8.3f}'.format(angle, np.cos(np.radians(angle))))
...:
0 1.000
10 0.985
20 0.940
30 0.866
40 0.766
50 0.643
```
To work with arrays, you have lots of options:
```
In [294]: angles=np.linspace(0,60,7)
In [295]: cosines=np.cos(np.radians(angles))
```
iterate over an index:
```
In [297]: for i in range(angles.shape[0]):
...: print('{0:8}{1:8.3f}'.format(angles[i],cosines[i]))
```
Use `zip` to dish out the values 2 by 2:
```
for a,c in zip(angles, cosines):
print('{0:8}{1:8.3f}'.format(a,c))
```
A slight variant on that:
```
for ac in zip(angles, cosines):
print('{0:8}{1:8.3f}'.format(*ac))
```
You could concatenate the arrays together into a 2d array, and display that:
```
In [302]: np.vstack((angles, cosines)).T
Out[302]:
array([[ 0. , 1. ],
[ 10. , 0.98480775],
[ 20. , 0.93969262],
[ 30. , 0.8660254 ],
[ 40. , 0.76604444],
[ 50. , 0.64278761],
[ 60. , 0.5 ]])
In [318]: print(np.vstack((angles, cosines)).T)
[[ 0. 1. ]
[ 10. 0.98480775]
[ 20. 0.93969262]
[ 30. 0.8660254 ]
[ 40. 0.76604444]
[ 50. 0.64278761]
[ 60. 0.5 ]]
```
`np.column_stack` can do that without the transpose.
And you can pass that array to your formatting with:
```
for ac in np.vstack((angles, cosines)).T:
print('{0:8}{1:8.3f}'.format(*ac))
```
or you could write that to a `csv` style file with `savetxt` (which just iterates over the 'rows' of the 2d array and writes with `fmt`):
```
In [310]: np.savetxt('test.txt', np.vstack((angles, cosines)).T, fmt='%8.1f %8.3f')
In [311]: cat test.txt
0.0 1.000
10.0 0.985
20.0 0.940
30.0 0.866
40.0 0.766
50.0 0.643
60.0 0.500
```
Unfortunately `savetxt` requires the old style formatting. And trying to write to `sys.stdout` runs into byte v unicode string issues in Py3. | Just in numpy with some format ideas, to use @MaxU 's syntax
```
a = np.array([[i, np.cos(np.deg2rad(i)), np.sin(np.deg2rad(i))]
for i in range(0,361,30)])
args = ["Angle", "Cos", "Sin"]
frmt = ("{:>8.0f}"+"{:>8.3f}"*2)
print(("{:^8}"*3).format(*args))
for i in a:
print(frmt.format(*i))
Angle Cos Sin
0 1.000 0.000
30 0.866 0.500
60 0.500 0.866
90 0.000 1.000
120 -0.500 0.866
150 -0.866 0.500
180 -1.000 0.000
210 -0.866 -0.500
240 -0.500 -0.866
270 -0.000 -1.000
300 0.500 -0.866
330 0.866 -0.500
360 1.000 -0.000
``` |
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | You can use python's `zip` function to go through the elements of both lists simultaneously.
```
import numpy as np
degreesVector = np.linspace(0.0, 360.0, 73)
cosinesVector = np.cos(np.radians(degreesVector))
for d, c in zip(degreesVector, cosinesVector):
    print(d, c)
```
And if you want to make a numpy array out of the degrees and cosine values, you can modify the `for` loop in this way:
```
table = []
for d, c in zip(degreesVector, cosinesVector):
table.append([d, c])
table = np.array(table)
```
And now on one line!
```
np.array([[d, c] for d, c in zip(degreesVector, cosinesVector)])
``` | You were close - but if you iterate over angles, just generate the `cosine` for that angle:
```
In [293]: for angle in range(0,60,10):
...: print('{0:8}{1:8.3f}'.format(angle, np.cos(np.radians(angle))))
...:
0 1.000
10 0.985
20 0.940
30 0.866
40 0.766
50 0.643
```
To work with arrays, you have lots of options:
```
In [294]: angles=np.linspace(0,60,7)
In [295]: cosines=np.cos(np.radians(angles))
```
iterate over an index:
```
In [297]: for i in range(angles.shape[0]):
...: print('{0:8}{1:8.3f}'.format(angles[i],cosines[i]))
```
Use `zip` to dish out the values 2 by 2:
```
for a,c in zip(angles, cosines):
print('{0:8}{1:8.3f}'.format(a,c))
```
A slight variant on that:
```
for ac in zip(angles, cosines):
print('{0:8}{1:8.3f}'.format(*ac))
```
You could concatenate the arrays together into a 2d array, and display that:
```
In [302]: np.vstack((angles, cosines)).T
Out[302]:
array([[ 0. , 1. ],
[ 10. , 0.98480775],
[ 20. , 0.93969262],
[ 30. , 0.8660254 ],
[ 40. , 0.76604444],
[ 50. , 0.64278761],
[ 60. , 0.5 ]])
In [318]: print(np.vstack((angles, cosines)).T)
[[ 0. 1. ]
[ 10. 0.98480775]
[ 20. 0.93969262]
[ 30. 0.8660254 ]
[ 40. 0.76604444]
[ 50. 0.64278761]
[ 60. 0.5 ]]
```
`np.column_stack` can do that without the transpose.
And you can pass that array to your formatting with:
```
for ac in np.vstack((angles, cosines)).T:
print('{0:8}{1:8.3f}'.format(*ac))
```
or you could write that to a `csv` style file with `savetxt` (which just iterates over the 'rows' of the 2d array and writes with `fmt`):
```
In [310]: np.savetxt('test.txt', np.vstack((angles, cosines)).T, fmt='%8.1f %8.3f')
In [311]: cat test.txt
0.0 1.000
10.0 0.985
20.0 0.940
30.0 0.866
40.0 0.766
50.0 0.643
60.0 0.500
```
Unfortunately `savetxt` requires the old style formatting. And trying to write to `sys.stdout` runs into byte v unicode string issues in Py3. |
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | Pandas is a very convenient module for such tasks:
```
In [174]: import pandas as pd
...:
...: x = pd.DataFrame({'angle': np.linspace(0, 355, 355//5+1),
...: 'cos': np.cos(np.deg2rad(np.linspace(0, 355, 355//5+1)))})
...:
...: pd.options.display.max_rows = 20
...:
...: x
...:
Out[174]:
angle cos
0 0.0 1.000000
1 5.0 0.996195
2 10.0 0.984808
3 15.0 0.965926
4 20.0 0.939693
5 25.0 0.906308
6 30.0 0.866025
7 35.0 0.819152
8 40.0 0.766044
9 45.0 0.707107
.. ... ...
62 310.0 0.642788
63 315.0 0.707107
64 320.0 0.766044
65 325.0 0.819152
66 330.0 0.866025
67 335.0 0.906308
68 340.0 0.939693
69 345.0 0.965926
70 350.0 0.984808
71 355.0 0.996195
[72 rows x 2 columns]
``` | Just in numpy with some format ideas, to use @MaxU 's syntax
```
a = np.array([[i, np.cos(np.deg2rad(i)), np.sin(np.deg2rad(i))]
for i in range(0,361,30)])
args = ["Angle", "Cos", "Sin"]
frmt = ("{:>8.0f}"+"{:>8.3f}"*2)
print(("{:^8}"*3).format(*args))
for i in a:
print(frmt.format(*i))
Angle Cos Sin
0 1.000 0.000
30 0.866 0.500
60 0.500 0.866
90 0.000 1.000
120 -0.500 0.866
150 -0.866 0.500
180 -1.000 0.000
210 -0.866 -0.500
240 -0.500 -0.866
270 -0.000 -1.000
300 0.500 -0.866
330 0.866 -0.500
360 1.000 -0.000
``` |
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | Let's say you have:
```
>>> d = np.linspace(0, 360, 10, endpoint=False)
>>> c = np.cos(np.radians(d))
```
If you don't mind having some brackets and such on the side, then you can simply concatenate column-wise using [`np.c_`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.c_.html), and display:
```
>>> print(np.c_[d, c])
[[ 0.00000000e+00 1.00000000e+00]
[ 3.60000000e+01 8.09016994e-01]
[ 7.20000000e+01 3.09016994e-01]
[ 1.08000000e+02 -3.09016994e-01]
[ 1.44000000e+02 -8.09016994e-01]
[ 1.80000000e+02 -1.00000000e+00]
[ 2.16000000e+02 -8.09016994e-01]
[ 2.52000000e+02 -3.09016994e-01]
[ 2.88000000e+02 3.09016994e-01]
[ 3.24000000e+02 8.09016994e-01]]
```
But if you care about removing them, one possibility is to use a simple regex:
```
>>> import re
>>> print(re.sub(r' *\n *', '\n',
np.array_str(np.c_[d, c]).replace('[', '').replace(']', '').strip()))
0.00000000e+00 1.00000000e+00
3.60000000e+01 8.09016994e-01
7.20000000e+01 3.09016994e-01
1.08000000e+02 -3.09016994e-01
1.44000000e+02 -8.09016994e-01
1.80000000e+02 -1.00000000e+00
2.16000000e+02 -8.09016994e-01
2.52000000e+02 -3.09016994e-01
2.88000000e+02 3.09016994e-01
3.24000000e+02 8.09016994e-01
```
I'm removing the brackets, and then passing it to the regex to remove the spaces on either side in each line.
[`np.array_str`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array_str.html) also lets you set the precision. For more control, you can use [`np.array2string`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array2string.html) instead. | You were close - but if you iterate over angles, just generate the `cosine` for that angle:
```
In [293]: for angle in range(0,60,10):
...: print('{0:8}{1:8.3f}'.format(angle, np.cos(np.radians(angle))))
...:
0 1.000
10 0.985
20 0.940
30 0.866
40 0.766
50 0.643
```
To work with arrays, you have lots of options:
```
In [294]: angles=np.linspace(0,60,7)
In [295]: cosines=np.cos(np.radians(angles))
```
iterate over an index:
```
In [297]: for i in range(angles.shape[0]):
...: print('{0:8}{1:8.3f}'.format(angles[i],cosines[i]))
```
Use `zip` to dish out the values 2 by 2:
```
for a,c in zip(angles, cosines):
print('{0:8}{1:8.3f}'.format(a,c))
```
A slight variant on that:
```
for ac in zip(angles, cosines):
print('{0:8}{1:8.3f}'.format(*ac))
```
You could concatenate the arrays together into a 2d array, and display that:
```
In [302]: np.vstack((angles, cosines)).T
Out[302]:
array([[ 0. , 1. ],
[ 10. , 0.98480775],
[ 20. , 0.93969262],
[ 30. , 0.8660254 ],
[ 40. , 0.76604444],
[ 50. , 0.64278761],
[ 60. , 0.5 ]])
In [318]: print(np.vstack((angles, cosines)).T)
[[ 0. 1. ]
[ 10. 0.98480775]
[ 20. 0.93969262]
[ 30. 0.8660254 ]
[ 40. 0.76604444]
[ 50. 0.64278761]
[ 60. 0.5 ]]
```
`np.column_stack` can do that without the transpose.
And you can pass that array to your formatting with:
```
for ac in np.vstack((angles, cosines)).T:
print('{0:8}{1:8.3f}'.format(*ac))
```
or you could write that to a `csv` style file with `savetxt` (which just iterates over the 'rows' of the 2d array and writes with `fmt`):
```
In [310]: np.savetxt('test.txt', np.vstack((angles, cosines)).T, fmt='%8.1f %8.3f')
In [311]: cat test.txt
0.0 1.000
10.0 0.985
20.0 0.940
30.0 0.866
40.0 0.766
50.0 0.643
60.0 0.500
```
Unfortunately `savetxt` requires the old style formatting. And trying to write to `sys.stdout` runs into byte v unicode string issues in Py3. |
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | Side-by-Side Array Comparison using Numpy
=========================================
A built-in Numpy approach using the `column_stack((...))` method.
>
> **`numpy.column_stack((A, B))`** is a column stack with Numpy which allows you to compare two or more matrices/arrays.
>
>
>
Use the `numpy.column_stack((A, B))` method with a tuple. The tuple must be written with `()` parentheses, representing a single argument with **as many matrices/arrays as you want**.
```
import numpy as np
A = np.random.uniform(size=(10,1))
B = np.random.uniform(size=(10,1))
C = np.random.uniform(size=(10,1))
np.column_stack((A, B, C)) ## <-- Compare Side-by-Side
```
The result looks like this:
```
array([[0.40323596, 0.95947336, 0.21354263],
[0.18001121, 0.35467198, 0.47653884],
[0.12756083, 0.24272134, 0.97832504],
[0.95769626, 0.33855075, 0.76510239],
[0.45280595, 0.33575171, 0.74295859],
[0.87895151, 0.43396391, 0.27123183],
[0.17721346, 0.06578044, 0.53619146],
[0.71395251, 0.03525021, 0.01544952],
[0.19048783, 0.16578012, 0.69430883],
[0.08897691, 0.41104408, 0.58484384]])
```
Numpy `column_stack` is useful for AI/ML applications when comparing the predicted results with the expected answers. This determines the effectiveness of the Neural Net training. It is a quick way to detect where errors are in the network calculations. | You can use python's `zip` function to go through the elements of both lists simultaneously.
```
import numpy as np
degreesVector = np.linspace(0.0, 360.0, 73)
cosinesVector = np.cos(np.radians(degreesVector))
for d, c in zip(degreesVector, cosinesVector):
    print(d, c)
```
And if you want to make a numpy array out of the degrees and cosine values, you can modify the `for` loop in this way:
```
table = []
for d, c in zip(degreesVector, cosinesVector):
table.append([d, c])
table = np.array(table)
```
And now on one line!
```
np.array([[d, c] for d, c in zip(degreesVector, cosinesVector)])
``` |
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | Side-by-Side Array Comparison using Numpy
=========================================
A built-in Numpy approach using the `column_stack((...))` method.
>
> **`numpy.column_stack((A, B))`** is a column stack with Numpy which allows you to compare two or more matrices/arrays.
>
>
>
Use the `numpy.column_stack((A, B))` method with a tuple. The tuple must be written with `()` parentheses, representing a single argument with **as many matrices/arrays as you want**.
```
import numpy as np
A = np.random.uniform(size=(10,1))
B = np.random.uniform(size=(10,1))
C = np.random.uniform(size=(10,1))
np.column_stack((A, B, C)) ## <-- Compare Side-by-Side
```
The result looks like this:
```
array([[0.40323596, 0.95947336, 0.21354263],
[0.18001121, 0.35467198, 0.47653884],
[0.12756083, 0.24272134, 0.97832504],
[0.95769626, 0.33855075, 0.76510239],
[0.45280595, 0.33575171, 0.74295859],
[0.87895151, 0.43396391, 0.27123183],
[0.17721346, 0.06578044, 0.53619146],
[0.71395251, 0.03525021, 0.01544952],
[0.19048783, 0.16578012, 0.69430883],
[0.08897691, 0.41104408, 0.58484384]])
```
Numpy `column_stack` is useful for AI/ML applications when comparing the predicted results with the expected answers. This determines the effectiveness of the Neural Net training. It is a quick way to detect where errors are in the network calculations. | Pandas is a very convenient module for such tasks:
```
In [174]: import pandas as pd
...:
...: x = pd.DataFrame({'angle': np.linspace(0, 355, 355//5+1),
...: 'cos': np.cos(np.deg2rad(np.linspace(0, 355, 355//5+1)))})
...:
...: pd.options.display.max_rows = 20
...:
...: x
...:
Out[174]:
angle cos
0 0.0 1.000000
1 5.0 0.996195
2 10.0 0.984808
3 15.0 0.965926
4 20.0 0.939693
5 25.0 0.906308
6 30.0 0.866025
7 35.0 0.819152
8 40.0 0.766044
9 45.0 0.707107
.. ... ...
62 310.0 0.642788
63 315.0 0.707107
64 320.0 0.766044
65 325.0 0.819152
66 330.0 0.866025
67 335.0 0.906308
68 340.0 0.939693
69 345.0 0.965926
70 350.0 0.984808
71 355.0 0.996195
[72 rows x 2 columns]
``` |
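If the goal is the plain two-column text from the question, the DataFrame above can also be dumped without its index; a one-line sketch:
```
# print the table from the answer above as plain text, 3 decimals, no index
print(x.round(3).to_string(index=False))
```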
40,443,888 | I'm trying to create a table of cosines using numpy in python. I want to have the angle next to the cosine of the angle, so it looks something like this:
```
0.0 1.000 5.0 0.996 10.0 0.985 15.0 0.966
20.0 0.940 25.0 0.906 and so on.
```
I'm trying to do it using a for loop but I'm not sure how to get this to work.
Currently, I have .
Any suggestions? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7120485/"
] | Let's say you have:
```
>>> d = np.linspace(0, 360, 10, endpoint=False)
>>> c = np.cos(np.radians(d))
```
If you don't mind having some brackets and such on the side, then you can simply concatenate column-wise using [`np.c_`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.c_.html), and display:
```
>>> print(np.c_[d, c])
[[ 0.00000000e+00 1.00000000e+00]
[ 3.60000000e+01 8.09016994e-01]
[ 7.20000000e+01 3.09016994e-01]
[ 1.08000000e+02 -3.09016994e-01]
[ 1.44000000e+02 -8.09016994e-01]
[ 1.80000000e+02 -1.00000000e+00]
[ 2.16000000e+02 -8.09016994e-01]
[ 2.52000000e+02 -3.09016994e-01]
[ 2.88000000e+02 3.09016994e-01]
[ 3.24000000e+02 8.09016994e-01]]
```
But if you care about removing them, one possibility is to use a simple regex:
```
>>> import re
>>> print(re.sub(r' *\n *', '\n',
np.array_str(np.c_[d, c]).replace('[', '').replace(']', '').strip()))
0.00000000e+00 1.00000000e+00
3.60000000e+01 8.09016994e-01
7.20000000e+01 3.09016994e-01
1.08000000e+02 -3.09016994e-01
1.44000000e+02 -8.09016994e-01
1.80000000e+02 -1.00000000e+00
2.16000000e+02 -8.09016994e-01
2.52000000e+02 -3.09016994e-01
2.88000000e+02 3.09016994e-01
3.24000000e+02 8.09016994e-01
```
I'm removing the brackets, and then passing it to the regex to remove the spaces on either side in each line.
[`np.array_str`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array_str.html) also lets you set the precision. For more control, you can use [`np.array2string`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array2string.html) instead. | Pandas is a very convenient module for such tasks:
```
In [174]: import pandas as pd
...:
...: x = pd.DataFrame({'angle': np.linspace(0, 355, 355//5+1),
...: 'cos': np.cos(np.deg2rad(np.linspace(0, 355, 355//5+1)))})
...:
...: pd.options.display.max_rows = 20
...:
...: x
...:
Out[174]:
angle cos
0 0.0 1.000000
1 5.0 0.996195
2 10.0 0.984808
3 15.0 0.965926
4 20.0 0.939693
5 25.0 0.906308
6 30.0 0.866025
7 35.0 0.819152
8 40.0 0.766044
9 45.0 0.707107
.. ... ...
62 310.0 0.642788
63 315.0 0.707107
64 320.0 0.766044
65 325.0 0.819152
66 330.0 0.866025
67 335.0 0.906308
68 340.0 0.939693
69 345.0 0.965926
70 350.0 0.984808
71 355.0 0.996195
[72 rows x 2 columns]
``` |
13,175,374 | I recently wrote a client/server pair in Python using sockets, but the problem is that the client doesn't connect to the server from another network. I've tried port forwarding and making the internal IP address static. A question which really bothers me is: do I need an external/public IP address to make the client connect, and if that is the case, what do I do when the ISP changes my external IP address? Please give some suggestions, thanks.
```
code:
import os
import socket

PORT=8888
srvsock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
srvsock.setsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR, 1 )
srvsock.bind( ('', PORT) )
srvsock.listen( 10 )
print('server now listening on PORT ' + str(PORT))
while 1:
clisock, (remhost, remport) = srvsock.accept()
dl_information_file="server.txt"
if os.path.exists(dl_information_file):
f=open('server.txt','rb')
read=f.read()
clisock.send( read )
f.close()
``` | 2012/11/01 | [
"https://Stackoverflow.com/questions/13175374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779646/"
] | `for i=1:length(vector)` is giving you an index into the vector - it will always be consecutive because it represents the `first..last` position of the vector.
It sounds like you want to get an identifier *out* of the vector. You can do this within your existing loop: `id = vector(i)` (Matlab indexes with parentheses, not square brackets) | Have you thought about using Matlab structs?
```
s = struct(field1,value1,...,fieldN,valueN)
```
You can have an array 'people' of structs (of type person) which you can loop through...
```
for i=1:length(people)
if people(i).HasBeenFed == false
% feed this person...
end
end
``` |
13,175,374 | I recently wrote a client/server pair in Python using sockets, but the problem is that the client doesn't connect to the server from another network. I've tried port forwarding and making the internal IP address static. A question which really bothers me is: do I need an external/public IP address to make the client connect, and if that is the case, what do I do when the ISP changes my external IP address? Please give some suggestions, thanks.
```
code:
import os
import socket

PORT=8888
srvsock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
srvsock.setsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR, 1 )
srvsock.bind( ('', PORT) )
srvsock.listen( 10 )
print('server now listening on PORT ' + str(PORT))
while 1:
clisock, (remhost, remport) = srvsock.accept()
dl_information_file="server.txt"
if os.path.exists(dl_information_file):
f=open('server.txt','rb')
read=f.read()
clisock.send( read )
f.close()
``` | 2012/11/01 | [
"https://Stackoverflow.com/questions/13175374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779646/"
] | If you want to loop through an arbitrary vector, just use that vector directly in a for loop. For example:
```
vector = [3, 4, 7, 1, 1]
for i = vector
disp(i)
end
```
will output `3 4 7 1 1`. This is the equivalent of "for i in vector do something." | `for i=1:length(vector)` is giving you an index into the vector - it will always be consecutive because it represents the `first..last` position of the vector.
It sounds like you want to get an identifier *out* of the vector. You can do this within your existing loop: `id = vector(i)` (Matlab indexes with parentheses, not square brackets) |
13,175,374 | I recently wrote a client/server pair in Python using sockets, but the problem is that the client doesn't connect to the server from another network. I've tried port forwarding and making the internal IP address static. A question which really bothers me is: do I need an external/public IP address to make the client connect, and if that is the case, what do I do when the ISP changes my external IP address? Please give some suggestions, thanks.
```
code:
import os
import socket

PORT=8888
srvsock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
srvsock.setsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR, 1 )
srvsock.bind( ('', PORT) )
srvsock.listen( 10 )
print('server now listening on PORT ' + str(PORT))
while 1:
clisock, (remhost, remport) = srvsock.accept()
dl_information_file="server.txt"
if os.path.exists(dl_information_file):
f=open('server.txt','rb')
read=f.read()
clisock.send( read )
f.close()
``` | 2012/11/01 | [
"https://Stackoverflow.com/questions/13175374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779646/"
] | If you want to loop through an arbitrary vector, just use that vector directly in a for loop. For example:
```
vector = [3, 4, 7, 1, 1]
for i = vector
disp(i)
end
```
will output `3 4 7 1 1`. This is the equivalent of "for i in vector do something." | Have you thought about using Matlab structs?
```
s = struct(field1,value1,...,fieldN,valueN)
```
You can have an array 'people' of structs (of type person) which you can loop through...
```
for i=1:length(people)
if people(i).HasBeenFed == false
% feed this person...
end
end
``` |
69,506,993 | JSON code: Below is the JSON data format which I am pulling from the site using an API
```py
response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
```
Python Code: I am using the below code to parse the JSON data, but I am unable to split the filters into different columns. Specifically, inside each filter I want to make field, comparator, and value separate columns.
```py
import win32com.client as win32  # pywin32; needed for win32.Dispatch below

records=[]
for data in response['result']:
id = data['id']
title = data['title']
sharedWithOrganization = data['sharedWithOrganization']
ownerId = data['ownerId']
sharedWithUsers = '|'.join(data['sharedWithUsers'])
filters = data['filters']
print(filters)
records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible= True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws= wb.Worksheets(1)
ws.Name="Get_Views"
#assigning header value
header_labels=('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
ws.Cells(1, index+1).Value=val
row_tracker = 2
column_size = len(header_labels)
for row in records:
ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
row_tracker +=1
```
[](https://i.stack.imgur.com/nykar.png)
I am doing an API pull, I am getting this kind of format, and I am passing that JSON to Python to get the data into Excel, but I am unable to split the filters list data into separate columns. Can you please help me with this? | 2021/10/09 | [
"https://Stackoverflow.com/questions/69506993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904548/"
] | Try:
```
df = pd.Series(response).explode().apply(pd.Series).reset_index(drop=True)
df = df.join(df['filters'].explode().apply(pd.Series)).drop(columns=['filters'])
df['sharedWithUsers'] = df['sharedWithUsers'].str.join('|')
```
Output:
```
id title sharedWithOrganization ownerId sharedWithUsers field comparator value
0 1000 Fishing Team View True 324425 1223|w2qee3 tag5 == fishing
1 2000 Farming Team View False 00000 00000|11111 tag5 !@ farming
``` | You can see how I did it below. Because `filters` is a list, you can select its first item with [0], like datafield["filters"][0]["field"]. Also, you can create a CSV file and import that into Excel instead.
```
import json
import csv
rows = []
headers = ["Id", "Title", "SharedWithOrganization", "OwnerId","SharedWithUsers","field", "comparator", "value"]
for datafield in response["result"] :
susers = ""
for u in datafield["sharedWithUsers"] :
susers = susers + u + "|"
susers = susers[:-1]
if datafield["sharedWithOrganization"] :
boolval = "TRUE"
else :
boolval = "FALSE"
rows.append([datafield["id"], datafield["title"], boolval, datafield["ownerId"], susers, datafield["filters"][0]["field"], datafield["filters"][0]["comparator"], datafield["filters"][0]["value"]])
with open('responseOutput.csv', 'w', encoding='UTF8', newline='') as f:
writer = csv.writer(f)
# write the header
writer.writerow(headers)
# write multiple rows
writer.writerows(rows)
``` |
69,506,993 | JSON code: Below is the JSON data format which I am pulling from the site using an API
```py
response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
```
Python Code: I am using the below code to parse the JSON data, but I am unable to split the filters into different columns. Specifically, inside each filter I want to make field, comparator, and value separate columns.
```py
import win32com.client as win32  # pywin32; needed for win32.Dispatch below

records=[]
for data in response['result']:
id = data['id']
title = data['title']
sharedWithOrganization = data['sharedWithOrganization']
ownerId = data['ownerId']
sharedWithUsers = '|'.join(data['sharedWithUsers'])
filters = data['filters']
print(filters)
records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible= True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws= wb.Worksheets(1)
ws.Name="Get_Views"
#assigning header value
header_labels=('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
ws.Cells(1, index+1).Value=val
row_tracker = 2
column_size = len(header_labels)
for row in records:
ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
row_tracker +=1
```
[](https://i.stack.imgur.com/nykar.png)
I am doing an API pull, I am getting this kind of format, and I am passing that JSON to Python to get the data into Excel, but I am unable to split the filters list data into separate columns. Can you please help me with this? | 2021/10/09 | [
"https://Stackoverflow.com/questions/69506993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904548/"
] | Try:
```
df = pd.Series(response).explode().apply(pd.Series).reset_index(drop=True)
df = df.join(df['filters'].explode().apply(pd.Series)).drop(columns=['filters'])
df['sharedWithUsers'] = df['sharedWithUsers'].str.join('|')
```
Output:
```
id title sharedWithOrganization ownerId sharedWithUsers field comparator value
0 1000 Fishing Team View True 324425 1223|w2qee3 tag5 == fishing
1 2000 Farming Team View False 00000 00000|11111 tag5 !@ farming
``` | You can use [`pd.json_normalize`](https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.json_normalize.html) to normalize the JSON: specify different `record_path=` parameters to read to different depths, for the fields under tag `result` and tag `filters` respectively.
Then, join the 2 resultant dataframes together, as follows:
```
# read fields under tag `result`
df_result = pd.json_normalize(response, record_path=['result'])
# read fields under tag `filters` within `result`
df_filters = pd.json_normalize(response, record_path=['result', 'filters'])
# Join 2 resultant dataframes together
df = df_result.join(df_filters).drop('filters', axis=1)
# Join fields in `sharedWithUsers`
df['sharedWithUsers'] = df['sharedWithUsers'].str.join('|')
```
**Result:**
```
print(df)
id title sharedWithOrganization ownerId sharedWithUsers field comparator value
0 1000 Fishing Team View True 324425 1223|w2qee3 tag5 == fishing
1 2000 Farming Team View False 00000 00000|11111 tag5 !@ farming
``` |
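Once you have the flat DataFrame, you can also write it straight to a workbook instead of driving Excel over COM; a minimal sketch (needs the openpyxl package installed; the file and sheet names are placeholders):
```
# df is the flattened DataFrame built above
df.to_excel('views.xlsx', sheet_name='Get_Views', index=False)
```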
69,506,993 | JSON code: Below is the JSON data format which I am pulling from the site using an API
```py
response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
```
Python Code: I am using the below code to parse the JSON data, but I am unable to split the filters into different columns. Specifically, inside each filter I want to make field, comparator, and value separate columns.
```py
import win32com.client as win32  # pywin32; needed for win32.Dispatch below

records=[]
for data in response['result']:
id = data['id']
title = data['title']
sharedWithOrganization = data['sharedWithOrganization']
ownerId = data['ownerId']
sharedWithUsers = '|'.join(data['sharedWithUsers'])
filters = data['filters']
print(filters)
records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible= True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws= wb.Worksheets(1)
ws.Name="Get_Views"
#assigning header value
header_labels=('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
ws.Cells(1, index+1).Value=val
row_tracker = 2
column_size = len(header_labels)
for row in records:
ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
row_tracker +=1
```
[](https://i.stack.imgur.com/nykar.png)
I am doing an API pull, I am getting this kind of format, and I am passing that JSON to Python to get the data into Excel, but I am unable to split the filters list data into separate columns. Can you please help me with this? | 2021/10/09 | [
"https://Stackoverflow.com/questions/69506993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904548/"
] | Try:
```
df = pd.Series(response).explode().apply(pd.Series).reset_index(drop=True)
df = df.join(df['filters'].explode().apply(pd.Series)).drop(columns=['filters'])
df['sharedWithUsers'] = df['sharedWithUsers'].str.join('|')
```
Output:
```
id title sharedWithOrganization ownerId sharedWithUsers field comparator value
0 1000 Fishing Team View True 324425 1223|w2qee3 tag5 == fishing
1 2000 Farming Team View False 00000 00000|11111 tag5 !@ farming
``` | The code below generates `out.csv` which can be opened by Excel.
Note that the code **does not require** any external library.
```
import csv
response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
fields = ['id','title','sharedWithOrganization','ownerId','sharedWithUsers']
data = []
for entry in response['result']:
data.append(['|'.join(entry[f]) if isinstance(entry[f],list) else entry[f] for f in fields])
with open('out.csv','w',newline='') as f:
writer = csv.writer(f)
writer.writerow(fields)
for line in data:
writer.writerow(line)
```
out.csv
```
id,title,sharedWithOrganization,ownerId,sharedWithUsers
1000,Fishing Team View,True,324425,1223|w2qee3
2000,Farming Team View,False,00000,00000|11111
``` |
69,506,993 | JSON code: Below is the JSON data format which I am pulling from the site using an API
```py
response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
```
Python Code: I am using the below code to parse the JSON data, but I am unable to split the filters into different columns. Specifically, inside each filter I want to make field, comparator, and value separate columns.
```py
import win32com.client as win32  # pywin32; needed for win32.Dispatch below

records=[]
for data in response['result']:
id = data['id']
title = data['title']
sharedWithOrganization = data['sharedWithOrganization']
ownerId = data['ownerId']
sharedWithUsers = '|'.join(data['sharedWithUsers'])
filters = data['filters']
print(filters)
records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible= True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws= wb.Worksheets(1)
ws.Name="Get_Views"
#assigning header value
header_labels=('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
ws.Cells(1, index+1).Value=val
row_tracker = 2
column_size = len(header_labels)
for row in records:
ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
row_tracker +=1
```
[](https://i.stack.imgur.com/nykar.png)
I am doing an API pull, I am getting this kind of format, and I am passing that JSON to Python to get the data into Excel, but I am unable to split the filters list data into separate columns. Can you please help me with this? | 2021/10/09 | [
"https://Stackoverflow.com/questions/69506993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904548/"
] | The code below generates `out.csv` which can be opened by Excel.
Note that the code **does not require** any external library.
```
import csv
response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
fields = ['id','title','sharedWithOrganization','ownerId','sharedWithUsers']
data = []
for entry in response['result']:
data.append(['|'.join(entry[f]) if isinstance(entry[f],list) else entry[f] for f in fields])
with open('out.csv','w',newline='') as f:
writer = csv.writer(f)
writer.writerow(fields)
for line in data:
writer.writerow(line)
```
out.csv
```
id,title,sharedWithOrganization,ownerId,sharedWithUsers
1000,Fishing Team View,True,324425,1223|w2qee3
2000,Farming Team View,False,00000,00000|11111
``` | You can see how I did it below because that returns a list you can select the first item of the list with [0], like datafield["filters"][0]["field"], also you can create a csv file and import that in excel instead.
```
import json
import csv
rows = []
headers = ["Id", "Title", "SharedWithOrganization", "OwnerId","SharedWithUsers","field", "comparator", "value"]
for datafield in response["result"] :
susers = ""
for u in datafield["sharedWithUsers"] :
susers = susers + u + "|"
susers = susers[:-1]
if datafield["sharedWithOrganization"] :
boolval = "TRUE"
else :
boolval = "FALSE"
rows.append([datafield["id"], datafield["title"], boolval, datafield["ownerId"], susers, datafield["filters"][0]["field"], datafield["filters"][0]["comparator"], datafield["filters"][0]["value"]])
with open('responseOutput.csv', 'w', encoding='UTF8', newline='') as f:
writer = csv.writer(f)
# write the header
writer.writerow(headers)
# write multiple rows
writer.writerows(rows)
``` |
69,506,993 | JSON code: Below is the JSON data format which I am pulling from the site using an API
```py
response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
```
Python Code: I am using the below code to parse the JSON data, but I am unable to split the filters into different columns. Specifically, inside each filter I want to make field, comparator, and value separate columns.
```py
import win32com.client as win32  # pywin32; needed for win32.Dispatch below

records=[]
for data in response['result']:
id = data['id']
title = data['title']
sharedWithOrganization = data['sharedWithOrganization']
ownerId = data['ownerId']
sharedWithUsers = '|'.join(data['sharedWithUsers'])
filters = data['filters']
print(filters)
records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible= True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws= wb.Worksheets(1)
ws.Name="Get_Views"
#assigning header value
header_labels=('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
ws.Cells(1, index+1).Value=val
row_tracker = 2
column_size = len(header_labels)
for row in records:
ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
row_tracker +=1
```
[](https://i.stack.imgur.com/nykar.png)
I am doing an API pull, I am getting this kind of format, and I am passing that JSON to Python to get the data into Excel, but I am unable to split the filters list data into separate columns. Can you please help me with this? | 2021/10/09 | [
"https://Stackoverflow.com/questions/69506993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904548/"
] | Using [`DictWriter`](https://docs.python.org/3/library/csv.html#csv.DictWriter) you can write the required columns into a `.csv` file which you can open in Excel.
**Code:**
```py
from csv import DictWriter
response = { ... }
with open("result.csv", "w", newline="") as f:
writer = DictWriter(
f,
("id", "title", "sharedWithOrganization", "ownerId", "sharedWithUsers"),
extrasaction="ignore"
)
writer.writeheader()
for obj in response["result"]:
writer.writerow({**obj, "sharedWithUsers": "|".join(obj["sharedWithUsers"])})
```
Or you can use [`csv.writer`](https://docs.python.org/3/library/csv.html#csv.writer) *(it will consume a bit less memory, as it doesn't copy all fields)*:
```py
import csv
from operator import itemgetter
response = { ... }
keys = "id", "title", "sharedWithOrganization", "ownerId", "sharedWithUsers"
getter = itemgetter(*keys)
with open("result.csv", "w", newline="") as f:
writer = csv.writer(f)
writer.writerow(keys)
for obj in response["result"]:
row = getter(obj)
writer.writerow(row[:-1] + ("|".join(row[-1]),))
``` | You can see how I did it below. Because `filters` is a list, you can select its first item with [0], like datafield["filters"][0]["field"]. Also, you can create a CSV file and import that into Excel instead.
```
import json
import csv
rows = []
headers = ["Id", "Title", "SharedWithOrganization", "OwnerId","SharedWithUsers","field", "comparator", "value"]
for datafield in response["result"] :
susers = ""
for u in datafield["sharedWithUsers"] :
susers = susers + u + "|"
susers = susers[:-1]
if datafield["sharedWithOrganization"] :
boolval = "TRUE"
else :
boolval = "FALSE"
rows.append([datafield["id"], datafield["title"], boolval, datafield["ownerId"], susers, datafield["filters"][0]["field"], datafield["filters"][0]["comparator"], datafield["filters"][0]["value"]])
with open('responseOutput.csv', 'w', encoding='UTF8', newline='') as f:
writer = csv.writer(f)
# write the header
writer.writerow(headers)
# write multiple rows
writer.writerows(rows)
``` |
69,506,993 | JSON code: Below is the JSON data format which I am pulling from the site using an API
```py
response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
```
Python Code: I am using the below code to parse the JSON data, but I am unable to split the filters into different columns. Specifically, inside each filter I want to make field, comparator, and value separate columns.
```py
import win32com.client as win32  # pywin32; needed for win32.Dispatch below

records=[]
for data in response['result']:
id = data['id']
title = data['title']
sharedWithOrganization = data['sharedWithOrganization']
ownerId = data['ownerId']
sharedWithUsers = '|'.join(data['sharedWithUsers'])
filters = data['filters']
print(filters)
records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible= True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws= wb.Worksheets(1)
ws.Name="Get_Views"
#assigning header value
header_labels=('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
ws.Cells(1, index+1).Value=val
row_tracker = 2
column_size = len(header_labels)
for row in records:
ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
row_tracker +=1
```
[](https://i.stack.imgur.com/nykar.png)
I am doing an API pull and getting data in this format. I am passing that .json format to Python to get the data into Excel, but I am unable to split the filters list data into separate columns. Can you please help me with this? | 2021/10/09 | [
"https://Stackoverflow.com/questions/69506993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904548/"
] | The code below generates `out.csv` which can be opened by Excel.
Note that the code **does not require** any external library.
```
import csv
response={
    "result": [
        {
            "id": "1000",
            "title": "Fishing Team View",
            "sharedWithOrganization": True,
            "ownerId": "324425",
            "sharedWithUsers": ["1223", "w2qee3"],
            "filters": [
                {
                    "field": "tag5",
                    "comparator": "==",
                    "value": "fishing"
                }
            ]
        },
        {
            "id": "2000",
            "title": "Farming Team View",
            "sharedWithOrganization": False,
            "ownerId": "00000",
            "sharedWithUsers": [
                "00000",
                "11111"
            ],
            "filters": [
                {
                    "field": "tag5",
                    "comparator": "!@",
                    "value": "farming"
                }
            ]
        }
    ]
}
fields = ['id', 'title', 'sharedWithOrganization', 'ownerId', 'sharedWithUsers']
data = []
for entry in response['result']:
    data.append(['|'.join(entry[f]) if isinstance(entry[f], list) else entry[f] for f in fields])
with open('out.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(fields)
    for line in data:
        writer.writerow(line)
```
out.csv
```
id,title,sharedWithOrganization,ownerId,sharedWithUsers
1000,Fishing Team View,True,324425,1223|w2qee3
2000,Farming Team View,False,00000,00000|11111
``` | You can use [`pd.json_normalize`](https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.json_normalize.html) to normalize the JSON: specify different `record_path=` parameters to read to different depths for the fields under the `result` tag and the `filters` tag, respectively.
Then, join the 2 resultant dataframes together, as follows:
```
import pandas as pd

# read fields under tag `result`
df_result = pd.json_normalize(response, record_path=['result'])
# read fields under tag `filters` within `result`
df_filters = pd.json_normalize(response, record_path=['result', 'filters'])
# Join 2 resultant dataframes together
df = df_result.join(df_filters).drop('filters', axis=1)
# Join fields in `sharedWithUsers`
df['sharedWithUsers'] = df['sharedWithUsers'].str.join('|')
```
**Result:**
```
print(df)
id title sharedWithOrganization ownerId sharedWithUsers field comparator value
0 1000 Fishing Team View True 324425 1223|w2qee3 tag5 == fishing
1 2000 Farming Team View False 00000 00000|11111 tag5 !@ farming
``` |
69,506,993 | Json code: Below is the JSON data format which I am pulling from the site using the API
```py
response={
    "result": [
        {
            "id": "1000",
            "title": "Fishing Team View",
            "sharedWithOrganization": True,
            "ownerId": "324425",
            "sharedWithUsers": ["1223", "w2qee3"],
            "filters": [
                {
                    "field": "tag5",
                    "comparator": "==",
                    "value": "fishing"
                }
            ]
        },
        {
            "id": "2000",
            "title": "Farming Team View",
            "sharedWithOrganization": False,
            "ownerId": "00000",
            "sharedWithUsers": [
                "00000",
                "11111"
            ],
            "filters": [
                {
                    "field": "tag5",
                    "comparator": "!@",
                    "value": "farming"
                }
            ]
        }
    ]
}
```
Python Code: I am using the code below to parse the JSON data, but I am unable to split the filters into different columns; specifically, inside each filter I want to make `field`, `comparator`, and `value` separate columns.
```py
import win32com.client as win32  # needed for the Dispatch call below

records = []
for data in response['result']:
    id = data['id']
    title = data['title']
    sharedWithOrganization = data['sharedWithOrganization']
    ownerId = data['ownerId']
    sharedWithUsers = '|'.join(data['sharedWithUsers'])
    filters = data['filters']
    print(filters)
    records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible = True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws = wb.Worksheets(1)
ws.Name = "Get_Views"
#assigning header value
header_labels = ('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
    ws.Cells(1, index+1).Value = val
row_tracker = 2
column_size = len(header_labels)
for row in records:
    ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
    row_tracker += 1
```
[](https://i.stack.imgur.com/nykar.png)
I am doing an API pull and getting data in this format. I am passing that .json format to Python to get the data into Excel, but I am unable to split the filters list data into separate columns. Can you please help me with this? | 2021/10/09 | [
"https://Stackoverflow.com/questions/69506993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904548/"
] | Using [`DictWriter`](https://docs.python.org/3/library/csv.html#csv.DictWriter) you can write the required columns into a `.csv` file, which you can open in Excel.
**Code:**
```py
from csv import DictWriter
response = { ... }
with open("result.csv", "w", newline="") as f:
    writer = DictWriter(
        f,
        ("id", "title", "sharedWithOrganization", "ownerId", "sharedWithUsers"),
        extrasaction="ignore"
    )
    writer.writeheader()
    for obj in response["result"]:
        writer.writerow({**obj, "sharedWithUsers": "|".join(obj["sharedWithUsers"])})
```
Or you can use [`csv.writer`](https://docs.python.org/3/library/csv.html#csv.writer) *(it will consume a bit less memory, as it doesn't copy all the fields)*:
```py
import csv
from operator import itemgetter
response = { ... }
keys = "id", "title", "sharedWithOrganization", "ownerId", "sharedWithUsers"
getter = itemgetter(*keys)
with open("result.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(keys)
    for obj in response["result"]:
        row = getter(obj)
        writer.writerow(row[:-1] + ("|".join(row[-1]),))
``` | You can use [`pd.json_normalize`](https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.json_normalize.html) to normalize the JSON: specify different `record_path=` parameters to read to different depths for the fields under the `result` tag and the `filters` tag, respectively.
Then, join the 2 resultant dataframes together, as follows:
```
import pandas as pd

# read fields under tag `result`
df_result = pd.json_normalize(response, record_path=['result'])
# read fields under tag `filters` within `result`
df_filters = pd.json_normalize(response, record_path=['result', 'filters'])
# Join 2 resultant dataframes together
df = df_result.join(df_filters).drop('filters', axis=1)
# Join fields in `sharedWithUsers`
df['sharedWithUsers'] = df['sharedWithUsers'].str.join('|')
```
**Result:**
```
print(df)
id title sharedWithOrganization ownerId sharedWithUsers field comparator value
0 1000 Fishing Team View True 324425 1223|w2qee3 tag5 == fishing
1 2000 Farming Team View False 00000 00000|11111 tag5 !@ farming
``` |
69,506,993 | Json code: Below is the JSON data format which I am pulling from the site using the API
```py
response={
    "result": [
        {
            "id": "1000",
            "title": "Fishing Team View",
            "sharedWithOrganization": True,
            "ownerId": "324425",
            "sharedWithUsers": ["1223", "w2qee3"],
            "filters": [
                {
                    "field": "tag5",
                    "comparator": "==",
                    "value": "fishing"
                }
            ]
        },
        {
            "id": "2000",
            "title": "Farming Team View",
            "sharedWithOrganization": False,
            "ownerId": "00000",
            "sharedWithUsers": [
                "00000",
                "11111"
            ],
            "filters": [
                {
                    "field": "tag5",
                    "comparator": "!@",
                    "value": "farming"
                }
            ]
        }
    ]
}
```
Python Code: I am using the code below to parse the JSON data, but I am unable to split the filters into different columns; specifically, inside each filter I want to make `field`, `comparator`, and `value` separate columns.
```py
import win32com.client as win32  # needed for the Dispatch call below

records = []
for data in response['result']:
    id = data['id']
    title = data['title']
    sharedWithOrganization = data['sharedWithOrganization']
    ownerId = data['ownerId']
    sharedWithUsers = '|'.join(data['sharedWithUsers'])
    filters = data['filters']
    print(filters)
    records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible = True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws = wb.Worksheets(1)
ws.Name = "Get_Views"
#assigning header value
header_labels = ('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
    ws.Cells(1, index+1).Value = val
row_tracker = 2
column_size = len(header_labels)
for row in records:
    ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
    row_tracker += 1
```
[](https://i.stack.imgur.com/nykar.png)
I am doing an API pull and getting data in this format. I am passing that .json format to Python to get the data into Excel, but I am unable to split the filters list data into separate columns. Can you please help me with this? | 2021/10/09 | [
"https://Stackoverflow.com/questions/69506993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904548/"
] | Using [`DictWriter`](https://docs.python.org/3/library/csv.html#csv.DictWriter) you can write the required columns into a `.csv` file, which you can open in Excel.
**Code:**
```py
from csv import DictWriter
response = { ... }
with open("result.csv", "w", newline="") as f:
    writer = DictWriter(
        f,
        ("id", "title", "sharedWithOrganization", "ownerId", "sharedWithUsers"),
        extrasaction="ignore"
    )
    writer.writeheader()
    for obj in response["result"]:
        writer.writerow({**obj, "sharedWithUsers": "|".join(obj["sharedWithUsers"])})
```
Or you can use [`csv.writer`](https://docs.python.org/3/library/csv.html#csv.writer) *(it will consume a bit less memory, as it doesn't copy all the fields)*:
```py
import csv
from operator import itemgetter
response = { ... }
keys = "id", "title", "sharedWithOrganization", "ownerId", "sharedWithUsers"
getter = itemgetter(*keys)
with open("result.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(keys)
    for obj in response["result"]:
        row = getter(obj)
        writer.writerow(row[:-1] + ("|".join(row[-1]),))
``` | The code below generates `out.csv` which can be opened by Excel.
Note that the code **does not require** any external library.
```
import csv
response={
    "result": [
        {
            "id": "1000",
            "title": "Fishing Team View",
            "sharedWithOrganization": True,
            "ownerId": "324425",
            "sharedWithUsers": ["1223", "w2qee3"],
            "filters": [
                {
                    "field": "tag5",
                    "comparator": "==",
                    "value": "fishing"
                }
            ]
        },
        {
            "id": "2000",
            "title": "Farming Team View",
            "sharedWithOrganization": False,
            "ownerId": "00000",
            "sharedWithUsers": [
                "00000",
                "11111"
            ],
            "filters": [
                {
                    "field": "tag5",
                    "comparator": "!@",
                    "value": "farming"
                }
            ]
        }
    ]
}
fields = ['id', 'title', 'sharedWithOrganization', 'ownerId', 'sharedWithUsers']
data = []
for entry in response['result']:
    data.append(['|'.join(entry[f]) if isinstance(entry[f], list) else entry[f] for f in fields])
with open('out.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(fields)
    for line in data:
        writer.writerow(line)
```
out.csv
```
id,title,sharedWithOrganization,ownerId,sharedWithUsers
1000,Fishing Team View,True,324425,1223|w2qee3
2000,Farming Team View,False,00000,00000|11111
``` |
69,227,105 | I have a dataset like the one below: multiple groups, completed values, with over 200 columns (denoting days)
Input
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | GROUP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/08/2021 | 100% | 75% | 60% | 50% | 40% | 30% | 0% | A |
| 08/08/2021 | 100% | 95% | 80% | 60% | 30% | 10% | 0% | A |
| 15/08/2021 | 100% | 85% | 60% | 40% | 20% | 10% | 5% | A |
| 01/08/2021 | 100% | 70% | 65% | 55% | 45% | 35% | 0% | B |
| 08/08/2021 | 100% | 90% | 80% | 60% | 30% | 10% | 0% | B |
| 15/08/2021 | 100% | 95% | 60% | 40% | 30% | 20% | 5% | B |
Now, I have an incomplete dataset like the one below. I would like to compute a similarity metric for each group and state which series is most similar.
For the purpose of similarity, I am using CORREL in Excel at the moment, and in case of a tie, I am using the latest one. For comparison, only complete values in both groups are compared (i.e. missing values in the expected output are not used for the similarity metric calculation).
This is a VBA macro which I am shifting to Python (either pandas or PySpark).
I am confused about how best to proceed. Any other similarity metric can be tried out too. Thanks
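For what it's worth, a minimal pandas sketch of that idea; the frame names `complete`/`partial`, the day columns `'1'..'7'`, numeric values, and `Series` parsed to datetime are all assumptions for illustration, not part of the original macro:

```
import numpy as np
import pandas as pd

day_cols = [str(i) for i in range(1, 8)]  # with the real data this would be 1..200+

def best_match(partial_row, complete_group):
    """Most similar complete series: highest correlation, ties broken by latest week."""
    best_score, best_week = -np.inf, None
    for _, cand in complete_group.iterrows():
        both = partial_row[day_cols].notna() & cand[day_cols].notna()
        if both.sum() < 2:  # correlation needs at least two shared points
            continue
        score = partial_row[day_cols][both].astype(float).corr(
            cand[day_cols][both].astype(float))
        if best_week is None or score > best_score or \
           (score == best_score and cand['Series'] > best_week):
            best_score, best_week = score, cand['Series']
    return best_score, best_week

# e.g. for one incomplete row of group A:
# score, week = best_match(partial.iloc[0], complete[complete['GROUP'] == 'A'])
```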
Expected Output
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Similarity\_Score | Similarity\_Week | Group |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/09/2021 | | | | | 39% | 28% | 0% | 0.99 | 01/08/2021 | A |
| 08/09/2021 | | | 62% | 44% | 21% | 12% | 7% | 0.99 | 15/08/2021 | A |
| 15/09/2021 | | | | | | 8% | 0% | 1.00 | 08/08/2021 | A |
| 15/09/2021 | | | | | 30% | 19% | 0% | 1.00 | 15/08/2021 | B | | 2021/09/17 | [
"https://Stackoverflow.com/questions/69227105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4261149/"
] | Compatibility issue.
Your `chrome driver` version is `94.0.4606.41`, and this `driver` version supports `Chrome browser 94`.
Please do any one of the following.
* Update the `chrome browser` version to `94`
* Degrade the `driver` version to `93` (Download the `93` version from here *<https://chromedriver.storage.googleapis.com/index.html?path=93.0.4577.63/>*) | I think there is another way to solve this problem. Uninstall protractor and reinstall it and see the magic.
```
npm uninstall protractor
npm install protractor
```
[Session not created: This version of ChromeDriver only supports](https://medium.com/@prshannoct/session-not-created-this-version-of-chromedriver-only-supports-chrome-version-chromedrive-issue-3f17fd4e6a94) |
69,227,105 | I have a dataset like the one below: multiple groups, completed values, with over 200 columns (denoting days)
Input
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | GROUP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/08/2021 | 100% | 75% | 60% | 50% | 40% | 30% | 0% | A |
| 08/08/2021 | 100% | 95% | 80% | 60% | 30% | 10% | 0% | A |
| 15/08/2021 | 100% | 85% | 60% | 40% | 20% | 10% | 5% | A |
| 01/08/2021 | 100% | 70% | 65% | 55% | 45% | 35% | 0% | B |
| 08/08/2021 | 100% | 90% | 80% | 60% | 30% | 10% | 0% | B |
| 15/08/2021 | 100% | 95% | 60% | 40% | 30% | 20% | 5% | B |
Now, I have an incomplete dataset like the one below. I would like to compute a similarity metric for each group and state which series is most similar.
For the purpose of similarity, I am using CORREL in Excel at the moment, and in case of a tie, I am using the latest one. For comparison, only complete values in both groups are compared (i.e. missing values in the expected output are not used for the similarity metric calculation).
This is a VBA macro which I am shifting to Python (either pandas or PySpark).
I am confused about how best to proceed. Any other similarity metric can be tried out too. Thanks
Expected Output
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Similarity\_Score | Similarity\_Week | Group |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/09/2021 | | | | | 39% | 28% | 0% | 0.99 | 01/08/2021 | A |
| 08/09/2021 | | | 62% | 44% | 21% | 12% | 7% | 0.99 | 15/08/2021 | A |
| 15/09/2021 | | | | | | 8% | 0% | 1.00 | 08/08/2021 | A |
| 15/09/2021 | | | | | 30% | 19% | 0% | 1.00 | 15/08/2021 | B | | 2021/09/17 | [
"https://Stackoverflow.com/questions/69227105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4261149/"
] | Compatibility issue.
Your `chrome driver` version is `94.0.4606.41`, and this `driver` version supports `Chrome browser 94`.
Please do any one of the following.
* Update the `chrome browser` version to `94`
* Degrade the `driver` version to `93` (Download the `93` version from here *<https://chromedriver.storage.googleapis.com/index.html?path=93.0.4577.63/>*) | If you're using a Mac, run
```
brew reinstall chromedriver
``` |
69,227,105 | I have a dataset like the one below: multiple groups, completed values, with over 200 columns (denoting days)
Input
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | GROUP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/08/2021 | 100% | 75% | 60% | 50% | 40% | 30% | 0% | A |
| 08/08/2021 | 100% | 95% | 80% | 60% | 30% | 10% | 0% | A |
| 15/08/2021 | 100% | 85% | 60% | 40% | 20% | 10% | 5% | A |
| 01/08/2021 | 100% | 70% | 65% | 55% | 45% | 35% | 0% | B |
| 08/08/2021 | 100% | 90% | 80% | 60% | 30% | 10% | 0% | B |
| 15/08/2021 | 100% | 95% | 60% | 40% | 30% | 20% | 5% | B |
Now, I have an incomplete dataset like the one below. I would like to compute a similarity metric for each group and state which series is most similar.
For the purpose of similarity, I am using CORREL in Excel at the moment, and in case of a tie, I am using the latest one. For comparison, only complete values in both groups are compared (i.e. missing values in the expected output are not used for the similarity metric calculation).
This is a VBA macro which I am shifting to Python (either pandas or PySpark).
I am confused about how best to proceed. Any other similarity metric can be tried out too. Thanks
Expected Output
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Similarity\_Score | Similarity\_Week | Group |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/09/2021 | | | | | 39% | 28% | 0% | 0.99 | 01/08/2021 | A |
| 08/09/2021 | | | 62% | 44% | 21% | 12% | 7% | 0.99 | 15/08/2021 | A |
| 15/09/2021 | | | | | | 8% | 0% | 1.00 | 08/08/2021 | A |
| 15/09/2021 | | | | | 30% | 19% | 0% | 1.00 | 15/08/2021 | B | | 2021/09/17 | [
"https://Stackoverflow.com/questions/69227105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4261149/"
] | Compatibility issue.
Your `chrome driver` version is `94.0.4606.41`, and this `driver` version supports `Chrome browser 94`.
Please do any one of the following.
* Update the `chrome browser` version to `94`
* Degrade the `driver` version to `93` (Download the `93` version from here *<https://chromedriver.storage.googleapis.com/index.html?path=93.0.4577.63/>*) | This made me go crazy. I solved it like this; we are using the selenium npm module.
Run the code below and it will tell you what executable path you are using.
```
const { Builder, By, Key, util } = require("selenium-webdriver");
const chrome = require("selenium-webdriver/chrome");
console.log(chrome.getDefaultService().executable_);
```
I had installed chromedriver globally
```
npm i chromedriver -g
```
The executable path showed me it was using this and an older version.
Uninstalled this.
```
npm uninstall chromedriver -g
```
Now it started using the version I downloaded and added to my PATH.
Download the latest chromedriver from here.
<https://chromedriver.chromium.org/downloads>
Add it to your path in the .zshrc file.
```
export PATH=/Users/dave/SeleniumWebdrivers:$PATH
```
Drag your downloaded drivers into this folder. |
69,227,105 | I have a dataset like the one below: multiple groups, completed values, with over 200 columns (denoting days)
Input
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | GROUP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/08/2021 | 100% | 75% | 60% | 50% | 40% | 30% | 0% | A |
| 08/08/2021 | 100% | 95% | 80% | 60% | 30% | 10% | 0% | A |
| 15/08/2021 | 100% | 85% | 60% | 40% | 20% | 10% | 5% | A |
| 01/08/2021 | 100% | 70% | 65% | 55% | 45% | 35% | 0% | B |
| 08/08/2021 | 100% | 90% | 80% | 60% | 30% | 10% | 0% | B |
| 15/08/2021 | 100% | 95% | 60% | 40% | 30% | 20% | 5% | B |
Now, I have an incomplete dataset like the one below. I would like to compute a similarity metric for each group and state which series is most similar.
For the purpose of similarity, I am using CORREL in Excel at the moment, and in case of a tie, I am using the latest one. For comparison, only complete values in both groups are compared (i.e. missing values in the expected output are not used for the similarity metric calculation).
This is a VBA macro which I am shifting to Python (either pandas or PySpark).
I am confused about how best to proceed. Any other similarity metric can be tried out too. Thanks
Expected Output
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Similarity\_Score | Similarity\_Week | Group |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/09/2021 | | | | | 39% | 28% | 0% | 0.99 | 01/08/2021 | A |
| 08/09/2021 | | | 62% | 44% | 21% | 12% | 7% | 0.99 | 15/08/2021 | A |
| 15/09/2021 | | | | | | 8% | 0% | 1.00 | 08/08/2021 | A |
| 15/09/2021 | | | | | 30% | 19% | 0% | 1.00 | 15/08/2021 | B | | 2021/09/17 | [
"https://Stackoverflow.com/questions/69227105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4261149/"
] | This error occurred because you have different versions of Google Chrome and the driver. It is better to update the driver, rather than install an old version of Chrome, since the browser will be constantly updated in the future (why would you want to use outdated technologies?).
I usually use:
```
ChromeDriverManager
```
because at any time, without going to the web driver website, you can simply download the driver with the following command:
```
driver = webdriver.Chrome(ChromeDriverManager().install())
```
Further, using the path given by this command, you can use the freshly installed version:
```
driver = webdriver.Chrome(executable_path=r"C:\path_to_chrome_driver_executable\chromedriver.exe")
``` | I think there is another way to solve this problem. Uninstall protractor and reinstall it and see the magic.
```
npm uninstall protractor
npm install protractor
```
[Session not created: This version of ChromeDriver only supports](https://medium.com/@prshannoct/session-not-created-this-version-of-chromedriver-only-supports-chrome-version-chromedrive-issue-3f17fd4e6a94) |
69,227,105 | I have a dataset like the one below: multiple groups, completed values, with over 200 columns (denoting days)
Input
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | GROUP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/08/2021 | 100% | 75% | 60% | 50% | 40% | 30% | 0% | A |
| 08/08/2021 | 100% | 95% | 80% | 60% | 30% | 10% | 0% | A |
| 15/08/2021 | 100% | 85% | 60% | 40% | 20% | 10% | 5% | A |
| 01/08/2021 | 100% | 70% | 65% | 55% | 45% | 35% | 0% | B |
| 08/08/2021 | 100% | 90% | 80% | 60% | 30% | 10% | 0% | B |
| 15/08/2021 | 100% | 95% | 60% | 40% | 30% | 20% | 5% | B |
Now, I have an incomplete dataset like the one below. I would like to compute a similarity metric for each group and state which series is most similar.
For the purpose of similarity, I am using CORREL in Excel at the moment, and in case of a tie, I am using the latest one. For comparison, only complete values in both groups are compared (i.e. missing values in the expected output are not used for the similarity metric calculation).
This is a VBA macro which I am shifting to Python (either pandas or PySpark).
I am confused about how best to proceed. Any other similarity metric can be tried out too. Thanks
Expected Output
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Similarity\_Score | Similarity\_Week | Group |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/09/2021 | | | | | 39% | 28% | 0% | 0.99 | 01/08/2021 | A |
| 08/09/2021 | | | 62% | 44% | 21% | 12% | 7% | 0.99 | 15/08/2021 | A |
| 15/09/2021 | | | | | | 8% | 0% | 1.00 | 08/08/2021 | A |
| 15/09/2021 | | | | | 30% | 19% | 0% | 1.00 | 15/08/2021 | B | | 2021/09/17 | [
"https://Stackoverflow.com/questions/69227105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4261149/"
] | This error occurred because you have different versions of Google Chrome and the driver. It is better to update the driver, rather than install an old version of Chrome, since the browser will be constantly updated in the future (why would you want to use outdated technologies?).
I usually use:
```
ChromeDriverManager
```
because at any time, without going to the web driver website, you can simply download the driver with the following command:
```
driver = webdriver.Chrome(ChromeDriverManager().install())
```
Further, using the path given by this command, you can use the freshly installed version:
```
driver = webdriver.Chrome(executable_path=r"C:\path_to_chrome_driver_executable\chromedriver.exe")
``` | If you're using a Mac, run
```
brew reinstall chromedriver
``` |
69,227,105 | I have a dataset like the one below: multiple groups, completed values, with over 200 columns (denoting days)
Input
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | GROUP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/08/2021 | 100% | 75% | 60% | 50% | 40% | 30% | 0% | A |
| 08/08/2021 | 100% | 95% | 80% | 60% | 30% | 10% | 0% | A |
| 15/08/2021 | 100% | 85% | 60% | 40% | 20% | 10% | 5% | A |
| 01/08/2021 | 100% | 70% | 65% | 55% | 45% | 35% | 0% | B |
| 08/08/2021 | 100% | 90% | 80% | 60% | 30% | 10% | 0% | B |
| 15/08/2021 | 100% | 95% | 60% | 40% | 30% | 20% | 5% | B |
Now, I have an incomplete dataset like the one below. I would like to compute a similarity metric for each group and state which series is most similar.
For the purpose of similarity, I am using CORREL in Excel at the moment, and in case of a tie, I am using the latest one. For comparison, only complete values in both groups are compared (i.e. missing values in the expected output are not used for the similarity metric calculation).
This is a VBA macro which I am shifting to Python (either pandas or PySpark).
I am confused about how best to proceed. Any other similarity metric can be tried out too. Thanks
Expected Output
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Similarity\_Score | Similarity\_Week | Group |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/09/2021 | | | | | 39% | 28% | 0% | 0.99 | 01/08/2021 | A |
| 08/09/2021 | | | 62% | 44% | 21% | 12% | 7% | 0.99 | 15/08/2021 | A |
| 15/09/2021 | | | | | | 8% | 0% | 1.00 | 08/08/2021 | A |
| 15/09/2021 | | | | | 30% | 19% | 0% | 1.00 | 15/08/2021 | B | | 2021/09/17 | [
"https://Stackoverflow.com/questions/69227105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4261149/"
] | This error occurred because you have different versions of Google Chrome and the driver. It is better to update the driver, rather than install an old version of Chrome, since the browser will be constantly updated in the future (why would you want to use outdated technologies?).
I usually use:
```
ChromeDriverManager
```
because at any time, without going to the web driver website, you can simply download the driver with the following command:
```
driver = webdriver.Chrome(ChromeDriverManager().install())
```
Further, using the path given by this command, you can use the freshly installed version:
```
driver = webdriver.Chrome(executable_path=r"C:\path_to_chrome_driver_executable\chromedriver.exe")
``` | This made me go crazy. I solved it like this; we are using the selenium npm module.
Run the code below and it will tell you what executable path you are using.
```
const { Builder, By, Key, util } = require("selenium-webdriver");
const chrome = require("selenium-webdriver/chrome");
console.log(chrome.getDefaultService().executable_);
```
I had installed chromedriver globally
```
npm i chromedriver -g
```
The executable path showed me it was using this and an older version.
Uninstalled this.
```
npm uninstall chromedriver -g
```
Now it started using the version I downloaded and added to my PATH.
Download the latest chromedriver from here.
<https://chromedriver.chromium.org/downloads>
Add it to your path in the .zshrc file.
```
export PATH=/Users/dave/SeleniumWebdrivers:$PATH
```
Drag your downloaded drivers into this folder. |
69,227,105 | I have a dataset like the one below: multiple groups, completed values, with over 200 columns (denoting days)
Input
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | GROUP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/08/2021 | 100% | 75% | 60% | 50% | 40% | 30% | 0% | A |
| 08/08/2021 | 100% | 95% | 80% | 60% | 30% | 10% | 0% | A |
| 15/08/2021 | 100% | 85% | 60% | 40% | 20% | 10% | 5% | A |
| 01/08/2021 | 100% | 70% | 65% | 55% | 45% | 35% | 0% | B |
| 08/08/2021 | 100% | 90% | 80% | 60% | 30% | 10% | 0% | B |
| 15/08/2021 | 100% | 95% | 60% | 40% | 30% | 20% | 5% | B |
Now, I have an incomplete dataset like the one below. I would like to compute a similarity metric for each group and state which series is most similar.
For the purpose of similarity, I am using CORREL in Excel at the moment, and in case of a tie, I am using the latest one. For comparison, only complete values in both groups are compared (i.e. missing values in the expected output are not used for the similarity metric calculation).
This is a VBA macro which I am shifting to Python (either pandas or PySpark).
I am confused about how best to proceed. Any other similarity metric can be tried out too. Thanks
Expected Output
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Similarity\_Score | Similarity\_Week | Group |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/09/2021 | | | | | 39% | 28% | 0% | 0.99 | 01/08/2021 | A |
| 08/09/2021 | | | 62% | 44% | 21% | 12% | 7% | 0.99 | 15/08/2021 | A |
| 15/09/2021 | | | | | | 8% | 0% | 1.00 | 08/08/2021 | A |
| 15/09/2021 | | | | | 30% | 19% | 0% | 1.00 | 15/08/2021 | B | | 2021/09/17 | [
"https://Stackoverflow.com/questions/69227105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4261149/"
] | If you're using a Mac, run
```
brew reinstall chromedriver
``` | I think there is another way to solve this problem. Uninstall protractor and reinstall it and see the magic.
```
npm uninstall protractor
npm install protractor
```
[Session not created: This version of ChromeDriver only supports](https://medium.com/@prshannoct/session-not-created-this-version-of-chromedriver-only-supports-chrome-version-chromedrive-issue-3f17fd4e6a94) |
69,227,105 | I have a dataset like the one below: multiple groups, completed values, with over 200 columns (denoting days)
Input
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | GROUP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/08/2021 | 100% | 75% | 60% | 50% | 40% | 30% | 0% | A |
| 08/08/2021 | 100% | 95% | 80% | 60% | 30% | 10% | 0% | A |
| 15/08/2021 | 100% | 85% | 60% | 40% | 20% | 10% | 5% | A |
| 01/08/2021 | 100% | 70% | 65% | 55% | 45% | 35% | 0% | B |
| 08/08/2021 | 100% | 90% | 80% | 60% | 30% | 10% | 0% | B |
| 15/08/2021 | 100% | 95% | 60% | 40% | 30% | 20% | 5% | B |
Now, I have an incomplete dataset like the one below. I would like to compute a similarity metric for each group and state which series is most similar.
For the purpose of similarity, I am using CORREL in Excel at the moment, and in case of a tie, I am using the latest one. For comparison, only complete values in both groups are compared (i.e. missing values in the expected output are not used for the similarity metric calculation).
This is a VBA macro which I am shifting to Python (either pandas or PySpark).
I am confused about how best to proceed. Any other similarity metric can be tried out too. Thanks
Expected Output
| Series | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Similarity\_Score | Similarity\_Week | Group |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 01/09/2021 | | | | | 39% | 28% | 0% | 0.99 | 01/08/2021 | A |
| 08/09/2021 | | | 62% | 44% | 21% | 12% | 7% | 0.99 | 15/08/2021 | A |
| 15/09/2021 | | | | | | 8% | 0% | 1.00 | 08/08/2021 | A |
| 15/09/2021 | | | | | 30% | 19% | 0% | 1.00 | 15/08/2021 | B | | 2021/09/17 | [
"https://Stackoverflow.com/questions/69227105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4261149/"
] | This made me go crazy. I solved it like this; we are using the selenium npm module.
Run the code below and it will tell you what executable path you are using.
```
const { Builder, By, Key, util } = require("selenium-webdriver");
const chrome = require("selenium-webdriver/chrome");
console.log(chrome.getDefaultService().executable_);
```
I had installed chromedriver globally
```
npm i chromedriver -g
```
The executable path showed me it was using this and an older version.
Uninstalled this.
```
npm uninstall chromedriver -g
```
Now it started using the version I downloaded and added to my PATH.
Download the latest chromedriver from here.
<https://chromedriver.chromium.org/downloads>
Add it to your path in the .zshrc file.
```
export PATH=/Users/dave/SeleniumWebdrivers:$PATH
```
Drag your downloaded drivers into this folder. | I think there is another way to solve this problem. Uninstall protractor and reinstall it and see the magic.
```
npm uninstall protractor
npm install protractor
```
[Session not created: This version of ChromeDriver only supports](https://medium.com/@prshannoct/session-not-created-this-version-of-chromedriver-only-supports-chrome-version-chromedrive-issue-3f17fd4e6a94) |
47,867,995 | I am using AWS, boto3 and PyCharm to write a very simple program in Python that compares a face from one image to a face in another image.
I wrote the following simple source code:
```
import boto3
s3 = boto3.resource('s3',
                    aws_access_key_id = "xxx",
                    aws_secret_access_key = "yyy")
BUCKET = "eyeglasses-images"
KEY_SOURCE = "Foucault.jpg"
KEY_TARGET = "Ricoeur.jpg"
def compare_faces(bucket, key, bucket_target, key_target, threshold=80, region="eu-west-1"):
    rekognition = boto3.client("rekognition", region)
    response = rekognition.compare_faces(
        SourceImage={
            "S3Object": {
                "Bucket": bucket,
                "Name": key,
            }
        },
        TargetImage={
            "S3Object": {
                "Bucket": bucket_target,
                "Name": key_target,
            }
        },
        SimilarityThreshold=threshold,
    )
    return response['SourceImageFace'], response['FaceMatches']
source_face, matches = compare_faces(BUCKET, KEY_SOURCE, BUCKET, KEY_TARGET)
# the main source face
print "Source Face ({Confidence}%)".format(**source_face)
# one match for each target face
for match in matches:
    print "Target Face ({Confidence}%)".format(**match['Face'])
    print " Similarity : {}%".format(match['Similarity'])
```
However I get the following error:
```
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
```
(Obviously in the place of xxx and yyy I am using the real keys)
What is the problem and how can I fix this? | 2017/12/18 | [
"https://Stackoverflow.com/questions/47867995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9024698/"
] | You can use a session first, e.g.:
```
session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN,
)
```
and then get your resource
```
s3 = session.resource('s3')
```
Note that aws\_session\_token is optional and will be used if you use STS to get temp sessions.
Just saying
```
s3 = boto3.client('s3')
s3 = boto3.resource('s3')
```
will use default creds in `~/.aws/credentials` file. So it cannot be used in your case.
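For reference, that file normally looks like this (standard AWS layout; the `default` profile is what boto3 falls back to):

```
[default]
aws_access_key_id = xxx
aws_secret_access_key = yyy
```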
It would be good if you can directly use -
```
s3 = boto3.client(
    's3',
    # Hard coded strings as credentials, not recommended.
    aws_access_key_id='xx',
    aws_secret_access_key='yy'
)
```
For more details refer - [boto3 configuration documentation](http://boto3.readthedocs.io/en/latest/guide/configuration.html#configuring-credentials) | The issue is in the line: `rekognition = boto3.client("rekognition", region)`
Because you did not specify the keys here, boto3 is trying to find the credentials file in ~/.aws. Update this to use the `session` object as mentioned by Aniket. Your code should change to:
```
session = boto3.session.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY
)
rekognition = session.client("rekognition", region)
``` |
67,985,851 | here is the code (it works fine up to entering the username and password, but gives an error when clicking the "submit"/"play" button):-
```
from selenium import webdriver # Used to import the driver
from selenium.webdriver import ActionChains
#from pynput.mouse import Button, Controller
import time
#mouse = Controller()
def bot(usr,pas):
    br=webdriver.Firefox()
    br.get("https://tankionline.com/play/")
    time.sleep(10)
    br.find_element_by_link_text("Sign in").click()
    time.sleep(2)
    #user=br.find_element_by_css_selector("username")
    username = br.find_element_by_id("username")
    username.send_keys(usr)
    password=br.find_element_by_id("password")
    password.send_keys(pas)
    ActionChains(webdriver).click("<span>Play </span>").perform()
#main driver code
bot("Ricochet_Master","********")
```
AND HERE IS THE ERROR (is there an alternative to the automated button click to avoid this error?):-
```
Traceback (most recent call last):
  File "/home/aaryan/selenium auto login 2.py", line 24, in <module>
    bot("Ricochet_Master","********")
  File "/home/aaryan/selenium auto login 2.py", line 20, in bot
    ActionChains(webdriver).click("<span>Play </span>").perform()
  File "/home/aaryan/.local/lib/python3.8/site-packages/selenium/webdriver/common/action_chains.py", line 72, in __init__
    if self._driver.w3c:
AttributeError: module 'selenium.webdriver' has no attribute 'w3c'
``` | 2021/06/15 | [
"https://Stackoverflow.com/questions/67985851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16223398/"
] | You can use `df.apply()` on each column and then for each column series, use [`np.unique()`](https://numpy.org/doc/stable/reference/generated/numpy.unique.html) to get the sorted unique item list (with empty string skipped) and then use `pd.Series` to recreate the columns.
```
import numpy as np
import pandas as pd
df.apply(lambda x: pd.Series(np.unique(x[x!=''])))
```
Output:
```
code des price year brand
0 code1 des1 price1 year1 brand1
1 code2 des2 price2 year2 brand2
2 code3 des3 price3 year3 brand3
3 code4 des4 price4 year4 brand4
4 code5 des5 price5 year5 brand5
5 code6 des6 price6 year6 brand6
``` | Is this what you are looking for? First fill the empty spaces with `np.nan`, then drop the NA rows using apply
```
import numpy as np
import pandas as pd

df = df.replace(r'^\s*$', np.nan, regex=True)
df.apply(lambda x: pd.Series(x.dropna().values))
code des price year brand
0 code1 des1 price1 year1 brand1
1 code2 des3 price2 year3 brand2
2 code3 des5 price3 year5 brand3
3 code4 des2 price4 year2 brand4
4 code5 des4 price5 year4 brand5
5 code6 des6 price6 year6 brand6
6 code2 NaN NaN NaN NaN
7 code4 NaN NaN NaN NaN
8 code6 NaN NaN NaN NaN
``` |
18,827,951 | With Python 2.7 I am reading as unicode and writing as utf-16-le. Most characters are correctly interpreted. But some are not, for example, u'\u810a', also known as [unichr(33034)](http://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint=810A&useutf8=false). The following code does not write correctly:
```
import codecs
with open('temp.txt','w') as temp:
    temp.write(codecs.BOM_UTF16_LE)
    text = unichr(33034) # text = u'\u810a'
    temp.write(text.encode('utf-16-le'))
```
But either of these things, when substituted above, makes the code work.
1. unichr(33033) and unichr(33035) work correctly.
2. 'utf-8' encoding (without BOM, [byte-order mark](http://en.wikipedia.org/wiki/Byte_order_mark)).
How can I recognize characters that won't write correctly, and how can I write a 'utf-16-le' encoded file with BOM that either prints these characters or some replacement? | 2013/09/16 | [
"https://Stackoverflow.com/questions/18827951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2327328/"
] | You are opening the file in text mode, which means that [line-break characters/bytes will be translated to the local convention](http://docs.python.org/2/library/functions.html#open). Unfortunately the character you are trying to write includes a byte, `0A`, that is interpreted as a line break and does not make it to the file correctly.
Open the file in binary mode instead:
```
open('temp.txt','wb')
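# A fuller sketch of the same fix (the question's code, just with 'wb' so the
# 0x0A byte is written untouched; any character whose UTF-16-LE encoding
# contains the byte 0x0A is affected in text mode):
import codecs
with open('temp.txt', 'wb') as temp:
    temp.write(codecs.BOM_UTF16_LE)
    temp.write(unichr(33034).encode('utf-16-le'))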
``` | You're already using the codecs library. When working with that file, you should swap out using open() with codecs.open() to transparently handle encoding.
```
import codecs
with codecs.open('temp.txt', 'w', encoding='utf-16-le') as temp:
    temp.write(unichr(33033))
    temp.write(unichr(33034))
    temp.write(unichr(33035))
```
If you have a problem after that, you might have an issue with your viewer, not your Python script. |
18,827,951 | With Python 2.7 I am reading as unicode and writing as utf-16-le. Most characters are correctly interpreted. But some are not, for example, u'\u810a', also known as [unichr(33034)](http://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint=810A&useutf8=false). The following code does not write correctly:
```
import codecs
with open('temp.txt','w') as temp:
    temp.write(codecs.BOM_UTF16_LE)
    text = unichr(33034) # text = u'\u810a'
    temp.write(text.encode('utf-16-le'))
```
But either of these things, when substituted above, makes the code work.
1. unichr(33033) and unichr(33035) work correctly.
2. 'utf-8' encoding (without BOM, [byte-order mark](http://en.wikipedia.org/wiki/Byte_order_mark)).
How can I recognize characters that won't write correctly, and how can I write a 'utf-16-le' encoded file with BOM that either prints these characters or some replacement? | 2013/09/16 | [
"https://Stackoverflow.com/questions/18827951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2327328/"
] | @Joni's answer is the root of the problem, but if you use `codecs.open` instead, it always opens in binary mode, even if not specified. Using the `utf16` codec also automatically writes the BOM using native endian-ness:
```
import codecs
with codecs.open('temp.txt','w','utf16') as temp:
    temp.write(u'\u810a')
```
Hex dump of `temp.txt`:
```
FF FE 0A 81
```
Reference: [codecs.open](http://docs.python.org/3/library/codecs.html?highlight=codecs.open#codecs.open) | You're already using the codecs library. When working with that file, you should swap out using open() with codecs.open() to transparently handle encoding.
```
import codecs
with codecs.open('temp.txt', 'w', encoding='utf-16-le') as temp:
    temp.write(unichr(33033))
    temp.write(unichr(33034))
    temp.write(unichr(33035))
```
If you have a problem after that, you might have an issue with your viewer, not your Python script. |