qid
int64 46k
74.7M
| question
stringlengths 54
37.8k
| date
stringlengths 10
10
| metadata
sequencelengths 3
3
| response_j
stringlengths 29
22k
| response_k
stringlengths 26
13.4k
| __index_level_0__
int64 0
17.8k
|
---|---|---|---|---|---|---|
63,940,481 | I know that my view is correct, however, when I put `{% url 'sub_detail' subc.id %}`in index.html it suddenly gives an error of no reverse match. Once I remove it index works fine. I tried changing id, but it did not change anything as it still gives the same error.
Thanks in advance.
views.py:
```
from django.shortcuts import render, redirect, reverse, get_object_or_404
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from .models import Slides, MainContent, SubContent
from .forms import TitleForm, SubContentForm, SlidesForm
def index(request):
slides = Slides.objects.all()
maincontent = MainContent.objects.all()
subcontent = SubContent.objects.all()
context = {
'slides': slides,
'maincontent': maincontent,
'subcontent': subcontent,
}
return render(request, 'home/index.html', context)
def sub_detail(request, subc_id):
subcontent = get_object_or_404(SubContent, pk=subc_id)
context = {
'subcontent': subcontent,
}
return render(request, 'home/sub_detail.html', context)
```
urls.py:
```
path('', views.index, name='home'),
path('<int:subc_id>/', views.sub_detail, name='sub_detail'),
path('manage/', views.manage, name='manage'),
path('slides/', views.slides, name='slides'),
path('title/', views.add_title, name='add_title'),
path('sub/', views.add_sub_content, name='add_sub_content'),
]
```
models.py:
```
class SubContent(models.Model):
class Meta:
verbose_name_plural = 'Sub Content'
title = models.CharField(max_length=28, null=False, blank=False)
image = models.ImageField()
description = models.TextField()
def __str__(self):
return self.title
```
index.html:
```
<a href="{% url 'sub_detail' subc.id %}">
<div class="col-md-6 section-index-img">
<img src="{{ sub.image.url }}" class="rounded img-fluid" alt=""/>
</div>
</a>
```
error code:
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/
Django Version: 3.1
Python Version: 3.8.3
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'allauth',
'allauth.account',
'allauth.socialaccount',
'home',
'crispy_forms',
'products']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Template error:
In template /workspace/hunt-interiors/templates/base.html, error at line 0
Reverse for 'sub_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['(?P<subc_id>[0-9]+)/$']
1 : <!doctype html>
2 : {% load static %}
3 :
4 :
5 : <html lang="en">
6 : <head>
7 :
8 : {% block meta %}
9 : <meta http-equiv="X-UA-Compatible" content="ie=edge">
10 : <meta charset="utf-8">
Traceback (most recent call last):
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/handlers/base.py", line 179, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/workspace/hunt-interiors/home/views.py", line 18, in index
return render(request, 'home/index.html', context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/shortcuts.py", line 19, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/loader.py", line 62, in render_to_string
return template.render(context, request)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 170, in render
return self._render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 162, in _render
return self.nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 162, in _render
return self.nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/defaulttags.py", line 211, in render
nodelist.append(node.render_annotated(context))
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/defaulttags.py", line 312, in render
return nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/defaulttags.py", line 312, in render
return nodelist.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/template/defaulttags.py", line 446, in render
url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/urls/base.py", line 87, in reverse
return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/urls/resolvers.py", line 685, in _reverse_with_prefix
raise NoReverseMatch(msg)
Exception Type: NoReverseMatch at /
Exception Value: Reverse for 'sub_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['(?P<subc_id>[0-9]+)/$']
``` | 2020/09/17 | [
"https://Stackoverflow.com/questions/63940481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14294805/"
] | Rather than many `if` statements, I just reproduced the `match` statement
with a repetition `$( ... )*` for all the available branches.
It seems to behave like the extensive `match` expression.
```rust
macro_rules! run_questions {
( $chosen_question: expr, $( $question_num: expr, $question_mod: expr ), * ) => {
match $chosen_question {
$($question_num => run_question($question_mod),)*
_ => {
println!("Question doesn't exist.");
}
}
};
}
``` | The error message explained:
```
macro_rules! run_questions {
($chosen_question: expr, $($question_num: expr, $question_mod: expr),*) => {{
```
In the above pattern you have a repetition with the `*` operator that involves variables `$question_num` and `$question_mod`
```
if $chosen_question == $question_num {
run_question($question_mod::solve);
}
```
In the corresponding code, you can't use `$question_num` and `$question_mod` directly: since they are repeated they potentially have more than one value and which one should the compiler use here? Instead, you need to tell the compiler to repeat the block of code that uses these variables. This is done by surrounding the repeated code block with `$()` and adding the `*` operator:
```
$(if $chosen_question == $question_num {
run_question($question_mod::solve);
})*
```
Although as pointed out by @prog-fh's answer, better to use a `match` in the macro, same as in the straight code:
```
match $chosen_question {
$($question_num => run_question ($question_mod::solve),)*
_ => println!("Question doesn't exist.")
};
``` | 16,256 |
70,699,537 | Given two arrays:
```
import numpy as np
array1 = np.array([7, 2, 4, 1, 20], dtype = "int")
array2 = np.array([2, 4, 4, 3, 10], dtype = "int")
```
and WITHOUT the use of any loop or if-else statement; I am trying to create a third array that will take the value equal to the sum of the
(corresponding) elements from array1 and array2 if the element from
array1 is bigger than the element from array2. If they are equal, the new element should have a value equal to their product. If the element from array2 is bigger, then the new element should be the difference between the element from array2 and array1
I have tried to implement this using python list and if-else statement with loop, but would like to know how to implement with numpy methods.
My implementation:
```
array1 = [7, 2, 4, 1, 20]
array2 = [2, 4, 4, 3, 10]
array3 = []
for i, j in enumerate(array1):
if j>array2[i]:
sum = j + array2[i]
array3.append(sum)
elif j==array2[i]:
product = j * array2[i]
array3.append(product)
else:
sub = array2[i] - j
array3.append(sub)
print("output: ",array3)
```
output: [9, 2, 16, 2, 30] | 2022/01/13 | [
"https://Stackoverflow.com/questions/70699537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12256590/"
] | You can use three mask arrays, like so:
```
>>> array3 = np.zeros(array1.shape, dtype=array1.dtype)
>>> a1_gt = array1 > array2 # for when element at array 1 is greater
>>> a2_gt = array1 < array2 # for when element at array 2 is greater
>>> a1_eq_a2 = array1 == array2 # for when elements at array 1 and array 2 are equal
>>> array3[a1_gt] = array1[a1_gt] + array2[a1_gt]
>>> array3[a2_gt] = array2[a2_gt] - array1[a2_gt]
>>> array3[a1_eq_a2] = array2[a1_eq_a2] * array1[a1_eq_a2]
>>> array3
array([ 9., 2., 16., 2., 30.])
``` | I renamed your arrays to `a` and `b`
```
print((a>b)*(a+b)+(a==b)*(a*b)+(a<b)*(b-a))
```
direct comparison between arrays give you boolean reasults that you can interpret as `0` or `1`. That means a simple multiplication can turn an element "on" or "off". So we can just piece everything together. | 16,257 |
39,372,494 | ```
#!/usr/bin/python
# -*- coding: utf-8 -*-
def to_weird_case(string):
lines = string.split()
new_word = ''
new_line = ''
for word in lines:
for item in word:
if word.index(item) %2 ==0:
item = item.upper()
new_word += item
else:
new_word += item
new_line = new_word +' '
return new_line
print to_weird_case('what do you mean')
```
I want to get `WhAt Do YoU MeAn`, instead I got `WhAtDoYoUMeAn`. I already add the line `new_line = new_word +' '`. where is my problem? | 2016/09/07 | [
"https://Stackoverflow.com/questions/39372494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6511336/"
] | First, you overwrite `new_line` with every iteration. Second, `new_word` is getting longer because you never "clear" it. Third, you add space to the end of the entire `new_line` and not after every new word (because of *Second*).
*See comments*
```
def to_weird_case(string):
lines = string.split()
new_line = ''
for word in lines:
new_word = '' # start new word from an empty string
for item in word:
if word.index(item) %2 ==0:
item = item.upper()
new_word += item
else:
new_word += item
print new_word
new_line = new_line + new_word + " " # add new word to the existing new line
return new_line
``` | It is correct that your code did not reset the value of `new_word` and you overwrote the `new_line` within the loop, but I'd like to share a next to one-liner solution with a regex:
```
import re
def to_weird_case(string):
return re.sub(r'(\S)(\S?)', lambda m: "{0}{1}".format(m.group(1).upper(), m.group(2)), string);
print to_weird_case('what do you mean')
```
See [Python demo](https://ideone.com/QIzRyE)
The `(\S)(\S?)` regex captures a non-whitespace into Group 1 and one or zero non-whitespaces into Group 2, and then, inside the `re.sub`, the Group 1 value is replaced with the uppercased counterpart.
Look at how [`(\S)(\S?)`](https://regex101.com/r/yV9rJ7/1) matches your `what do you mean`:
* `wh` is matches and `w` is in Group 1 and `h` is in Group 2 ([](https://i.stack.imgur.com/VhLfo.png)). The match is passed as `m` to the lambda expression, and Group 1 is modified, and Group 2 is just passed as is.
* The next match contains `at`, and the same thing happens with groups
* Next, the space is not matched since `\S` matches any character but a whitespace.
* `do` is matched, the same things happens as described above
* space, `yo` are matched and handled as described above
* `u` + *empty space* are matched next because the second `\S` has a `?` quantifier that matches one or zero occurrences of the pattern it modifies. Thus, the first char is uppercased and the second empty string is used as is.
* Then the rest is processed in a similar way. | 16,262 |
26,650,057 | I am working on a simple python script for retrieving information from a mysql database.
Here are my two examples which are almost IDENTICAL and the first successfully compiles while the second returns:
```
File "dbconnection.py", line 17
print ip
^
SyntaxError: invalid syntax
```
I have tried deleting the try catch code from the second example with no results though.
There is a relevant post [on syntax error on Python 3](https://stackoverflow.com/questions/826948/syntax-error-on-print-with-python-3) which makes my second example successfully run but it doesn't answer the question "why these almost identical examples have different behavior".
First example:
```
import datetime
import mysql.connector
cnx = mysql.connector.connect(user='user', database='test', password='test')
cursor = cnx.cursor()
query = ("SELECT ip FROM guralps")
cursor.execute(query)
for (ip) in cursor:
print ip
cursor.close()
cnx.close()
```
Second which does not compile:
```
from __future__ import print_function
from datetime import date, datetime, timedelta
import mysql.connector
from mysql.connector import errorcode
import time
try:
cnx = mysql.connector.connect(user='user', database='test', password='test')
cursor = cnx.cursor()
query = ("SELECT ip FROM guralps")
cursor.execute(query)
for (ip) in cursor:
print ip
cursor.close()
cnx.close()
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with your user name or password")
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print("Database does not exists")
else:
print(err)
else:
cnx.close()
``` | 2014/10/30 | [
"https://Stackoverflow.com/questions/26650057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1210196/"
] | You used:
```
from __future__ import print_function
```
at the top of your module. This *disables* the `print` statement for that module so you can use the [`print()` **function**](https://docs.python.org/2/library/functions.html#print) instead:
```
print(id)
```
From the function documentation:
>
> **Note**: This function is not normally available as a built-in since the name `print` is recognized as the [`print`](https://docs.python.org/2/reference/simple_stmts.html#print) statement. To disable the statement and use the `print()` function, use this future statement at the top of your module:
>
>
>
> ```
> from __future__ import print_function
>
> ```
>
> | from **future** import print\_function, division require Python 2.6 or later. **print\_function** will allow you to use print as a function. So you can't use it as **print ip**.
```
>>> from __future__ import print_function
>>>print('# of entries', len(dictionary), file=sys.stderr)
``` | 16,263 |
58,048,079 | Upon attempting to compile python 3.7 I hit `Could not import runpy module`:
```
jeremyr@b88:$ wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tar.xz
....
jeremyr@b88:~/Python-3.7.3$ ./configure --enable-optimizations
jeremyr@b88:~/Python-3.7.3$ make clean
jeremyr@b88:~/Python-3.7.3$ make -j32
....
gcc -pthread -Xlinker -export-dynamic -o Programs/_testembed Programs/_testembed.o libpython3.7m.a -lcrypt -lpthread -ldl -lutil -lm
./python -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
echo "generate-posix-vars failed" ; \
rm -f ./pybuilddir.txt ; \
exit 1 ; \
fi
Could not import runpy module
Traceback (most recent call last):
File "/home/jeremyr/Python-3.7.3/Lib/runpy.py", line 15, in <module>
import importlib.util
File "/home/jeremyr/Python-3.7.3/Lib/importlib/util.py", line 14, in <module>
from contextlib import contextmanager
File "/home/jeremyr/Python-3.7.3/Lib/contextlib.py", line 4, in <module>
import _collections_abc
SystemError: <built-in function compile> returned NULL without setting an error
generate-posix-vars failed
Makefile:603: recipe for target 'pybuilddir.txt' failed
make[1]: *** [pybuilddir.txt] Error 1
make[1]: Leaving directory '/home/jeremyr/Python-3.7.3'
Makefile:531: recipe for target 'profile-opt' failed
make: *** [profile-opt] Error 2
jeremyr@88:~/Python-3.7.3$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.11 (jessie)
Release: 8.11
Codename: jessie
jeremyr@88:~/Python-3.7.3$ gcc --version
gcc (Debian 4.9.2-10+deb8u2) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
jeremyr@88:~/Python-3.7.3$ sudo apt upgrade gcc
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... gcc is already the newest version.
jeremyr@b88:~/Python-3.7.3$ echo $PYTHONPATH
```
Any advice on how to overcome this and install python3.7 appreciated.
Edit - the solution listed below seems to work for various other python versions, so I changed title to python 3.x from 3.7 | 2019/09/22 | [
"https://Stackoverflow.com/questions/58048079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3817456/"
] | It seems the enable-optimizations was the problem,
```
jeremyr@b88:~/Python-3.7.3$ ./configure
jeremyr@b88:~/Python-3.7.3$ make clean
```
takes care of it in my case. | In case others come across this question: I encountered the same problem on Centos 7. I also had `--enable-optimizations` but didn't want to remove that flag. Updating my build dependencies and then re-running solved the problem. To do that I ran:
```
sudo yum groupinstall "Development Tools" -y
```
In case the yum group is not available, you can also install the pacakges individually using:
```
sudo yum install bison byacc cscope ctags cvs diffstat doxygen flex gcc gcc-c++ gcc-gfortran gettext git indent intltool libtool patch patchutils rcs redhat-rpm-config rpm-build subversion swig systemtap
``` | 16,264 |
50,685,300 | I want to upload a flask server to bluemix. The structure of my project is something like this
* Classes
+ functions.py
* Watson
+ bot.py
* requirements.txt
* runtime.txt
* Procfile
* manifest.yml
my bot.py has this dependency:
```
from classes import functions
```
I have tried to include it in the manifest using things like this:
./classes or ./classes/functions
but I have had no luck, it keeps saying either that module is not found or things like pip.exceptions.InstallationError: Invalid requirement: './classes/functions'
I dont know how to add the dependency
manifest.yml
```
---
applications:
- name: chatbotstest
random-route: true
memory: 256M
```
Procfile (the file that I use to run the app)
```
web: python watson/bot.py
```
when I print my sys.path I get this:
```
['..', '/home/vcap/app/watson', '/home/vcap/deps/0/python/lib/python36.zip', '/home/vcap/deps/0/py
e/vcap/deps/0/python/lib/python3.6/lib-dynload', '/home/vcap/deps/0/python/lib/python3.6/site-packages', '/home/vcap/deps/0/python/lib/python3.6/site-
-py3.6.egg', '/home/vcap/deps/0/python/lib/python3.6/site-packages/pip-9.0.1-py3.6.egg']
```
I have tried to add the folder parent to my script using
Thanks a lot for your help!!! | 2018/06/04 | [
"https://Stackoverflow.com/questions/50685300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4590839/"
] | You don't need to include it into the manifest file. Your entire app directory and its subdirectories are uploaded as part of the `push` command. Thereafter, it is possible to reference the file as shown.
This imports a file in the current directory:
```
import myfile
```
This should work for your `functions.py`:
```
from classes import functions
``` | Thanks a lot, this finally worked for me, the answered you pointed me to gave me the solution, thanks a lot again!
```
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
``` | 16,267 |
14,594,402 | I have 3 files a.py, b.py, c.py
I am trying to dynamically import a class called "C" defined in c.py from within a.py
and have the evaluated name available in b.py
python a.py is currently catching the NameError. I'm trying to avoid this and create an
instance in b.py which calls C.do\_int(10)
a.py
```
import b
#older
#services = __import__('services')
#interface = eval('services.MyRestInterface')
# python2.7
import importlib
module = importlib.import_module('c')
interface = eval('module.C')
# will work
i = interface()
print i.do_int(10)
# interface isn't defined in b.py after call to eval
try:
print b.call_eval('interface')
except NameError:
print "b.call_eval('interface'): interface is not defined in b.py"
```
---
b.py
```
def call_eval(name):
interface = eval(name)
i = interface()
return i.do_int(10)
```
---
c.py
```
class C(object):
my_int = 32
def do_int(self, number):
self.my_int += number
return self.my_int
```
How can I achieve this? | 2013/01/30 | [
"https://Stackoverflow.com/questions/14594402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/566741/"
] | One way would be to use indexOf() to see if /admin is at pos 0.
```
var msg = "/admin this is a message";
var n = msg.indexOf("/admin");
```
If n = 0, then you know /admin was at the start of the message.
If the string does not exist in the message, n would equal -1. | You could use [`Array.slice(beg, end)`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Array/slice):
```javascript
var message = '/admin this is a message';
if (message.slice(0, 6) === '/admin') {
var adminMessage = message.slice(6).trim();
// Now do something with the "adminMessage".
}
``` | 16,268 |
25,395,915 | I'm after a threadsafe queue that can be pickled or serialized to disk. Are there any datastructures in python that do this. The standard python Queue could not be pickled. | 2014/08/20 | [
"https://Stackoverflow.com/questions/25395915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3716723/"
] | This can be done using the [`copy_reg`](https://docs.python.org/2/library/copy_reg.html) module, but it's not the most elegant thing in the world:
```
import copy_reg
import threading
import pickle
from Queue import Queue as _Queue
# Make Queue a new-style class, so it can be used with copy_reg
class Queue(_Queue, object):
pass
def pickle_queue(q):
# Shallow copy of __dict__ (the underlying deque isn't actually copied, so this is fast)
q_dct = q.__dict__.copy()
# Remove all non-picklable synchronization primitives
del q_dct['mutex']
del q_dct['not_empty']
del q_dct['not_full']
del q_dct['all_tasks_done']
return Queue, (), q_dct
def unpickle_queue(state):
# Recreate our queue.
q = state[0]()
q.mutex = threading.Lock()
q.not_empty = threading.Condition(q.mutex)
q.not_full = threading.Condition(q.mutex)
q.all_tasks_done = threading.Condition(q.mutex)
q.__dict__ = state[2]
return q
copy_reg.pickle(Queue, pickle_queue, unpickle_queue)
q = Queue()
q.put("hey")
d = pickle.dumps(q)
new_q = pickle.loads(d)
print new_q.get()
# Outputs 'hey'
```
`copy_reg` allows you to register helper functions or pickling and unpickling arbitrary objects. So, we register a new-style version of the `Queue` class, and use the helper functions to remove all the unpickleable `Lock`/`Condition` instance variables prior to pickling, and add them back after unpickling. | There are modules like `dill` and `cloudpickle` that already know how to serialize a `Queue`.
They already have done the `copy_reg` for you.
```
>>> from Queue import Queue
>>> q = Queue()
>>> q.put('hey')
>>> import dill as pickle
>>> d = pickle.dumps(q)
>>> _q = pickle.loads(d)
>>> print _q.get()
hey
>>>
```
It's that easy! Just `import dill as pickle` and problem solved.
Get `dill` here: <https://github.com/uqfoundation> | 16,273 |
45,447,325 | I am using service workers to create an offline page for my website.
At the moment I am saving `offline.html` into cache so that the browser can show this file if there is no interent connection.
In the `fetch` event of my service worker I attempt to load `index.html`, and if this fails (no internet connection) I load `offline.html` from cache.
However, whenever I check offline mode in developer tools and refresh the page `index.html` still shows...
The request isn't failing, and it looks like `index.html` is being cached even though I didn't specify it to be.
Here is my HTML for `index.html`:
```
<!DOCTYPE html>
<html>
<head>
<title>Service Workers - Test</title>
</head>
<body>
<h1> Online page! </h1>
<h3> You are connected to the internet. </h3>
</body>
<script>
if ('serviceWorker' in navigator)
{
navigator.serviceWorker.register('service-worker.js');
}
</script>
</html>
```
Here is my HTML for `offline.html`:
```
<!DOCTYPE html>
<html>
<head>
<title>You are Offline - Service Workers - Test</title>
</head>
<body>
<h1> Welcome to the Offline Page!</h1>
<h2> You are not connected to the internet but you can still do certain things offline. </h2>
</body>
</html>
```
Here is my javascript for `service-worker.js`:
```
const PRECACHE = "version1"
const CACHED = ["offline.html"];
// Caches "offline.html" incase there is no internet
self.addEventListener('install', event => {
console.log("[Service Worker] Installed");
caches.delete(PRECACHE)
event.waitUntil (
caches.open(PRECACHE)
.then(cache => cache.addAll(CACHED))
.then( _ => self.skipWaiting())
);
});
// Clears any caches that do not match this version
self.addEventListener("activate", event => {
event.waitUntil (
caches.keys()
.then(keys => {
return Promise.all (
keys.filter(key => {
return !key.startsWith(PRECACHE);
})
.map(key => {
return caches.delete(key);
})
);
})
.then(() => {
console.log('[Service Worker] Cleared Old Cache');
})
);
});
this.addEventListener('fetch', function(event) {
if (event.request.method !== 'GET') return;
console.log("[Service Worker] Handling Request ");
// If the request to `index.html` works it shows it, but if it fails it shows the cached version of `offline.html`
// This isn't working because `fetch` doesn't fail when there is no internet for some reason...
event.respondWith (
fetch(event.request)
.then(response => {
console.log("[Service Worker] Served from NETWORK");
return response;
}, () => {
console.log("[Service Worker] Served from CACHE");
return catches.match(event.request.url + OFFLINE_URL);
})
);
});
```
I am running a server using python's simple http server like so:
```
python -m SimpleHTTPServer
```
Does anyone know why the offline page isn't working and how I can fix this?
Thanks for the help,
David
**EDIT:**
These images are showing that `index.html` (localhost) is still loading without internet which means it must be cached.
[](https://i.stack.imgur.com/kqUbY.png)
[](https://i.stack.imgur.com/lOLci.png)
**Edit 2:**
I've tried to add `no-cache` to the fetch of `index.html` and it still is fetching `index.html` when I have offline checked.
```
fetch(event.request, {cache: "no-cache"}) ...
``` | 2017/08/01 | [
"https://Stackoverflow.com/questions/45447325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1541397/"
] | I think we have all forgotten how the network request works from a browser's point of view.
The issue here is, `index.html` is served from the disk cache when the service worker intercepts requests.
**browser** ===> **Service Worker** ===>**fetch event**
>
> inside the fetch event, we have ,
>
>
> * Check If there is network connectivity
> + If there is, fetch from network and respond
> + Else, fetch from cache and respond
>
>
>
Now, how does
>
> If there is network connectivity, fetch from network work?
>
>
>
**Service Worker OnFetch** ===> **Check in Disk Cache** ===>**Nothing? Fetch Online**
The page being fetched here, is `index.html`
and the `cache-control` headers for `index.html` ,
**Do Not** Specify a `no-cache`
Hence the whole issue of the offline page not showing up.
### Solution
* Set a `cache-control` header with limiting values for `index.html` - On the server side
* Or, add headers in the fetch request to the effect
+ `pragma:no-cache`
+ `cache-control:no-cache`
### How Do I add these headers to fetch?
Apparently, fetch and the browser have their own reservations about the request body when it comes to a GET
Also, weirdness and utter chaos happens If you reuse the `event.request` object, for a fetch request, and add custom headers.
The chaos is a list of `Uncaught Exceptions` due to the `fetch` event's `request.mode` attribute , which bars you from adding custom headers to a fetch when under a no-cors or a navigate mode.
Our goal is to :
**Identify** that the browser is truly **offline** and then serve a page that says so
Here's How:
>
> Check If you can fetch a dummy html page say `test-connectivity.html` under your origin, with a custom `cache: no-cache` header. If you can, proceed, else throw the offline page
>
>
>
```
self.addEventListener( 'fetch', ( event ) => {
let headers = new Headers();
headers.append( 'cache-control', 'no-cache' );
headers.append( 'pragma', 'no-cache' );
var req = new Request( 'test-connectivity.html', {
method: 'GET',
mode: 'same-origin',
headers: headers,
redirect: 'manual' // let browser handle redirects
} );
event.respondWith( fetch( req, {
cache: 'no-store'
} )
.then( function ( response ) {
return fetch( event.request )
} )
.catch( function ( err ) {
return new Response( '<div><h2>Uh oh that did not work</h2></div>', {
headers: {
'Content-type': 'text/html'
}
} )
} ) )
} );
```
The `{cache:'no-store'}` object as the second parameter to `fetch` , is an unfortunate **NO-OP**. Just doesn't work.
Just keep it for the sake of a future scenario. It is **really** optional as of today.
If that worked, then you do not need to build a whole new `Request` object for `fetch`
cheers!
>
> The code piece that creates a new request is generously borrowed from
> @pirxpilot 's answer [here](https://stackoverflow.com/questions/35420980/how-to-alter-the-headers-of-a-request)
>
>
>
### The offline worker for this specific question on pastebin
<https://pastebin.com/sNCutAw7> | David, you have two errors in one line.
Your line
```
return catches.match(event.request.url + OFFLINE_URL);
```
should be
```
return caches.match('offline.html');
```
It's `catches` and you haven't defined `OFFLINE_URL` and you don't need event request url | 16,274 |
73,558,009 | I am attempting to run celery on it's own container from my Flask app. Right now I am just setting up a simple email app. The container CMD is
>
> "["celery", "worker", "--loglevel=info"]"
>
>
>
The message gets sent to the redis broker and celery picks it up, but celery gives me the error.
>
> "Received unregistered task of type
> 'flask\_project.views.send\_async\_email'. The message has been ignored
> and discarded."
>
>
>
I am setting the include in the celery config on my flask app. I have restarted and rebuilt my containers and still the same issue.
```
from flask import Blueprint, current_app
from flask_mail import Mail
from os import getenv
from celery import Celery
from .support_func import decorator_require_api
views = Blueprint('views', __name__)
celery = Celery(views.name,
broker='redis://redis:6379/0',
include=["views.tasks"])
@celery.task
def send_async_email(email_data):
mail = Mail()
mail.send(email_data)
@views.route('/')
def home():
with current_app.app_context():
email_data = {'sender': getenv('MAIL_USERNAME'), 'recipients': ['mrjoli021@gmail.com'],
'message': "This is a test email"}
send_async_email.delay(email_data)
return "Message sent!"
```
Compose:
```
---
version: "3.9"
services:
flask:
build:
context: ./Docker/flask
container_name: flask
volumes:
- ./app/:/app
restart: unless-stopped
stdin_open: true
#entrypoint: /bin/bash
networks:
- api
nginx:
image: nginx:latest
container_name: nginx
depends_on:
- flask
#entrypoint: /bin/bash
volumes:
- ./nginx_config:/etc/nginx/conf.d
- ./app/:/app
ports:
- "5000:443"
networks:
- api
celery:
build:
context: ./Docker/celery
container_name: celery
depends_on:
- redis
restart: unless-stopped
stdin_open: true
networks:
- api
redis:
image: redis:latest
container_name: redis
depends_on:
- flask
#entrypoint: /bin/bash
networks:
- api
networks:
api:
driver: bridge
-----------------
DockerFile:
FROM python:3.9.7-slim-buster
WORKDIR /app
RUN apt-get update && apt-get install -y \
build-essential # python-dev libssl-dev openssl
COPY ./ .
RUN pip3 install -r requirements.txt
ENV CELERY_BROKER_URL=redis://redis:6379/0
CMD ["celery", "worker", "--loglevel=info"]
``` | 2022/08/31 | [
"https://Stackoverflow.com/questions/73558009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2236794/"
] | You need to pass the celery app to the worker with `--app` or `-A` flag (see my answer/example [here](https://stackoverflow.com/a/45908901/1011253)).
I would recommend to refactor a bit and extract this snippet:
```
celery = Celery(views.name,
broker='redis://redis:6379/0',
include=["views.tasks"])
```
to external file, such as `celery_app.py` and then import it for your flask app and use it for the worker:
```
["celery", "--app", "your_module.celery_app:celery", "worker", "--loglevel=info"]
```
You should see the registered tasks within the worker's startup logs (when you see the big C (Celery) logo.. | I finally figured it out. I used <https://blog.miguelgrinberg.com/post/celery-and-the-flask-application-factory-pattern>
as a reference. Now I can register new blueprints without touching the celery config. It is a work in progress, but now the containers are all up and running.
```
.
├── Docker
│ ├── celery
│ │ ├── Dockerfile
│ │ └── requirements.txt
│ └── flask
│ ├── Dockerfile
│ └── requirements.txt
├── app
│ ├── flask_project
│ │ ├── __init__.py
│ │ ├── celery_app.py
│ │ └── views.py
├── docker-compose.yml
Compose:
--------------------------------------------------------------------------------
---
version: "3.9"
services:
flask:
build:
context: ./Docker/flask
container_name: flask
volumes:
- ./app/:/app
restart: unless-stopped
stdin_open: true
networks:
- api
nginx:
image: nginx:latest
container_name: nginx
depends_on:
- flask
#entrypoint: /bin/bash
volumes:
- ./nginx_config:/etc/nginx/conf.d
- ./app/:/app
ports:
- "5000:443"
networks:
- api
celery:
build:
context: ./Docker/celery
container_name: celery
depends_on:
- redis
volumes:
- ./app/:/app
restart: unless-stopped
stdin_open: true
networks:
- api
redis:
image: redis:latest
container_name: redis
depends_on:
- flask
#entrypoint: /bin/bash
networks:
- api
networks:
api:
driver: bridge
celery_app.py:
--------------------------------------------------------------------------------
from . import celery, create_app
app = create_app()
app.app_context().push()
__init__.py:
--------------------------------------------------------------------------------
from celery import Celery
celery = Celery(__name__, broker=getenv('CELERY_BROKER_URL'))
def create_app():
app = Flask(__name__)
# Celery stuff
celery.conf.update(app.config)
# Register Blueprints
from .views import views
app.register_blueprint(views, url_prefix='/')
return app
views.py:
--------------------------------------------------------------------------------
from flask import Blueprint, current_app
from flask_mail import Message, Mail
from os import getenv
from . import celery
views = Blueprint('views', __name__)
@celery.task
def send_async_email(email_data):
msg = Message(email_data['subject'],
sender=email_data['sender'],
recipients=email_data['recipients'],
)
msg.body = email_data['message']
mail = Mail()
mail.send(msg)
@views.route('/')
def home():
with current_app.app_context():
email_data = {'sender': getenv('MAIL_USERNAME'),
'recipients': ['some_email@gmail.com'],
'subject': 'testing123',
'message': "testing123"
}
msg = Message(email_data['subject'],
sender=email_data['sender'],
recipients=email_data['recipients'],
)
msg.body = email_data['message']
send_async_email.delay(email_data)
return "Message sent!"
``` | 16,276 |
69,776,068 | I created a list of files in a directory using os.listdir(), and I'm trying to move percentages of the files(which are images) to different folders. So, I'm trying to move 70%, 15%, and 15% of the files to three different target folders.
Here is a slice of the file list:
```
print(cnv_list[0:5])
['CNV-9890872-5.jpeg', 'CNV-9911627-97.jpeg', 'CNV-9935363-11.jpeg', 'CNV-9911627-15.jpeg', 'CNV-9935363-118.jpeg']
```
So, I'm trying to send 70% of these files to one folder, 15% of them to another folder, and 15% to a third folder.
I saw this code below in another answer here which addresses how to move files, but not my specific question around percentages of those files:
[Moving all files from one directory to another using Python](https://stackoverflow.com/questions/41826868/moving-all-files-from-one-directory-to-another-using-python)
```
import shutil
import os
source_dir = '/path/to/source_folder'
target_dir = '/path/to/dest_folder'
file_names = os.listdir(source_dir)
for file_name in file_names:
shutil.move(os.path.join(source_dir, file_name), target_dir)
``` | 2021/10/30 | [
"https://Stackoverflow.com/questions/69776068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7248794/"
] | If you can partition a list 70/30, and partition a list 50/50, then you can get 70/15/15 just by partitioning twice (once 70/30, once 50/50).
```
def partition_pct(lst, point):
idx = int(len(lst) * point)
return lst[:idx], lst[idx:]
l = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
l_70, l_30 = partition_pct(l, 0.7)
l_15_1, l_15_2 = partition_pct(l_30, 0.5)
```
Assign `l` from `os.listdir()`, and you get filenames instead of numbers. Thus, given your preexisting `cnv_list` of filenames:
```
cnv_list_70, cnv_list_30 = partition_pct(cnv_list, .7)
cnv_list_15_1, cnv_list_15_2 = partition_pct(cnv_list_30, .5)
for (file_list, dirname) in ((cnv_list_70, 'dst_70'),
(cnv_list_15_1, 'dst_15_1'),
(cnv_list_15_2, 'dst_15_2')):
for f in file_list:
shutil.move(f, dirname)
```
...will move 70% of your files to the directory `dst_70`, 15% to `dst_15_1`, and another 15% to `dst_15_2`. | Don't know if there's a better way but that's what i have:
```
def split(lst, weights):
sizes = []
fractions = []
for i in weights:
sizes.append(round(i * len(lst)))
fractions.append((i * len(lst)) % 1)
if sum(sizes) < len(lst):
i = max(range(len(fractions)), key=fractions.__getitem__)
sizes[i] += 1
elif sum(sizes) > len(lst):
i = min(range(len(fractions)), key=fractions.__getitem__)
sizes[i] -= 1
it = iter(lst)
return [[next(it) for _ in range(size)] for size in sizes]
```
It take as a argument two lists one the list to split and the other with weights, it handles any configuration of weights or list lenght e.g. :
```
print(split(range(19), [.1,.5,.4]))
```
Outputs:
```
[[0, 1], [2, 3, 4, 5, 6, 7, 8, 9, 10], [11, 12, 13, 14, 15, 16, 17, 18]]
```
Note that weights are floats and sum up to 1 | 16,277 |
54,758,444 | We have 32 V-CPUs with 28 GB ram with `Local Executor` but still airflow is utilizing all the resources and this is resulting in over-utilization of resources which ultimately breaks the system execution.
Below is the output for ps -aux ordered by memory usage.
```
PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1336 3.5 0.9 1600620 271644 ? Ss Feb18 23:41 /usr/bin/python /usr/local/bin/airflow webs
9434 32.3 0.9 1835796 267844 ? Sl 03:09 0:31 [ready] gunicorn: worker [airflow-webserver
10043 9.1 0.9 1835796 267844 ? Sl 03:05 0:33 [ready] gunicorn: worker [airflow-webserver
25397 17.4 0.9 1835796 267844 ? Sl 03:08 0:30 [ready] gunicorn: worker [airflow-webserver
30680 13.0 0.9 1835796 267844 ? Sl 03:06 0:36 [ready] gunicorn: worker [airflow-webserver
28434 60.5 0.5 1720548 152380 ? Rl 03:10 0:12 gunicorn: worker [airflow-webserver]
20202 2.2 0.3 1671280 111316 ? Sl 03:07 0:04 /usr/bin/python /usr/local/bin/airflow run
14353 1.9 0.3 1671484 111208 ? Sl 03:07 0:04 /usr/bin/python /usr/local/bin/airflow run
14497 1.8 0.3 1671480 111192 ? Sl 03:07 0:03 /usr/bin/python /usr/local/bin/airflow run
25170 2.0 0.3 1671024 110964 ? Sl 03:08 0:03 /usr/bin/python /usr/local/bin/airflow run
21887 1.8 0.3 1670692 110672 ? Sl 03:07 0:03 /usr/bin/python /usr/local/bin/airflow run
5211 4.7 0.3 1670488 110456 ? Sl 03:09 0:05 /usr/bin/python /usr/local/bin/airflow run
8819 4.9 0.3 1670140 110264 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
6034 3.9 0.3 1670324 110080 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
8817 4.6 0.3 1670136 110044 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
8829 4.0 0.3 1670076 110012 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
14349 1.6 0.3 1670360 109988 ? Sl 03:07 0:03 /usr/bin/python /usr/local/bin/airflow run
8815 3.5 0.3 1670140 109984 ? Sl 03:09 0:03 /usr/bin/python /usr/local/bin/airflow run
8917 4.2 0.3 1669980 109980 ? Sl 03:09 0:04 /usr/bin/python /usr/local/bin/airflow run
```
From the `RSS` field we can see that the RAM being utilized for web-server is more than 10 GB and per task an average of 1 GB is being used.
The tasks are just for monitoring an end point of a rest API.
Below is the Airflow Configuration file
```
[core]
# The home folder for airflow, default is ~/airflow
airflow_home = /airflow
# The folder where your airflow pipelines live, most likely a
# subfolder in a code repository
# This path must be absolute
dags_folder = /airflow/dags
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /airflow/logs/
# Airflow can store logs remotely in AWS S3 or Google Cloud Storage. Users
# must supply an Airflow connection id that provides access to the storage
# location.
remote_logging = True
remote_log_conn_id = datalake_gcp_connection
encrypt_s3_logs = False
# Logging level
logging_level = INFO
# Logging class
# Specify the class that will specify the logging configuration
# This class has to be on the python classpath
logging_config_class = log_config.LOGGING_CONFIG
# Log format
log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
# The executor class that airflow should use. Choices include
# SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor
executor = LocalExecutor
# The SqlAlchemy connection string to the metadata database.
# SqlAlchemy supports many different database engine, more information
# their website
sql_alchemy_conn = mysql://user:pass@127.0.0.1/airflow_db
# The SqlAlchemy pool size is the maximum number of database connections
# in the pool.
sql_alchemy_pool_size = 400
# The SqlAlchemy pool recycle is the number of seconds a connection
# can be idle in the pool before it is invalidated. This config does
# not apply to sqlite.
sql_alchemy_pool_recycle = 3000
# The amount of parallelism = 32
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 64
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 32
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 400
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16
# Whether to load the examples that ship with Airflow. It's good to
# get started, but you probably want to set this to False in a production
# environment
load_examples = False
# Where your Airflow plugins are stored
plugins_folder = /airflow/plugins
# Secret key to save connection passwords in the db
fernet_key = <FERNET KEY>
# Whether to disable pickling dags
donot_pickle = False
# How long before timing out a python file import while filling the DagBag
dagbag_import_timeout = 120
# The class to use for running task instances in a subprocess
task_runner = BashTaskRunner
# If set, tasks without a `run_as_user` argument will be run with this user
# Can be used to de-elevate a sudo user running Airflow when executing tasks
default_impersonation =
# What security module to use (for example kerberos):
security =
# Turn unit test mode on (overwrites many configuration options with test
# values at runtime)
unit_test_mode = False
# Name of handler to read task instance logs.
# Default to use file task handler.
task_log_reader = gcs.task
# Whether to enable pickling for xcom (note that this is insecure and allows for
# RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
enable_xcom_pickling = True
# When a task is killed forcefully, this is the amount of time in seconds that
# it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
killed_task_cleanup_time = 60
[cli]
# In what way should the cli access the API. The LocalClient will use the
# database directly, while the json_client will use the api running on the
# webserver
api_client = airflow.api.client.json_client
endpoint_url = http://0.0.0.0:8080
[api]
# How to authenticate users of the API
auth_backend = airflow.api.auth.backend.default
[operators]
# The default owner assigned to each new operator, unless
# provided explicitly or passed via `default_args`
default_owner = Airflow
default_cpus = 1
default_ram = 125
default_disk = 125
default_gpus = 0
[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
base_url = http://localhost:8080
authenticate = False
auth_backend = airflow.contrib.auth.backends.password_auth
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the gunicorn webserver waits before timing out on a worker
web_server_worker_timeout = 120
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = temporary_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
# Log files for the gunicorn webserver. '-' means log to stderr.
access_logfile = -
error_logfile = -
# Expose the configuration file in the web server
expose_config = False
# Set to true to turn on authentication:
# http://pythonhosted.org/airflow/security.html#web-authentication
#authenticate = False
# Filter the list of dags by owner name (requires authentication to be enabled)
filter_by_owner = False
# Filtering mode. Choices include user (default) and ldapgroup.
# Ldap group filtering requires using the ldap backend
#
# Note that the ldap server needs the "memberOf" overlay to be set up
# in order to user the ldapgroup mode.
owner_mode = user
# Default DAG view. Valid values are:
# tree, graph, duration, gantt, landing_times
dag_default_view = graph
# Default DAG orientation. Valid values are:
# LR (Left->Right), TB (Top->Bottom), RL (Right->Left), BT (Bottom->Top)
dag_orientation = LR
# Puts the webserver in demonstration mode; blurs the names of Operators for
# privacy.
demo_mode = False
# The amount of time (in secs) webserver will wait for initial handshake
# while fetching logs from other worker machine
log_fetch_timeout_sec = 5
# By default, the webserver shows paused DAGs. Flip this to hide paused
# DAGs by default
hide_paused_dags_by_default = True
# Consistent page size across all listing views in the UI
page_size = 40
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = smtp.gmail.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
#smtp_user = airflow
#smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow@example.com
[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above
# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
celeryd_concurrency = 16
# When you start an airflow worker, airflow starts a tiny web server
# subprocess to serve the workers local log files to the airflow main
# web server, who then builds pages and sends them to users. This defines
# the port on which the logs are served. It needs to be unused, and open
# visible from the main web server to connect into the workers.
worker_log_server_port = 8793
# The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
# a sqlalchemy database. Refer to the Celery documentation for more
# information.
broker_url = sqla+mysql://user:pass@127.0.0.1/airflow_db
# Another key Celery setting
celery_result_backend = db+mysql://user:pass@127.0.0.1/airflow_db
# Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
# it `airflow flower`. This defines the IP that Celery Flower runs on
flower_host = 0.0.0.0
# This defines the port that Celery Flower runs on
flower_port = 5555
# Default queue that tasks get assigned to and that worker listen on.
default_queue = default
# Import path for celery configuration options
celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
[dask]
# This section only applies if you are using the DaskExecutor in
# [core] section above
# The IP address and port of the Dask cluster's scheduler.
cluster_address = 127.0.0.1:8786
[scheduler]
# Task instances listen for external kill signal (when you clear tasks
# from the CLI or the UI), this defines the frequency at which they should
# listen (in seconds).
job_heartbeat_sec = 20
# The scheduler constantly tries to trigger new tasks (look at the
# scheduler section in the docs for more information). This defines
# how often the scheduler should run (in seconds).
scheduler_heartbeat_sec = 60
# after how much time should the scheduler terminate in seconds
# -1 indicates to run continuously (see also num_runs)
run_duration = -1
# after how much time a new DAGs should be picked up from the filesystem
min_file_process_interval = 5
dag_dir_list_interval = 300
# How often should stats be printed to the logs
print_stats_interval = 30
child_process_log_directory = /airflow/logs/scheduler
# Local task jobs periodically heartbeat to the DB. If the job has
# not heartbeat in this many seconds, the scheduler will mark the
# associated task instance as failed and will re-schedule the task.
scheduler_zombie_task_threshold = 300
# Turn off scheduler catchup by setting this to False.
# Default behavior is unchanged and
# Command Line Backfills still work, but the scheduler
# will not do scheduler catchup if this is False,
# however it can be set on a per DAG basis in the
# DAG definition (catchup)
catchup_by_default = False
# This changes the batch size of queries in the scheduling main loop.
# This depends on query length limits and how long you are willing to hold locks.
# 0 for no limit
max_tis_per_query = 256
# Statsd (https://github.com/etsy/statsd) integration settings
statsd_on = False
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
# The scheduler can run multiple threads in parallel to schedule dags.
# This defines how many threads will run.
max_threads = 12
authenticate = False
[ldap]
# set this to ldaps://<your.ldap.server>:<port>
uri =
user_filter = objectClass=*
user_name_attr = uid
group_member_attr = memberOf
superuser_filter =
data_profiler_filter =
bind_user = cn=Manager,dc=example,dc=com
bind_password = insecure
basedn = dc=example,dc=com
cacert = /etc/ca/ldap_ca.crt
search_scope = LEVEL
[mesos]
# Mesos master address which MesosExecutor will connect to.
master = localhost:5050
# The framework name which Airflow scheduler will register itself as on mesos
framework_name = Airflow
# Number of cpu cores required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_cpu = 1
# Memory in MB required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_memory = 256
# Enable framework checkpointing for mesos
# See http://mesos.apache.org/documentation/latest/slave-recovery/
checkpoint = False
# Failover timeout in milliseconds.
# When checkpointing is enabled and this option is set, Mesos waits
# until the configured timeout for
# the MesosExecutor framework to re-register after a failover. Mesos
# shuts down running tasks if the
# MesosExecutor framework fails to re-register within this timeframe.
# failover_timeout = 604800
# Enable framework authentication for mesos
# See http://mesos.apache.org/documentation/latest/configuration/
authenticate = False
# Mesos credentials, if authentication is enabled
# default_principal = admin
# default_secret = admin
[kerberos]
ccache = /tmp/airflow_krb5_ccache
# gets augmented with fqdn
principal = airflow
reinit_frequency = 3600
kinit_path = kinit
keytab = airflow.keytab
[github_enterprise]
api_rev = v3
[admin]
# UI to hide sensitive variable fields when set to True
hide_sensitive_variable_fields = True
```
What are we doing wrong here? | 2019/02/19 | [
"https://Stackoverflow.com/questions/54758444",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6823560/"
] | [The size shown in `RSS` field is in `KB`](http://man7.org/linux/man-pages/man1/ps.1.html). The first process is using about 265 MB, not something over 10 GB.
The `MEM` field shows the memory usage in *percentage*, not GB. 0.9% of 28 GB is 252 MB. You can see stats about memory with the `free` command.
See <http://man7.org/linux/man-pages/man1/ps.1.html>. In short, it's not airflow over utilising resources that's breaking your system. | A recommended method is to set the CPUQuota of Airflow to max 80%. This will ensure that Airflow process does not eat up all the CPU resources which sometimes cause the system to hang.
You can use a ready-made AMI (namely, LightningFLow) from AWS Marketplace which is pre-configured with the recommended configurations.
Note: LightningFlow also comes pre-integrated with all required libraries, Livy, custom operators, and local Spark cluster.
Link for AWS Marketplace: <https://aws.amazon.com/marketplace/pp/Lightning-Analytics-Inc-LightningFlow-Integrated-o/B084BSD66V> | 16,278 |
56,791,917 | In the shell:
```
$ date
Do 27. Jun 15:13:13 CEST 2019
```
In python:
```
>>> from datetime import datetime
>>> datetime.now()
datetime.datetime(2019, 6, 27, 15, 14, 51, 314560)
>>> a = datetime.now()
>>> a.strftime("%Y%m%d")
'20190627'
```
What is the format specifier needed to get the *exactly same output* as `date`, including evaluation of the locale settings? | 2019/06/27 | [
"https://Stackoverflow.com/questions/56791917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10895273/"
] | Looks like you need to use the [locale](https://docs.python.org/2/library/locale.html) module
Playing in the shell:
```
$ date
Thu Jun 27 10:01:03 EDT 2019
$ LC_ALL=fr_FR.UTF-8 date
jeu. juin 27 10:01:12 EDT 2019
```
In python
```
$ LC_ALL=fr_FR.UTF-8 python
Python 2.7.5 (default, Jun 20 2019, 20:27:34)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime
>>> datetime.now().strftime("%c")
'Thu Jun 27 10:03:13 2019'
```
Hmm, I expected python to respect my environment. Let's force the issue:
```
>>> import locale
>>> import os
>>> locale.setlocale(locale.LC_ALL, os.environ['LC_ALL'])
'fr_FR.UTF-8'
>>> datetime.now().strftime("%c")
'jeu. 27 juin 2019 10:04:48 '
```
Ah.
---
Reading a little further into the locale docs, I see
>
> Initially, when a program is started, the locale is the C locale, no matter what the user’s preferred locale is. The program must explicitly say that it wants the user’s preferred locale settings by calling `setlocale(LC_ALL, '')`.
>
>
> | you can use [`.strftime`](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) to get your own string format.
in your case you want:
```py
from datetime import datetime
now = datetime.now()
print(now.strftime("%a %d. %b %H:%M:%S %Z %Y"))
```
**NOTE:** how the day/month name are printed will be affected by your machine's current locale. you can set a custom datetime locale and timezone if you need specific ones. | 16,279 |
34,636,391 | I've been searching for the last few hours and cannot find a library that allows me to add hyperlinks to a word document using python. In my ideal world I'd be able to manipulate a word doc using python to add hyperlinks to footnotes which link to internal documents. Python-docx doesn't seem to have this feature.
It breaks down into 2 questions. 1) Is there a way to add hyperlinks to word docs using python? 2) Is there a way to manipulate footnotes in word docs using python?
Does anyone know how to do this or any part of this? | 2016/01/06 | [
"https://Stackoverflow.com/questions/34636391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3368835/"
] | Hyperlinks can be added using the win32com package:
```
import win32com.client
#connect to Word (start it if it isn't already running)
wordapp = win32com.client.Dispatch("Word.Application")
#add a new document
doc = wordapp.Documents.Add()
#add some text and turn it into a hyperlink
para = doc.Paragraphs.Add()
para.Range.Text = "Adding hyperlinks in Microsoft word using python"
doc.Hyperlinks.Add(Anchor=para.Range, Address="http://stackoverflow.com/questions/34636391/adding-hyperlinks-in-microsoft-word-using-python")
#In theory you should be able to also pass in a TextToDisplay argument to the above call but I haven't been able to get this to work
#The workaround is to insert the link text into the document first and then convert it into a hyperlink
``` | ```
# How to insert hyperlinks into an existing MS Word document using win32com:
# Use the same call as in the example above to connect to Word:
wordapp = win32com.client.Dispatch("Word.Application")
# Open the input file where you want to insert the hyperlinks:
wordapp.Documents.Open("my_input_file.docx")
# Select the currently active document
doc = wordapp.ActiveDocument
# For my application, I want to replace references to identifiers in another
# document with the general format of "MSS-XXXX", where X is any digit, with
# hyperlinks to local html pages that capture the supporting details...
# First capture the entire document's content as text
docText = doc.Content.text
# Search for all identifiers that match the format criteria in the document:
mss_ids_to_link = re.findall('MSS-[0-9]+', docText)
# Now loop over all the identifier strings that were found, construct the link
# address for each html page to be linked, select the desired text where I want
# to insert the hyperlink, and then apply the link to the correct range of
# characters:
for linkIndex in range(len(mss_ids_to_link)):
current_string_to_link = mss_ids_to_link[linkIndex]
link_address = html_file_pathname + \
current_string_to_link + '.htm'
if wordapp.Selection.Find.Execute(FindText=current_string_to_link, \
Address=link_address) == True:
doc.Hyperlinks.Add(Anchor=wordapp.Selection.Range, \
Address=link_address)
# Save off the result:
doc.SaveAs('my_input_file.docx')
``` | 16,282 |
27,310,426 | I am trying to create an application which can detect heartbeat using your computer webcam. I am working on the code since 2 weeks and developed this code and here I got so far
How does it works? Illustrated below ...
1. Detecting face using opencv
2. Getting image of forehead
3. Applying filter to convert it into grayscale image [you can skip it]
4. Finding the average intensity of green pixle per frame
5. Saving the averages into an Array
6. Applying FFT (I have used minim library)Extract heart beat from FFT spectrum (Here, I need some help)
Here, I need help for extracting heartbeat from FFT spectrum. Can anyone help me. [Here](https://github.com/thearn/webcam-pulse-detector), is the similar application developed in python but I am not able to undersand this code so I am developing same in the proessing. Can anyone help me to undersatnd the part of this python code where it is extracting the heartbeat.
```
//---------import required ilbrary -----------
import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import java.util.*;
import ddf.minim.analysis.*;
import ddf.minim.*;
//----------create objects---------------------------------
Capture video; // camera object
OpenCV opencv; // opencv object
Minim minim;
FFT fft;
//IIRFilter filt;
//--------- Create ArrayList--------------------------------
ArrayList<Float> poop = new ArrayList();
float[] sample;
int bufferSize = 128;
int sampleRate = 512;
int bandWidth = 20;
int centerFreq = 80;
//---------------------------------------------------
void setup() {
size(640, 480); // size of the window
minim = new Minim(this);
fft = new FFT( bufferSize, sampleRate);
video = new Capture(this, 640/2, 480/2); // initializing video object
opencv = new OpenCV(this, 640/2, 480/2); // initializing opencv object
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // loading haar cscade file for face detection
video.start(); // start video
}
void draw() {
background(0);
// image(video, 0, 0 ); // show video in the background
opencv.loadImage(video);
Rectangle[] faces = opencv.detect();
video.loadPixels();
//------------ Finding faces in the video -----------
float gavg = 0;
for (int i = 0; i < faces.length; i++) {
noFill();
stroke(#FFB700); // yellow rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // creating rectangle around the face (YELLOW)
stroke(#0070FF); //blue rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height-2*faces[i].height/3); // creating a blue rectangle around the forehead
//-------------------- storing forehead white rectangle part into an image -------------------
stroke(0, 255, 255);
rect(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15);
PImage img = video.get(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15); // storing the forehead aera into a image
img.loadPixels();
img.filter(GRAY); // converting capture image rgb to gray
img.updatePixels();
int numPixels = img.width*img.height;
for (int px = 0; px < numPixels; px++) { // For each pixel in the video frame...
final color c = img.pixels[px];
final color luminG = c>>010 & 0xFF;
final float luminRangeG = luminG/255.0;
gavg = gavg + luminRangeG;
}
//--------------------------------------------------------
gavg = gavg/numPixels;
if (poop.size()< bufferSize) {
poop.add(gavg);
}
else poop.remove(0);
}
sample = new float[poop.size()];
for (int i=0;i<poop.size();i++) {
Float f = (float) poop.get(i);
sample[i] = f;
}
if (sample.length>=bufferSize) {
//fft.window(FFT.NONE);
fft.forward(sample, 0);
// bpf = new BandPass(centerFreq, bandwidth, sampleRate);
// in.addEffect(bpf);
float bw = fft.getBandWidth(); // returns the width of each frequency band in the spectrum (in Hz).
println(bw); // returns 21.5332031 Hz for spectrum [0] & [512]
for (int i = 0; i < fft.specSize(); i++)
{
// println( " Freq" + max(sample));
stroke(0, 255, 0);
float x = map(i, 0, fft.specSize(), 0, width);
line( x, height, x, height - fft.getBand(i)*100);
// text("FFT FREQ " + fft.getFreq(i), width/2-100, 10*(i+1));
// text("FFT BAND " + fft.getBand(i), width/2+100, 10*(i+1));
}
}
else {
println(sample.length + " " + poop.size());
}
}
void captureEvent(Capture c) {
c.read();
}
``` | 2014/12/05 | [
"https://Stackoverflow.com/questions/27310426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3789164/"
] | The FFT is applied in a window with 128 samples.
```
int bufferSize = 128;
```
During the draw method the samples are stored in a array until fill the buffer for the FFT to be applied. Then after that the buffer is keep full. To insert a new sample the oldest is removed. gavg is the average gray channel color.
```
gavg = gavg/numPixels;
if (poop.size()< bufferSize) {
poop.add(gavg);
}
else poop.remove(0);
```
Coping poop to sample
```
sample = new float[poop.size()];
for (int i=0;i < poop.size();i++) {
Float f = (float) poop.get(i);
sample[i] = f;
}
```
Now is possible to apply the FFT to sample Array
```
fft.forward(sample, 0);
```
In the code is only show the spectrum result. The heartbeat frequency must be calculated.
For each band in fft you have to find the maximum and that position is the frequency of heartbeat.
```
for(int i = 0; i < fft.specSize(); i++)
{ // draw the line for frequency band i, scaling it up a bit so we can see it
heartBeatFrequency = max(heartBeatFrequency,fft.getBand(i));
}
```
Then get the bandwidth to know the frequency.
```
float bw = fft.getBandWidth();
```
Adjusting frequency.
```
heartBeatFrequency = fft.getBandWidth() * heartBeatFrequency ;
``` | After you get samples size 128 that is bufferSize value or greater than that, forward the fft with the samples array and then get the peak value of the spectrum which would be our heartBeatRate
Following Papers explains the same :
1. Measuring Heart Rate from Video - *Isabel Bush* - Stanford - [link](https://web.stanford.edu/class/cs231a/prev_projects_2016/finalReport.pdf) (Page 4 paragraphs below Figure 2 explain this.)
2. Real Time Heart Rate Monitoring From Facial RGB Color Video Using Webcam - *H. Rahman, M.U. Ahmed, S. Begum, P. Funk* - [link](http://www.ep.liu.se/ecp/129/002/ecp16129002.pdf) (Page 4)
---
After looking at your question , I thought let me get my hands onto this and I tried making a [repository](https://github.com/pishangujeniya/FaceToHeart) for this.
Well, having some issues if someone can have a look at it.
Thank you [David Clifte](https://stackoverflow.com/users/940802/david-clifte) for [this](https://stackoverflow.com/a/27317153/7703497) answer it helped a lot. | 16,283 |
53,477,114 | When i run `sudo docker-compose build` i get
```
Building web
Step 1/8 : FROM python:3.7-alpine
ERROR: Service 'web' failed to build: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html><body><h1>403 Forbidden</h1>\nSince Docker is a US company, we must comply with US export control regulations. In an effort to comply with these, we now block all IP addresses that are located in Cuba, Iran, North Korea, Republic of Crimea, Sudan, and Syria. If you are not in one of these cities, countries, or regions and are blocked, please reach out to https://support.docker.com\n</body></html>\n\n"
```
I need to set proxy for `docker-compose` for build
things i have tried:
looking at <https://docs.docker.com/network/proxy/#configure-the-docker-client>
* i have tried setting `~/.docker/config.json`
```
{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:9278"
}
}
}
```
* tried with `--env` argument
* tried setting proxy variables on the server with no result
* i also have tried this [link](https://stackoverflow.com/a/36084324/4626485)
```
services:
myservice:
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
```
but i get this on `version: '3.6'`
```
Unsupported config option for services.web: 'args'
```
these settings seem to be set on docker and not docker-compose
i also don't need to set any proxy on my local device (i don't want to loose portability if possible)
```
docker-compose version 1.23.1, build b02f1306
Docker version 18.06.1-ce, build e68fc7a
``` | 2018/11/26 | [
"https://Stackoverflow.com/questions/53477114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4626485/"
] | You must be from restricted countries which are banned by docker (from [403](https://httpstatuses.com/403) status code). only way is to use proxies in your docker service.
>
> [Service]
>
>
> ...
>
>
> Environment="HTTP\_PROXY=http://proxy.example.com:80/
> HTTPS\_PROXY=http://proxy.example.com:80/"
>
>
> ...
>
>
>
after that you should issue:
```
$ systemctl daemon-reload
$ systemctl restart docker
``` | Include proxy details for each service in docker-compose.yml file, the sample configuration looks as below mentioned. Restart the docker and then run "docker-compose build" again. You might also run "docker-compose ps" to see if all the services mentioned in the compose file running successfully.
```
services:
<service_name>:
image:
hostname:
container_name:
ports:
environment:
HTTP_PROXY: 'http://host:port'
HTTPS_PROXY: 'http://host:port'
NO_PROXY: 'localhost, *.test.lan'
``` | 16,284 |
27,058,171 | I am fairly new to python coding, I am getting this error when i try to run my python script, can anyone tell me what i am doing wrong here?
I am trying to make a maths competition program, it should first ask for both player's names, then continue on to give each player a question until both players have answered 10 questions each. After that, it should show the score each player has got, and tell them who the winner is.
```
## Maths Competition ##
import sys
import time
import random
p1_score = 0
p2_score = 0
main_loop = 'y'
loop = 'y'
if sys.platform == 'darwin':
print('Welcome Mac user')
elif sys.plaform == 'win32' or 'win64':
print('Welcome Windows user')
else:
print('Welcome Linux user')
time.sleep(2)
print('This is a two player maths competition game, Player 1, please enter your name.')
p1_name = input()
print('now player 2 please..')
p2_name = input()
print('Processing...')
time.sleep(2)
print(p1_name+''' will first be given a random maths question,
they then have to answer that question or just press enter if they can't get it.
Then '''+ p2_name +''' will be given a question and they have to do the same thing. Each
time a player gets an answer correct, 10 points are automatically added to their score.
Each player will be given 10 questions in total, in the end, the one with the most right
answers will win. If it is a draw, a penalty round will happen, enjoy
Ps. '**' means 'to the power off'. ''')
time.sleep(5)
while main_loop == 'y':
num_of_tries = 0
while loop == 'y':
num_of_tries = num_of_tries + 1
if num_of_tries >20:
break
ops = ['x','/','+','-','**']
num1 = random.randrange(100)
num2 = random.randrange(35)
sel_op = random.choice(ops)
print(p1_name+', please press enter once you are ready to get your question')
input()
if sel_op == 'x':
ans = num1 * num2
elif sel_op == '/':
ans = num1 / num2
elif sel_op == '+':
ans = num1 + num2
elif sel_op == '-':
ans = num1 - num2
elif sel_op == '**':
ans = num1 ** num2
p1_ans = input('Your question is: %d %s %d' % (num1,sel_op,num2))
if p1_ans == ans:
p1_score = p1_score + 10
num1 = random.randrange(100)
num2 = random.randrange(35)
sel_op = random.choice(ops)
print(p2_name+', please press enter once you are ready to get your question')
input()
if sel_op == 'x':
ans2 = num1 * num2
elif sel_op == '/':
ans2 = num1 / num2
elif sel_op == '+':
ans2 = num1 + num2
elif sel_op == '-':
ans2 = num1 - num2
elif sel_op == '**':
ans2 = num1 ** num2
p2_ans = input('Your question is: %d %s %d' % (num1,sel_op,num2))
if p2_ans == ans2:
p2_score = p2_score + 10
print(p1_name+' got %d' % (p1_score))
print(p2_name+' got %d' % (p2_score))
if p1_score > p2_score:
print(p1_name+' is the WINNER!')
elif p2_score > p1_score:
print(p2_name+' is the WINNER!')
print('Would you like to play another? y/n')
repeat = input()
if any ( [repeat == 'y', repeat == 'Y'] ):
print('Sure thing, wait a couple of seconds for me to set things up again...')
time.sleep(3)
elif any ( [repeat == 'n', repeat == 'N'] ):
break
else:
print('I\'ll take that as a NO')
time.sleep(2)
break
``` | 2014/11/21 | [
"https://Stackoverflow.com/questions/27058171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4277883/"
] | ```
num2 = random.randrange(35)
```
can give you zero and will lead to a division by zero in this line:
```
ans2 = num1 / num2
```
you probably want something like:
```
random.randrange(start = 1, stop = 35 + 1)
```
which will generate numbers between 1 and 35 (both inclusive).
---
A side remark: unless you want the user to enter floating point numbers such as 0.8333333333333334 (which is very likely NOT to be exactly equal to the value calculated in your program) for division (assuming you are using python3), it is better to throw a value for the result and for the divisor and then calculate the dividend from it. | Andre Holzner is correct. Here is some Examples of basic usage:
`>>> random.random() # Random float x, 0.0 <= x < 1.0
0.37444887175646646`
`>>> random.uniform(1, 10) # Random float x, 1.0 <= x < 10.0
1.1800146073117523`
`>>> random.randint(1, 10) # Integer from 1 to 10, endpoints included
7`
`>>> random.randrange(0, 101, 2) # Even integer from 0 to 100
26`
`>>> random.choice('abcdefghij') # Choose a random element
'c'`
```
>>> items = [1, 2, 3, 4, 5, 6, 7]
>>> random.shuffle(items)
>>> items
[7, 3, 2, 5, 6, 4, 1]
```
`>>> random.sample([1, 2, 3, 4, 5], 3) # Choose 3 elements
[4, 1, 5]`
**To learn more about random here is the** [Link](https://docs.python.org/2/library/random.html) | 16,289 |
41,241,005 | I am new to python and pandas. Trying to implement below condition but getting below error:
```
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
```
Below is my code:
```
df['col2'].fillna('.', inplace=True)
import copy
dict_YM = {}
for yearmonth in [201104, 201105, 201106,201107,201108,201109,201110,201111,201112,
201201,201202,201203,201204, 201205, 201206,201207,201208,201209,201210,201211,201212,
201301,201302,201303,201304, 201305, 201306,201307,201308,201309,201310,201311,201312,
201401,201402,201403,201404, 201405, 201406,201407,201408,201409,201410,201411,201412,
201501,201502,201503,201504, 201505, 201506,201507,201508,201509,201510,201511,201512,
201601,201602,201603,201604,201605]:
key_name = 'text'+str(yearmonth)
c1=df['col1']
c2=df['col2']
c3=df['flag']
if((yearmonth >= c1) & (yearmonth < c2) & (c3==1)):
print "in if ..."
dict_YM [key_name] = copy.deepcopy(df)
dict_YM [key_name].loc[:, 'col4'] = yearmonth
elif((yearmonth >= c1) & (c2==".") & (c3==1)):
print "in else if"
dict_YM [key_name] = copy.deepcopy(df)
dict_YM [key_name].loc[:, 'col4'] = yearmonth
dict_YM
```
Now i understand we need to use c1.all() or c1.any(). But my requirement is for all the only true values of yearmonth >= c1 and yearmonth < c2 and c3==1 want do some operation. But if i use all then few of rows has true records and if go by any then the false record are also getting. Please help me how i can solve obove condition as each value/row check not as a series.
Note: col1 is int and col2,flag are float as they contain nan as well.
Edit: I am not trying to compare the string(yearmonth) is greater with whole df column (col1) but actually i want to iterate over the column col1 of df if condition satisfies then respective operation should perform those rows only.
```
Also df has huge records with various columns but col1 and col2 will have data as we have yearmonth in dictonary and flag will have 1 or nan.
Col2 has data like this {192106.0,192107.0, 195103.0 etc} and col1 has data like this {192104,201204,201206 etc}
```
Please let me know if you need any other inputs.
EDIT2: df col1, col2, flag sample data values
```
df
col1 col2 flag
192104 NaN 1.0
192104 200301.0 1.0
200301 201204.0 1.0
201204 NaN 0.0
200410 201206.0 1.0
201206 NaN 0.0
192104 198001.0 1.0
198001 NaN 1.0
```
Edit 3: I have tried like this but getting is not working seems did any one get any idea i am stuck with this issue:
```
dict_YM [key_name] =np.where(
(df[(df['col1']<=yearmonth) &
(df['col2']>yearmonth) & (df['Active']==1)]),
copy.deepcopy(df),
np.where((df[(df['col1']<=yearmonth) &
(df['col2']==".") & (df['Active']==1)]),
copy.deepcopy(df),np.nan))
then i can add col4 once dict_YM [key_name] got generated
```
Got below error when i tried above code:
```
ValueError: operands could not be broadcast together with shapes (2,3) (8,3) ()
``` | 2016/12/20 | [
"https://Stackoverflow.com/questions/41241005",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7320512/"
] | You are printing `result` to `stdout`. The objects in this list are of type `Person`. That is the `Person.toString()` method is used to get a string representation of `result`.
As mentioned in the comments either change the `toString` method of Person to just return the value of `age` or iterate over the result and write the value of `age` to `stdout`. | The method `public static <T> List<T> searchIn( List<T> list , Matcher<T> m )` returns `List<T>`, in your case Person if you want to get person age
try `result.stream().map(Person::getAge).forEach(System.out::println);` | 16,290 |
39,800,524 | The below function retains the values in its list every time it is run. I recently learned about this issue as a Python ['gotcha'](http://docs.python-guide.org/en/latest/writing/gotchas/) due to using a mutable default argument.
How do I fix it? Creating a global variable outside the function causes the same issue. Passing a list into the function breaks the recursion and only displays the first level of categories.
```
def build_category_list(categories, depth=0, items=[]):
'''Builds category data for parent select field'''
for category in categories:
items.append((category.id, '-' * depth + ' ' + category.name))
if category.children:
build_category_list(category.children, depth + 1)
return items
``` | 2016/09/30 | [
"https://Stackoverflow.com/questions/39800524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/827174/"
] | There is no need to pass the list inside the recursive function, just concatenate the results of the subsequent calls to the current list:
```
def build_category_list(categories, depth=0):
'''Builds category data for parent select field'''
items = []
for category in categories:
items.append((category.id, '-' * depth + ' ' + category.name))
if category.children:
items += build_category_list(category.children, depth + 1)
return items
``` | Passing the list in or by checking a null value would solve the issue. But you need to pass the list down the recursion:
```
def build_category_list(categories, depth=0, items=None):
if not items:
items = []
'''Builds category data for parent select field'''
for category in categories:
items.append((category.id, '-' * depth + ' ' + category.name))
if category.children:
build_category_list(category.children, depth + 1, items)
^^^^^
return items
```
Alternatively, use the return value to construct the answer - my preference see Antoine's answer... | 16,291 |
48,621,360 | I was browsing the python `asyncio` module documentation this night looking for some ideas for one of my course projects, but I soon find that there might be a lack of feature in python's standard `aysncio` module.
If you look through the documentation, you'll find that there's a callback based API and a coroutine based API. And the callback API could be used for building both UDP and TCP applications, while it looks that the coroutine API could only be used for building TCP application, as it utilizes the use of a stream-style API.
This quite causes a problem for me because I was looking for a coroutine-based API for UDP networking, although I did find that `asyncio` supports low-level coroutine based socket methods like [`sock_recv`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_recv) and [`sock_sendall`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_sendall), but the crucial APIs for UDP networking, `recvfrom` and `sendto` are not there.
What I wish to do is to write some codes like:
```
async def handle_income_packet(sock):
await data, addr = sock.recvfrom(4096)
# data handling here...
await sock.sendto(addr, response)
```
I know that this could be equivalently implemented using a callback API, but the problem here is that callbacks are not coroutines but regular functions, so that in it you cannot yield control back to the event loop and preserve the function execution state.
Just look at the above code, if we need to do some blocking-IO operations in the data handling part, we won't have a problem in the coroutine version as long as our IO operations are done in coroutines as well:
```
async def handle_income_packet(sock):
await data, addr = sock.recvfrom(4096)
async with aiohttp.ClientSession() as session:
info = await session.get(...)
response = generate_response_from_info(info)
await sock.sendto(addr, response)
```
As long as we use `await` the event loop would take the control flow from that point to handle other things until that IO is done. But sadly these codes are **not** usable at this moment because we do not have a coroutined version of `socket.sendto` and `socket.recvfrom` in `asyncio`.
What we could implement this in is to use the transport-protocol callback API:
```
class EchoServerClientProtocol(asyncio.Protocol):
def connection_made(self, transport):
peername = transport.get_extra_info('peername')
self.transport = transport
def data_received(self, data):
info = requests.get(...)
response = generate_response_from_info(info)
self.transport.write(response)
self.transport.close()
```
we cannot `await` a coroutine there because callbacks are not coroutines, and using a blocking IO call like above would stall the control flow in the callback and prevent the loop to handle any other events until the IO is done
Another recommended implementation idea is to create a `Future` object in the `data_received` function, add it to the event loop, and store any needed state variable in the Protocol class, then explicitly return control to the loop. While this could work, it does create a lot of complex codes where in the coroutine version they're not needed in any way.
Also [here](https://www.pythonsheets.com/notes/python-asyncio.html#simple-asyncio-udp-echo-server) we have an example of using non-blocking socket and `add_reader` for handle UDP sockets. But the code still looks complex comparing to coroutine-version's a few lines.
The point I want to make is that coroutine is a really good design that could utilize the power of concurrency in one single thread while also has a really straightforward design pattern that could save both brainpower and unnecessary lines of codes, but the crucial part to get it work for UDP networking is really lacking in our `asyncio` standard library.
What do you guys think about this?
Also, if there's any other suggestions for 3rd party libraries supporting this kind of API for UDP networking, I would be extremely grateful for the sake of my course project. I found [Bluelet](https://github.com/sampsyo/bluelet) is quite like such a thing but it does not seem to be actively maintained.
edit:
It seems that this [PR](https://github.com/python/asyncio/pull/321) did implement this feature but was rejected by the `asyncio` developers. The developers claim that all functions could be implemented using `create_datagram_endpoint()`, the protocol-transfer API. But just as I have discussed above, coroutine API has the power of simpleness compared to using the callback API in many use cases, it is really unfortunate that we do not have these with UDP. | 2018/02/05 | [
"https://Stackoverflow.com/questions/48621360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1548129/"
] | The reason a stream-based API is not provided is because streams offer *ordering* on top of the callbacks, and UDP communication is inherently unordered, so the two are fundamentally incompatible.
But none of that means you can't invoke coroutines from your callbacks - it's in fact quite easy! Starting from the [`EchoServerProtocol` example](https://docs.python.org/3/library/asyncio-protocol.html#udp-echo-server), you can do this:
```
def datagram_received(self, data, addr):
loop = asyncio.get_event_loop()
loop.create_task(self.handle_income_packet(data, addr))
async def handle_income_packet(self, data, addr):
# echo back the message, but 2 seconds later
await asyncio.sleep(2)
self.transport.sendto(data, addr)
```
Here `datagram_received` starts your `handle_income_packet` coroutine which is free to await any number of coroutines. Since the coroutine runs in the "background", the event loop is not blocked at any point and `datagram_received` returns immediately, just as intended. | You might be interested in [this module providing high-level UDP endpoints for asyncio](https://gist.github.com/vxgmichel/e47bff34b68adb3cf6bd4845c4bed448):
```
async def main():
# Create a local UDP enpoint
local = await open_local_endpoint('localhost', 8888)
# Create a remote UDP enpoint, pointing to the first one
remote = await open_remote_endpoint(*local.address)
# The remote endpoint sends a datagram
remote.send(b'Hey Hey, My My')
# The local endpoint receives the datagram, along with the address
data, address = await local.receive()
# Print: Got 'Hey Hey, My My' from 127.0.0.1 port 50603
print(f"Got {data!r} from {address[0]} port {address[1]}")
``` | 16,292 |
31,221,586 | I was wondering if anyone could give me a hand with this...
Basically I am trying to modernize the news system of my site but I can't seem to limit the amount of posts showing in the foreach loop that is on my blog part of the site. I need to skip the first instance as it is already promoted at the top of the page. I've tried various google searches but im getting results for C++ Perl and python, what is really irritating. I just need a simple PHP solution. I'll pop my code below and see if anyone can help. Thanks for any help in-advance. And please remember to leave your responses as an answer so I can mark them up if they helped ;)
```
<div class="view22 full" style="margin-top:0;">
<h3>Recent News and Announcements</h3>
<?php foreach ($articles as $article) {
?>
<div class="ah7_ clearfix">
<p class="date"><?php echo date('F j', $article['article_timestamp']); ?>, <?php echo date('Y', $article['article_timestamp']); ?></p>
<h3><a href="<?php echo $url.'/newsroom/'.$article['article_id']; ?>"><?php echo $article['article_title']; ?></a></h3>
</div>
<?php
}
?>
</div>
``` | 2015/07/04 | [
"https://Stackoverflow.com/questions/31221586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3925360/"
] | I assume that the $articles array has keys starting with 0. How about modifying the loop like this:
```
foreach ($articles as $key => $article)
```
and checking if $key is 0 at the beginning?
```
if($key == 0)
continue;
```
If the array keys are different: Create a new variable $i, set it to 0 and increase the value by 1 in every foreach loop iteration.
```
$i = 0;
foreach ($articles as $article) {
$i++;
if($i == 1)
continue;
elseif($i > 8)
break;
//the other code goes here
}
```
In case it is based on a SQL query, using "limit" might help to reduce load! | To remove the first instance you can manually unset the item ($articles[0]) after making a copy of it or printing it as a featured news.
To limit the number of post you can use the mysql LIMIT Clause;
Or you can do something like this
```
foreach($articles as $key => $article){
if($key===0)
continue;
if($key===8)
break;
echo $article;// or_do_whatever_youwant_with($article);
}
``` | 16,295 |
56,513,918 | I have created a python script with a single function in it. Is there a way to call the function from the python terminal to test some arguments?
```py
import time
import random
def string_teletyper(string):
'''Prints out each character in a string with time delay'''
for chr in string:
print(chr, end='', flush=True)
time.sleep(random.randint(1,2)/20)
```
If I want to test an argument for the function, I would have to add string\_teletyper(argument) inside the script itself and run it, is there a faster way? | 2019/06/09 | [
"https://Stackoverflow.com/questions/56513918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9906064/"
] | You can use `itertools.count()` to make infinite loop and `itertools.filterfalse` to filter values you don't need:
```
from random import randint
from itertools import count, filterfalse
f = filterfalse(lambda i: i % 2 == 0, [(yield randint(1, 99)) for i in count()])
for i in f:
print(i)
```
Prints:
```
...
61
21
91
77
39
... and so on
```
**Version 2** (without itertools):
```
from random import randint
for val in (i for i in iter(lambda: randint(1, 99), 0) if i % 2 != 0):
print(val)
``` | Do this: (Python 3)
```py
stream = (lambda min_, max_: type("randint_stream", (), {'__next__': (lambda self: 1+2*__import__('random').randint(min_-1,max_//2))}))(1,99)()
```
Get randint with `next(stream)`.
Change min and max by changing the `(1,99)`.
Real 1 line! Can change min & max!
```
=========== Adding ===========
```
The version above isn't a strict generator -- it's another class. Version 2:
```py
stream = (lambda min_, max_: (1+2*__import__('random').randint(min_-1,max_//2) for x in iter(int, 1)))(1,99)
```
Use `next()` to get random odd number.
Change min and max by changing the `(1,99)`. | 16,297 |
11,254,763 | I am making a script to test some software that is always running and I want to test it's recovery from a BSOD. Is there a way to throw a bsod from python without calling an external script or executable like OSR's BANG! | 2012/06/29 | [
"https://Stackoverflow.com/questions/11254763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1470373/"
] | Funny thing. There is Windows kernel function that does just that.
I'm assuming that this is intended behaviour as the function has been ther
The following python code will crash any windows computer from usermode without any additional setup.
```
from ctypes import windll
from ctypes import c_int
from ctypes import c_uint
from ctypes import c_ulong
from ctypes import POINTER
from ctypes import byref
nullptr = POINTER(c_int)()
windll.ntdll.RtlAdjustPrivilege(
c_uint(19),
c_uint(1),
c_uint(0),
byref(c_int())
)
windll.ntdll.NtRaiseHardError(
c_ulong(0xC000007B),
c_ulong(0),
nullptr,
nullptr,
c_uint(6),
byref(c_uint())
)
``` | i hope this helps (:
```
import ctypes
ntdll = ctypes.windll.ntdll
prev_value = ctypes.c_bool()
res = ctypes.c_ulong()
ntdll.RtlAdjustPrivilege(19, True, False, ctypes.byref(prev_value))
if not ntdll.NtRaiseHardError(0xDEADDEAD, 0, 0, 0, 6, ctypes.byref(res)):
print("BSOD Successfull!")
else:
print("BSOD Failed...")
``` | 16,299 |
60,577,610 | I have a Post object with comments and I am trying to send ajax requests in a while loop to check if there have been new comments created, and if there were, add them to the DOM.
How can you achieve that in django?
Here are my models:
```py
class Post(models.Model):
name = models.CharField(max_length=255)
date_added = models.DateTimeField(auto_now_add=True)
last_modified = models.DateTimeField(auto_now=True)
class PostComment(models.Model):
comment = models.TextField()
author = models.ForeignKey(User, on_delete=models.CASCADE)
post = models.ForeignKey(Post, on_delete=models.CASCADE, related_name='post', related_query_name='post')
date_added = models.DateTimeField(auto_now_add=True)
last_modified = models.DateTimeField(auto_now=True)
```
I have tried to look into channels and web sockets but all the tutorials that I have found use old versions of django and python. So I decided to simply achieve the same with ajax requests in a while loop.
I am open to any suggestions about how to achieve my outcome! | 2020/03/07 | [
"https://Stackoverflow.com/questions/60577610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11804213/"
] | The [merge function](https://www.apollographql.com/docs/react/caching/cache-field-behavior/#the-merge-function) in cache type policies is what you are looking for. It allows you to define a custom strategy for writing incoming data into the cache.
When creating the cache you can define how to write specific fields. Let's assume the `date` field belongs to the `Comment` type:
```js
const cache = new InMemoryCache({
typePolicies: {
Comment: {
fields: {
date: {
merge(_, date) {
return new Date(date);
},
},
},
},
},
});
``` | Try this one:
```
const { loading, error, data } = useQuery(GET_COMMENTS, {
onCompleted: data => {
// Do changes here
}
});
```
You can check it here <https://www.apollographql.com/docs/react/api/react/hooks/> | 16,300 |
4,296,570 | I'm a webdeveloper and I have a django project that I need to work on.
I am running mac OSX 10.6.5 on a macbook pro. I used macports to install django and python 2.6.
I now have some sort of problem, possibly related to my PATH / PYTHONPATH that prevents me from running django.
In terminal echo $PATH gives:
```
echo $PATH
/Library/Python/2.6/site-packages:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
```
and echo $PYTHONPATH gives nothing.
sudo port installed gives :
```
sudo port installed
The following ports are currently installed:
autoconf @2.65_1 (active)
automake @1.11.1_0 (active)
bash-completion @1.1_8
bash-completion @1.2_0 (active)
bzip2 @1.0.5_3+darwin
bzip2 @1.0.6_0 (active)
db46 @4.6.21_6 (active)
expat @2.0.1_0
expat @2.0.1_1 (active)
fontconfig @2.8.0_0+macosx (active)
freetype @2.3.12_0+macosx (active)
gdbm @1.8.3_2 (active)
gettext @0.18_0
gettext @0.18.1.1_2 (active)
gperf @3.0.4_0 (active)
help2man @1.38.2_0 (active)
ImageMagick @6.6.2-0_0+q16 (active)
jpeg @8a_0 (active)
lcms @1.19_2 (active)
libiconv @1.13.1_0 (active)
libpng @1.2.43_0 (active)
libtool @2.2.6b_1+darwin (active)
libxml2 @2.7.7_0 (active)
m4 @1.4.14_0 (active)
ncurses @5.7_0+darwin_10
ncurses @5.7_1 (active)
ncursesw @5.7_0+darwin_10
ncursesw @5.7_1 (active)
openssl @1.0.0b_0 (active)
p5-locale-gettext @1.05_2 (active)
p7zip @9.04_0 (active)
perl5 @5.8.9_0 (active)
perl5.8 @5.8.9_3 (active)
pkgconfig @0.25_0 (active)
py26-distribute @0.6.14_0 (active)
py26-django @1.2.3_0+bash_completion (active)
python26 @2.6.6_0+no_tkinter (active)
readline @6.1.002_0 (active)
sqlite3 @3.7.3_0 (active)
tiff @3.9.2_3+macosx (active)
xorg-bigreqsproto @1.1.0_0 (active)
xorg-inputproto @2.0_0 (active)
xorg-kbproto @1.0.4_0 (active)
xorg-libice @1.0.6_0 (active)
xorg-libsm @1.1.1_0 (active)
xorg-libX11 @1.3.3_0 (active)
xorg-libXau @1.0.5_0 (active)
xorg-libXdmcp @1.0.3_0 (active)
xorg-libXext @1.1.1_0 (active)
xorg-libXt @1.0.8_0 (active)
xorg-util-macros @1.7.0_0 (active)
xorg-xcmiscproto @1.2.0_0 (active)
xorg-xextproto @7.1.1_0 (active)
xorg-xf86bigfontproto @1.2.0_0 (active)
xorg-xproto @7.0.16_0 (active)
xorg-xtrans @1.2.5_0 (active)
zlib @1.2.5_0 (active)
```
and when I type python I get:
```
python
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
I'm pretty certain that this is the incorrect version.
When I try and test if django is available to python I get:
```
>>> import django
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named django
>>>
```
Could anyone help me figure out what is going on here? I only need to get django running so that I can view and work on the django project that my colleague sent me.
Thanks for any help in advance! | 2010/11/28 | [
"https://Stackoverflow.com/questions/4296570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/522909/"
] | First, Macports writes the file ~/.profile to set its PATH variables. If you have created a ~/.bash\_profile file then ~/.profile will be **ignored**. You will have to copy the contents over.
To see what python version Macports has selected use:
```
port select --list python
```
which will show you something like this:
```
Available versions for python:
none
python25-apple
python26-apple
python27 (active)
python32
```
To tell Macports to use a specific version use:
```
port select --set python python27
```
That should get your python version correct. You can use the python easy\_install to install Django now or use the Macports distribution of Django. | Is that really the Python interpreter installed by macports? 2.6.1 smells like the Apple provided one to me (2.6.1 is quite old).
try,
```
which python
```
As an aside, I wouldn't install Django using macports.
EDIT: Macports installed 2.6.6, the problem is the apple provided python is earlier on your path.
```
python26 @2.6.6_0+no_tkinter (active)
```
EDIT:
I use python from homebrew, which by default goes in /usr/local/bin/python.
To use this, I have a line in my .bashrc, but it could go at the end of your /etc/profile
```
export PATH=/usr/local/bin:/usr/local/sbin:$PATH
```
What that does is prepends my system path with location of my homebrew installs (/usr/local)
In your case, it should be where your macports python (the 2.6.6 one) can be found.
HTH | 16,302 |
64,698,542 | The Nvidia model is showing error for strides, even if I initialize them to the default value of (1,1)
I am using 'strides' as a replacement for the 'subsample' argument in previous versions of keras can someone explain the new syntax of using them.
```
def nvidia_model():
model = Sequential()
model.add(Conv2D(24,5,5, strides = (2,2), input_shape= (66,200,3), activation='relu'))
model.add(Conv2D(36,5,5, strides = (2,2), activation = 'relu'))
model.add(Conv2D(48,5,5, strides = (2,2), activation = 'relu'))
model.add(Conv2D(64,3,3, activation = 'relu'))
model.add(Conv2D(64,3,3, activation = 'relu'))
model.add(Dropout(0.5)) #50% nodes turned to zero
model.add( Flatten())
model.add(Dense(100, activation ='relu'))
model.add(Dropout(0.5))
model.add(Dense(50, activation ='relu'))
model.add(Dense(10, activation ='relu'))
model.add(Dense(1))
optimizer = Adam(lr = 1e-3)
model.compile(loss = 'mse' , optimizer = optimizer)
return model
model = nvidia_model()
print(model.summary)
```
```
Error:
TypeError Traceback (most recent call last)
<ipython-input-83-aff2a2709b79> in <module>()
----> 1 model = nvidia_model()
2 print(model.summary)
<ipython-input-82-7942ade664af> in nvidia_model()
1 def nvidia_model():
2 model = Sequential()
----> 3 model.add(Conv2D(24,5,5, strides = (2,2), input_shape= (66,200,3), activation='relu'))
4 model.add(Conv2D(36,5,5, strides = (2,2), activation = 'relu'))
5 model.add(Conv2D(48,5,5, strides = (2,2), activation = 'relu'))
TypeError: __init__() got multiple values for argument 'strides'
``` | 2020/11/05 | [
"https://Stackoverflow.com/questions/64698542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13301154/"
] | I never worked with `tensorflow`, but according to the documentation of [`Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) it's defined as
```
tf.keras.layers.Conv2D(
filters, kernel_size, strides=(1, 1), padding='valid', data_format=None,
dilation_rate=(1, 1), groups=1, activation=None, use_bias=True,
kernel_initializer='glorot_uniform', bias_initializer='zeros',
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, **kwargs
)
```
As you can see `strides` is the third parameter.
Now you use `Conv2D(24,5,5, strides = (2,2), input_shape= (66,200,3), activation='relu')` where the third parameter is `5` and then you try to set `strides` again with the keyword parameter. It seems there is one parameter too much in your call.
Imagine the little Python gnome handling your code: "OK, the boss wants an instance of `Conv2D`. He sets the first argument `filters` to `4`, the second argument `kernel_size` to `2` and the third argument `strides`to `2`. Done with the positional arguments. Now let's continue with the keyword arguments. Here we have `strides` and ... oh, I already have `strides`, so I don't know what to do. The boss might be angry so I'll tell him exactly what happenend: `TypeError: __init__() got multiple values for argument 'strides'`" | ```
model.add(Conv2D(24,5,5, strides = (2,2), input_shape= (66,200,3), activation='relu'))
model.add(Conv2D(36,5,5, strides = (2,2), activation = 'relu'))
model.add(Conv2D(48,5,5, strides = (2,2), activation = 'relu'))
```
For these lines use parentheses like the following example
```
model.add(Conv2D(24,(5,5), strides = (2,2), input_shape= (66,200,3), activation='relu'))
model.add(Conv2D(36,(5,5), strides = (2,2), activation = 'relu'))
model.add(Conv2D(48,(5,5), strides = (2,2), activation = 'relu'))
``` | 16,305 |
74,416,745 | I have been successfully using Google Build for continuous integration with Google Cloud Run for the Django Application.
However recently psycop2-binary started giving errors as below
```
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 202.7/202.7 kB
28.3 MB/s eta 0:00:00
Step #0 - "Buildpack": Collecting google-cloud-build==3.9.0
Step #0 - "Buildpack": Downloading google_cloud_build-3.9.0-py2.py3-none-any.whl (88 kB)
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 88.2/88.2 kB 13.0 MB/s eta 0:00:00
Step #0 - "Buildpack": Collecting psycopg2-binary==2.9.3
Step #0 - "Buildpack": Downloading psycopg2-binary-2.9.3.tar.gz (380 kB)
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 380.6/380.6 kB 39.3 MB/s eta 0:00:00
Step #0 - "Buildpack": Preparing metadata (setup.py): started
Step #0 - "Buildpack": Preparing metadata (setup.py): finished with status 'error'
Step #0 - "Buildpack": error: subprocess-exited-with-error
Step #0 - "Buildpack":
Step #0 - "Buildpack": × python setup.py egg_info did not run successfully.
Step #0 - "Buildpack": │ exit code: 1
Step #0 - "Buildpack": ╰─> [25 lines of output]
Step #0 - "Buildpack": /layers/google.python.runtime/python/lib/python3.11/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
Step #0 - "Buildpack": warnings.warn(msg, warning_class)
Step #0 - "Buildpack": running egg_info
Step #0 - "Buildpack": creating /tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info
Step #0 - "Buildpack": writing /tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info/PKG-INFO
Step #0 - "Buildpack": writing dependency_links to /tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info/dependency_links.txt
Step #0 - "Buildpack": writing top-level names to /tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info/top_level.txt
Step #0 - "Buildpack": writing manifest file '/tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info/SOURCES.txt'
Step #0 - "Buildpack":
Step #0 - "Buildpack": Error: pg_config executable not found.
Step #0 - "Buildpack":
Step #0 - "Buildpack": pg_config is required to build psycopg2 from source. Please add the directory
Step #0 - "Buildpack": containing pg_config to the $PATH or specify the full executable path with the
Step #0 - "Buildpack": option:
Step #0 - "Buildpack":
Step #0 - "Buildpack": python setup.py build_ext --pg-config /path/to/pg_config build ...
Step #0 - "Buildpack":
Step #0 - "Buildpack": or with the pg_config option in 'setup.cfg'.
Step #0 - "Buildpack":
Step #0 - "Buildpack": If you prefer to avoid building psycopg2 from source, please install the PyPI
Step #0 - "Buildpack": 'psycopg2-binary' package instead.
Step #0 - "Buildpack":
Step #0 - "Buildpack": For further information please check the 'doc/src/install.rst' file (also at
Step #0 - "Buildpack": <https://www.psycopg.org/docs/install.html>).
Step #0 - "Buildpack":
Step #0 - "Buildpack": [end of output]
Step #0 - "Buildpack":
Step #0 - "Buildpack": note: This error originates from a subprocess, and is likely not a problem with pip.
Step #0 - "Buildpack": error: metadata-generation-failed
Step #0 - "Buildpack":
Step #0 - "Buildpack": × Encountered error while generating package metadata.
Step #0 - "Buildpack": ╰─> See above for output.
Step #0 - "Buildpack":
Step #0 - "Buildpack": note: This is an issue with the package mentioned above, not pip.
Step #0 - "Buildpack": hint: See above for details.
Step #0 - "Buildpack": Done "python3 -m pip install --requirement requirements.txt --upgr..." (5.263140792s)
Step #0 - "Buildpack": Failure: (ID: c84b3231) ...bernetes==24.2.0
Step #0 - "Buildpack": Downloading kubernetes-24.2.0-py2.py3-none-any.whl (1.5 MB)
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 72.6 MB/s eta 0:00:00
Step #0 - "Buildpack": Collecting google-cloud-container==2.11.0
Step #0 - "Buildpack": Downloading google_cloud_container-2.11.0-py2.py3-none-any.whl (202 kB)
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 202.7/202.7 kB 28.3 MB/s eta 0:00:00
Step #0 - "Buildpack": Collecting google-cloud-build==3.9.0
Step #0 - "Buildpack": Downloading google_cloud_build-3.9.0-py2.py3-none-any.whl (88 kB)
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 88.2/88.2 kB 13.0 MB/s eta 0:00:00
Step #0 - "Buildpack": Collecting psycopg2-binary==2.9.3
Step #0 - "Buildpack": Downloading psycopg2-binary-2.9.3.tar.gz (380 kB)
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 380.6/380.6 kB 39.3 MB/s eta 0:00:00
Step #0 - "Buildpack": Preparing metadata (setup.py): started
Step #0 - "Buildpack": Preparing metadata (setup.py): finished with status 'error'
Step #0 - "Buildpack": error: subprocess-exited-with-error
Step #0 - "Buildpack":
Step #0 - "Buildpack": × python setup.py egg_info did not run successfully.
Step #0 - "Buildpack": │ exit code: 1
Step #0 - "Buildpack": ╰─> [25 lines of output]
Step #0 - "Buildpack": /layers/google.python.runtime/python/lib/python3.11/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
Step #0 - "Buildpack": warnings.warn(msg, warning_class)
Step #0 - "Buildpack": running egg_info
Step #0 - "Buildpack": creating /tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info
Step #0 - "Buildpack": writing /tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info/PKG-INFO
Step #0 - "Buildpack": writing dependency_links to /tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info/dependency_links.txt
Step #0 - "Buildpack": writing top-level names to /tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info/top_level.txt
Step #0 - "Buildpack": writing manifest file '/tmp/pip-pip-egg-info-a97zvtp4/psycopg2_binary.egg-info/SOURCES.txt'
Step #0 - "Buildpack":
Step #0 - "Buildpack": Error: pg_config executable not found.
Step #0 - "Buildpack":
Step #0 - "Buildpack": pg_config is required to build psycopg2 from source. Please add the directory
Step #0 - "Buildpack": containing pg_config to the $PATH or specify the full executable path with the
Step #0 - "Buildpack": option:
Step #0 - "Buildpack":
Step #0 - "Buildpack": python setup.py build_ext --pg-config /path/to/pg_config build ...
Step #0 - "Buildpack":
Step #0 - "Buildpack": or with the pg_config option in 'setup.cfg'.
Step #0 - "Buildpack":
Step #0 - "Buildpack": If you prefer to avoid building psycopg2 from source, please install the PyPI
Step #0 - "Buildpack": 'psycopg2-binary' package instead.
Step #0 - "Buildpack":
Step #0 - "Buildpack": For further information please check the 'doc/src/install.rst' file (also at
Step #0 - "Buildpack": <https://www.psycopg.org/docs/install.html>).
Step #0 - "Buildpack":
Step #0 - "Buildpack": [end of output]
Step #0 - "Buildpack":
Step #0 - "Buildpack": note: This error originates from a subprocess, and is likely not a problem with pip.
Step #0 - "Buildpack": error: metadata-generation-failed
Step #0 - "Buildpack":
Step #0 - "Buildpack": × Encountered error while generating package metadata.
Step #0 - "Buildpack": ╰─> See above for output.
Step #0 - "Buildpack":
Step #0 - "Buildpack": note: This is an issue with the package mentioned above, not pip.
Step #0 - "Buildpack": hint: See above for details.
Step #0 - "Buildpack": --------------------------------------------------------------------------------
Step #0 - "Buildpack": Sorry your project couldn't be built.
Step #0 - "Buildpack": Our documentation explains ways to configure Buildpacks to better recognise your project:
Step #0 - "Buildpack": -> https://github.com/GoogleCloudPlatform/buildpacks/blob/main/README.md
Step #0 - "Buildpack": If you think you've found an issue, please report it:
Step #0 - "Buildpack": -> https://github.com/GoogleCloudPlatform/buildpacks/issues/new
Step #0 - "Buildpack": --------------------------------------------------------------------------------
Step #0 - "Buildpack": ERROR: failed to build: exit status 1
Step #0 - "Buildpack": ERROR: failed to build: executing lifecycle: failed with status code: 51
Finished Step #0 - "Buildpack"
ERROR
ERROR: build step 0 "gcr.io/k8s-skaffold/pack" failed: step exited with non-zero status: 1
```
You can find the requirements information as below as well :
```
Django==3.1.0
google-auth
google-cloud
google-cloud-pubsub==2.5.0
google-cloud-storage==1.25.0
google-cloud-secret-manager==0.1.1
django-sitemap-generate==0.5.0
django-cors-headers==3.2.1
djangorestframework==3.11.1
django-storages==1.9.1
django-anymail[mailgun]
django-robots==4.0
python-dateutil==2.8.1
gunicorn==20.0.4
requests==2.23.0
stripe==2.43.0
six==1.14.0
google-api-python-client==1.8.3
pyparsing==2.4.7
protobuf==3.20.*
djangorestframework-jwt==1.10.0
djangorestframework-simplejwt==5.2.0
django-filter==21.1
google-cloud-logging>=3.0.0
kubernetes==24.2.0
google-cloud-container==2.11.0
google-cloud-build==3.9.0
psycopg2-binary==2.9.3
```
How can I fix this issue ?
Update :
Pip version is already 22.3.1.
```
Step #0 - "Buildpack": Running "sed -i s|/opt/python3.11|/layers/google.python.runtime/python|g /layers/google.python.runtime/python/bin/2to3-3.11"
Step #0 - "Buildpack": Done "sed -i s|/opt/python3.11|/layers/google.python.runtime/pytho..." (1.28464ms)
Step #0 - "Buildpack": Upgrading pip to the latest version and installing build tools
Step #0 - "Buildpack": --------------------------------------------------------------------------------
Step #0 - "Buildpack": Running "/layers/google.python.runtime/python/bin/python3 -m pip install --upgrade pip setuptools==v64.0.0 wheel"
Step #0 - "Buildpack": Requirement already satisfied: pip in /layers/google.python.runtime/python/lib/python3.11/site-packages (22.3)
Step #0 - "Buildpack": Collecting pip
Step #0 - "Buildpack": Downloading pip-22.3.1-py3-none-any.whl (2.1 MB)
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 25.2 MB/s eta 0:00:00
Step #0 - "Buildpack": Collecting setuptools==v64.0.0
Step #0 - "Buildpack": Downloading setuptools-64.0.0-py3-none-any.whl (1.2 MB)
Step #0 - "Buildpack": ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 80.9 MB/s eta 0:00:00
``` | 2022/11/12 | [
"https://Stackoverflow.com/questions/74416745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5342009/"
] | In addition to your answer @london\_utku for new feature in psycopg 2.9.5
* Add support for Python 3.11.
* Add support for rowcount in MERGE statements in binary packages
(ticket #1497).
* Wheel package compiled against OpenSSL 1.1.1r and PostgreSQL 15
libpq.
You can also review all the Release notes for [Psycopg 2.9.5 documentation](https://www.psycopg.org/docs/news.html#what-s-new-in-psycopg-2-9-5) | Updated psycopg2-binary to 2.9.5, and the situation is resolved. | 16,306 |
34,357,513 | I love python one liners:
```
u = payload.get("actor", {}).get("username", "")
```
Problem I face is, I have no control over what 'payload' contains, other than knowing it is a dictionary. So, if 'payload' does not have "actor", or it does and actor does or doesn't have "username", this one-liner is fine.
Problem of course arises when payload DOES have actor, but actor is not a dictionary.
Is there as pretty a way to do this comprehensively as a one liner, *and consider the possibility that 'actor' may not be a dictionary?*
Of course I can check the type using 'isinstance', but that's not as nice.
I'm not requiring a one liner per se, just asking for the most efficient way to ensure 'u' gets populated, without exception, and without prior knowledge of what exactly is in 'payload'. | 2015/12/18 | [
"https://Stackoverflow.com/questions/34357513",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1222167/"
] | Why not use an Exception:
```
try:
u = payload.get("actor", {}).get("username", "")
except AttributeError:
u = ""
``` | If you really need to do it in 1 line, you'll have to implement the functionality yourself. Which is worth doing if you use this semantics many times in your program.
There are two ways to do it: function or custom dictionary-like object for `payload`.
1) Function handles the case of `actor` being not a `dict`. It can check for `isinstance` or do the `try` or whatever else -- it's not essential. The usage would look something like `u = get("username", "", payload.get("actor", {}))` or `u = get("", payload, 'actor', 'username')` (with arbitrary amount of nested calls for items in `payload`).
2) A class of custom objects is a powerful thing -- do it if you can and really need this abstraction in the program. A descendant of `dict` or `UserDict` (in Python3) can check for what it stores or outputs on `__getitem__` calls. | 16,307 |
71,998,895 | I'm trying to setup Pipenv on Ubuntu 22.04 LTS and I used:
```
sudo apt install pipenv
```
but I get an error:
```
FileNotFoundError: [Errno 2] No such file or directory: '/home/foo/.local/share/virtualenvs/hello-JDpq8NmY/bin/python'
```
I tried to update pip with:
```
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
```
Still no use.
I tried the solution suggested [here](https://github.com/pypa/pipenv/issues/5075#issuecomment-1107460002) and nothing changed.
The environment is there but the bin folder is missing. | 2022/04/25 | [
"https://Stackoverflow.com/questions/71998895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12009223/"
] | Finally, I have caught the issue. You are editing in admin panel and I was sharing code for front end. Please try below steps for admin:
Step 1 - file classes/Address.php
```
'lastname' => ['type' => self::TYPE_STRING, 'validate' => 'isAnything', 'required' => true, 'size' => 255],
```
Change this to **isAnything**
Step 2 - src\PrestaShopBundle\Form\Admin\Sell\Address/CustomerAddressType.php
Change your code to below code:
```
line 209: add('last_name', TextType::class, [
'label' => $this->trans('Last name', 'Admin.Global'),
'help' => $genericInvalidCharsMessage,
'required' => true,
'constraints' => [
new NotBlank([
'message' => $this->trans(
'This field cannot be empty.', 'Admin.Notifications.Error'
),
]),
new CleanHtml(),
new TypedRegex([
'type' => TypedRegex::TYPE_GENERIC_NAME,
]),
new Length([
'max' => AddressConstraint::MAX_LAST_NAME_LENGTH,
'maxMessage' => $this->trans(
'This field cannot be longer than %limit% characters',
'Admin.Notifications.Error',
['%limit%' => AddressConstraint::MAX_LAST_NAME_LENGTH]
),
]),
],
])
```
Now, you are ready to go and check. | Go to the file **classes/Address.php** file:
```
'lastname' =>array('type' => self::TYPE_STRING, 'validate' => 'isCustomerName', 'required' => true, 'size' => 32),
```
to :
```
'lastname' =>array('type' => self::TYPE_STRING, 'validate' => 'isAnything', 'required' => true, 'size' => 32),
```
validate to **isAnything**.
I think you were modifying in customer class. Please try with Address.php. | 16,313 |
2,014,767 | I've got a python function that should loop through a tuple of coordinates and print their contents:
```
def do(coordList):
for element in coordList:
print element
y=((5,5),(4,4))
x=((5,5))
```
When y is run through the function, it outputs (5,5) and (4,4), the desired result. However, running x through the function outputs 5 and 5.
Is there a way to force x to be defined as a tuple within a tuple, and if not, what is the easiest way to resolve this problem? | 2010/01/06 | [
"https://Stackoverflow.com/questions/2014767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/211791/"
] | Use a trailing comma for singleton tuples.
```
x = ((5, 5),)
``` | ```
x=((5,5),)
```
(*x*) is an expression (*x*,) is a singleton tuple. | 16,316 |
13,276,847 | I'm trying to add commas to floats for display to end users of my application on GAE. The numbers come from json and are part of a 10 item query with 2 times each (aka 20 numbers per page view). For eg.
```
"total_reach": 276160.0, "total_reach": 500160.0
```
I'm using the python GAE SDK 1.7.3 template system and simply want to display the numbers to the end user with commas e.g.: 276,160 and 500,160. Note, I am using the default webbapp2 and standard templates (from google.appengine.ext.webapp import template) which, as I understand it, is based on Django.
The Django floatformat works great to get rid of the decimal, but now I need to add the comma. Django's humanize looks perfect with its intcomma filter. However, I simply can't figure out how to enable it. I suspect its crazy simple and I'm being dense (and am noob to python) - but I'm out of ideas.
While I tried a range of things, the most recent and promising was (based on my understanding of [the source](http://code.google.com/p/googleappengine/source/browse/trunk/lib/django/django/contrib/humanize/templatetags/humanize.py?r=41)):
```
from google.appengine.ext.webapp.template import register_template_library
register_template_library('django.contrib.humanize.templatetags.humanize')
```
which gave me the following 500:
```
No module named django.contrib.humanize.templatetags
```
but it seems to be there in the source.
I'm very much at a loss. I've searched around and found items like [this](https://stackoverflow.com/questions/901121/how-to-use-inbulit-django-templatetags-in-google-app-engine) which just didn't work and seemed to be aimed at webapp (as opposed to webapp2). Help and thanks!
UPDATE:
Couldn't get it going using the settings.py approach. In the end, I solved it on my own. See below for my functional but not ideal solution. | 2012/11/07 | [
"https://Stackoverflow.com/questions/13276847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1807085/"
] | ```
ids = $("tr.selectON td[id]").map(function() { return this.id; }).get();
```
Documentations :
**To get elements with `id` attribute** <http://api.jquery.com/attribute-contains-prefix-selector/>
**To filter id attribute** <http://api.jquery.com/map/>
**To convert result into array** <http://api.jquery.com/get/> | You can make it simpler, cant speak to the JQuery performance though:
```
ids =[]
$('tr.selectON td[id^=""]').each( function() {
ids.push( this.id )
});
```
"this" in the function is already a dom object, so you have direct access to its id. | 16,323 |
49,561,543 | I'm interested in getting a mapping of country codes to international phone number prefixes, like so:
```
{'US': '+1', 'GB': '+44', 'DE': '+49', ...}
```
One library that probably contains this information is [`python-phonenumbers`](https://github.com/daviddrysdale/python-phonenumbers). However, after a quick perusal of the source code I wasn't able to find where this information is stored. For example, the `shortdata/region_DE.py` module looks like this:
```
"""Auto-generated file, do not edit by hand. DE metadata"""
from ..phonemetadata import NumberFormat, PhoneNumberDesc, PhoneMetadata
PHONE_METADATA_DE = PhoneMetadata(id='DE', country_code=None, international_prefix=None,
general_desc=PhoneNumberDesc(national_number_pattern='1\\d{2,5}', possible_length=(3, 6)),
toll_free=PhoneNumberDesc(national_number_pattern='116\\d{3}', example_number='116000', possible_length=(6,)),
emergency=PhoneNumberDesc(national_number_pattern='11[02]', example_number='112', possible_length=(3,)),
short_code=PhoneNumberDesc(national_number_pattern='11(?:[025]|6(?:00[06]|1(?:1[17]|23)))', example_number='115', possible_length=(3, 6)),
short_data=True)
```
It seems like the `country_code` and `international_prefix` fields are `None`. How can I get such a mapping (possibly with a different library)? | 2018/03/29 | [
"https://Stackoverflow.com/questions/49561543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/995862/"
] | You can get the mapping you want using [`pycountry`](https://pypi.python.org/pypi/pycountry) and [`phonenumbers`](https://pypi.python.org/pypi/phonenumbers), along with a simple dictionary comprehension:
```
import phonenumbers as pn
import pycountry
dct = {c.alpha_2: pn.country_code_for_region(c.alpha_2) for c in pycountry.countries}
print(dct)
```
Output:
```
{'SK': 421, 'KI': 686, 'LV': 371, 'GH': 233, 'JP': 81, 'SA': 966, 'TD': 235, 'SX': 1, 'CY': 357, 'CH': 41, 'EG': 20, 'PA': 507, 'KP': 850, 'CO': 57, 'GW': 245, 'KG': 996, 'AW': 297, 'FM': 691, 'SB': 677, 'HR': 385, 'PY': 595, 'BG': 359, 'IQ': 964, 'ID': 62, 'GQ': 240, 'CA': 1, 'CG': 242, 'MO': 853, 'SL': 232, 'LA': 856, 'OM': 968, 'MP': 1, 'DK': 45, 'FI': 358, 'DO': 1, 'BM': 1, 'GN': 224, 'NE': 227, 'ER': 291, 'DE': 49, 'UM': 0, 'CM': 237, 'PR': 1, 'RO': 40, 'AZ': 994, 'DZ': 213, 'BW': 267, 'MK': 389, 'HN': 504, 'IS': 354, 'SJ': 47, 'ME': 382, 'NR': 674, 'AD': 376, 'BY': 375, 'RE': 262, 'PG': 675, 'SO': 252, 'NO': 47, 'CC': 61, 'EE': 372, 'BN': 673, 'AU': 61, 'HM': 0, 'ML': 223, 'BD': 880, 'GE': 995, 'US': 1, 'UY': 598, 'SM': 378, 'NG': 234, 'BE': 32, 'KY': 1, 'AR': 54, 'CR': 506, 'VA': 39, 'YE': 967, 'TR': 90, 'CV': 238, 'DM': 1, 'ZM': 260, 'BR': 55, 'MG': 261, 'BL': 590, 'FJ': 679, 'SH': 290, 'KN': 1, 'ZA': 27, 'CF': 236, 'ZW': 263, 'PL': 48, 'SV': 503, 'QA': 974, 'MN': 976, 'SE': 46, 'JE': 44, 'PS': 970, 'MZ': 258, 'TK': 690, 'PM': 508, 'CW': 599, 'HK': 852, 'LB': 961, 'SY': 963, 'LC': 1, 'IE': 353, 'RW': 250, 'NL': 31, 'MA': 212, 'GM': 220, 'IR': 98, 'AT': 43, 'SZ': 268, 'GT': 502, 'MT': 356, 'BQ': 599, 'MX': 52, 'NC': 687, 'CK': 682, 'SI': 386, 'VE': 58, 'IM': 44, 'AM': 374, 'SD': 249, 'LY': 218, 'LI': 423, 'TN': 216, 'UG': 256, 'RU': 7, 'DJ': 253, 'IL': 972, 'TM': 993, 'BF': 226, 'GF': 594, 'TO': 676, 'GI': 350, 'MH': 692, 'UZ': 998, 'PF': 689, 'KZ': 7, 'GA': 241, 'PE': 51, 'TV': 688, 'BT': 975, 'MQ': 596, 'MF': 590, 'AF': 93, 'IN': 91, 'AX': 358, 'BH': 973, 'JM': 1, 'MY': 60, 'BO': 591, 'AI': 1, 'SR': 597, 'ET': 251, 'ES': 34, 'TF': 0, 'GU': 1, 'BJ': 229, 'SS': 211, 'KE': 254, 'BZ': 501, 'IO': 246, 'MU': 230, 'CL': 56, 'MD': 373, 'LU': 352, 'TJ': 992, 'EC': 593, 'VG': 1, 'NZ': 64, 'VU': 678, 'FO': 298, 'LR': 231, 'AL': 355, 'GB': 44, 'AS': 1, 'IT': 39, 'TC': 1, 'TW': 886, 'BI': 257, 'HU': 36, 'TL': 670, 'GG': 44, 'PN': 0, 'SG': 65, 'LS': 266, 'KH': 855, 'FR': 33, 'BV': 0, 'CX': 61, 'AE': 971, 'LT': 370, 'PT': 351, 'KR': 82, 'BB': 1, 'TG': 228, 'AQ': 0, 'EH': 212, 'AG': 1, 'VN': 84, 'CI': 225, 'BS': 1, 'GL': 299, 'MW': 265, 'NU': 683, 'NF': 672, 'LK': 94, 'MS': 1, 'GP': 590, 'NP': 977, 'PW': 680, 'PK': 92, 'WF': 681, 'BA': 387, 'KM': 269, 'JO': 962, 'CU': 53, 'GR': 30, 'YT': 262, 'RS': 381, 'NA': 264, 'ST': 239, 'SC': 248, 'CN': 86, 'CD': 243, 'GS': 0, 'KW': 965, 'MM': 95, 'AO': 244, 'MV': 960, 'UA': 380, 'TT': 1, 'FK': 500, 'WS': 685, 'CZ': 420, 'PH': 63, 'VI': 1, 'TZ': 255, 'MR': 222, 'MC': 377, 'SN': 221, 'HT': 509, 'VC': 1, 'NI': 505, 'GD': 1, 'GY': 592, 'TH': 66}
``` | I have just found a python library that must be perfect for your problem.
It's called PhoneISO3166.
This is the github link: [GitHub phoneiso3166](https://github.com/onlinecity/phone-iso3166/) | 16,324 |
47,271,662 | I would like to compute an RBF or "Gaussian" kernel for a data matrix `X` with `n` rows and `d` columns. The resulting square kernel matrix is given by:
```
K[i,j] = var * exp(-gamma * ||X[i] - X[j]||^2)
```
`var` and `gamma` are scalars.
What is the fastest way to do this in python? | 2017/11/13 | [
"https://Stackoverflow.com/questions/47271662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3475712/"
] | I am going to present four different methods for computing such a kernel, followed by a comparison of their run-time.
Using pure numpy
================
Here, I use the fact that `||x-y||^2 = ||x||^2 + ||y||^2 - 2 * x^T * y`.
```py
import numpy as np
X_norm = np.sum(X ** 2, axis = -1)
K = var * np.exp(-gamma * (X_norm[:,None] + X_norm[None,:] - 2 * np.dot(X, X.T)))
```
Using numexpr
=============
[`numexpr`](http://numexpr.readthedocs.io/en/latest/user_guide.html) is a python package that allows for efficient and parallelized array operations on numpy arrays. We can use it as follows to perform the same computation as above:
```py
import numpy as np
import numexpr as ne
X_norm = np.sum(X ** 2, axis = -1)
K = ne.evaluate('v * exp(-g * (A + B - 2 * C))', {
'A' : X_norm[:,None],
'B' : X_norm[None,:],
'C' : np.dot(X, X.T),
'g' : gamma,
'v' : var
})
```
Using `scipy.spatial.distance.pdist`
====================================
We could also use [`scipy.spatial.distance.pdist`](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.spatial.distance.pdist.html) to compute a non-redundant array of pairwise squared euclidean distances, compute the kernel on that array and then transform it to a square matrix:
```py
import numpy as np
from scipy.spatial.distance import pdist, squareform
K = squareform(var * np.exp(-gamma * pdist(X, 'sqeuclidean')))
K[np.arange(K.shape[0]), np.arange(K.shape[1])] = var
```
Using `sklearn.metrics.pairwise.rbf_kernel`
===========================================
`sklearn` provides a [built-in method](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.rbf_kernel.html) for direct computation of an RBF kernel:
```py
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
K = var * rbf_kernel(X, gamma = gamma)
```
Run-time comparison
===================
I use 25,000 random samples of 512 dimensions for testing and perform experiments on an Intel Core i7-7700HQ (4 cores @ 2.8 GHz). More precisely:
```py
X = np.random.randn(25000, 512)
gamma = 0.01
var = 5.0
```
Each method is run 7 times and the mean and standard deviation of the time per execution is reported.
```none
| Method | Time |
|-------------------------------------|-------------------|
| numpy | 24.2 s ± 1.06 s |
| numexpr | 8.89 s ± 314 ms |
| scipy.spatial.distance.pdist | 2min 59s ± 312 ms |
| sklearn.metrics.pairwise.rbf_kernel | 13.9 s ± 757 ms |
```
First of all, `scipy.spatial.distance.pdist` is surprisingly slow.
`numexpr` is almost 3 times faster than the pure `numpy` method, but this speed-up factor will vary with the number of available CPUs.
`sklearn.metrics.pairwise.rbf_kernel` is not the fastest way, but only a bit slower than `numexpr`. | In the case that you are evaluating X against a high number of gammas, it is useful to save the negative pairwise distances matrix using the tricks done by @Callidior and @Divakar.
```
from numpy import exp, matmul, power, einsum, dot
from scipy.linalg.blas import sgemm
from numexpr import evaluate
def pdist2(X):
X_norm = - einsum('ij,ij->i', X, X)
return evaluate('A + B + C', {
'A' : X_norm[:,None],
'B' : X_norm[None,:],
'C' : sgemm(alpha=2.0, a=X, b=X, trans_b=True),
})
pairwise_distance_matrix = pdist2(X)
```
Then, the best solution would be to use `numexpr` to compute the exponential.
```
def rbf_kernel2(gamma, p_matrix):
return evaluate('exp(g * m)', {
'm' : p_matrix,
'g' : gamma,
})
```
Example:
```
import numpy as np
np.random.seed(1001)
X= np.random.rand(1001, 5).astype('float32')
p_matrix_test = pdist2(X)
gamma_test_list = (10 ** np.linspace(-2, 1, 11)).astype('float32')
def app2(gamma, X):
X_norm = - gamma * einsum('ij,ij->i', X, X)
return evaluate('exp(A + B + C)', {\
'A' : X_norm[:, None],\
'B' : X_norm[None, :],\
'C' : sgemm(alpha=2.0*gamma, a=X, b=X, trans_b=True),\
'g' : gamma,
})
```
I have the results:
```
%timeit y = [app2(gamma_test, x_test) for gamma_test in gamma_test_list]
```
>
> 70.8 ms ± 5.06 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>
>
>
```
%timeit y = [rbf_kernel2(gamma_test, p_matrix_test) for gamma_test in gamma_test_list]
```
>
> 33.6 ms ± 2.33 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>
>
>
Note that you need to add the overhead to compute the pairwise distance matrix before but it shouldn't be much if you are evaluating against a large number of gammas. | 16,325 |
17,890,896 | I have 15 lines in a log file and i want to read the 4th and 10 th line for example through python and display them on output saying this string is found :
```
abc
def
aaa
aaa
aasd
dsfsfs
dssfsd
sdfsds
sfdsf
ssddfs
sdsf
f
dsf
s
d
```
please suggest through code how to achieve this in python .
just to elaborate more on this example the first (string or line is unique) and can be found easily in logfile the next String B comes within 40 lines of the first one but this one occurs at lots of places in the log file so i need to read this string withing the first 40 lines after reading string A and print the same that these strings were found.
Also I cant use `with` command of python as this gives me errors like 'with' will become a reserved keyword in Python 2.6. **I am using Python 2.5** | 2013/07/26 | [
"https://Stackoverflow.com/questions/17890896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2615729/"
] | Here you go (untested):
```
- (int)getRank:(NSString *passedString) {
if([passedString isEqualToString:@"randomcard"]){
return YES;
} else {
return NO;
}
}
```
I suggest you learn Objective-C first. This is a very basic question. | To begin, NSString (the iOS implementation of String) goes something like this:
```
NSString str = @"Some STR";
```
or
```
NSString = [NSString initWithFormat:@"Hello %@", "Hi there"]];
``` | 16,329 |
67,241,815 | I tried writing a simple merge and sort function in python and got stuck after getting the following error-
```
List out of range.
```
I would appreciate if you could help me fix it and figure out how to avoid it. I have added the code below-
```
def merge(lst1, lst2):
# Gets two sorted lists and returns one merged and sorted list
merge_sorted = []
i = 0
j = 0
len1 = len(lst1) - 1
len2 = len(lst2) - 1
while i < len1 or j < len2:
if lst1[i] < lst2[j]:
merge_sorted.append(lst1[i])
i += 1
elif lst1[i] > lst2[j]:
merge_sorted.append(lst2[j])
j += 1
else:
merge_sorted.append(lst1[i])
merge_sorted.append(lst2[j])
i += 1
j += 1
return merge_sorted
lst1 = [2, 4, 5, 6, 8]
lst2 = [1, 3, 7, 9, 0]
merge(lst1, lst2)
```
What I got:
```
IndexError Traceback (most recent call last)
<ipython-input-13-572aad47097b> in <module>()
22 lst1 = [2, 4, 5, 6, 8]
23 lst2 = [1, 3, 7, 9, 0]
---> 24 merge(lst1, lst2)
<ipython-input-13-572aad47097b> in merge(lst1, lst2)
7 len2 = len(lst2) - 1
8 while i < len1 or j < len2:
----> 9 if lst1[i] < lst2[j]:
10 merge_sorted.append(lst1[i])
11 i += 1
IndexError: list index out of range
``` | 2021/04/24 | [
"https://Stackoverflow.com/questions/67241815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15753999/"
] | Your problem is the while condition:
```
while i < len1 or j < len2:
```
it should be `and` - if either of the conditoins are not true, you simple append the remainder of the non-empty list to your result and you are done.
Your current code still enters the while-body and checks `if lst1[i] < lst2[j]:` if one of the `i` / `j` is bigger then the list you get the error you have.
---
The full fixed code:
```
def merge(lst1, lst2):
# Gets two sorted lists and returns one merged and sorted list
merge_sorted = []
i = 0
j = 0
len1 = len(lst1) - 1
len2 = len(lst2) - 1
while i < len1 and j < len2: # use and
if lst1[i] < lst2[j]:
merge_sorted.append(lst1[i])
i += 1
elif lst1[i] > lst2[j]:
merge_sorted.append(lst2[j])
j += 1
else:
merge_sorted.append(lst1[i])
merge_sorted.append(lst2[j])
i += 1
j += 1
# add remainder lists - the slices evaluate to [] if behind the list lengths
merge_sorted.extend(lst1[i:]) # if i is aready out of the list this is []
merge_sorted.extend(lst2[j:]) # if j is aready out of the list this is []
return merge_sorted
lst1 = [2, 4, 5, 6, 8]
lst2 = [0, 1, 3, 7, 9] # fixed input, needs to be sorted, yours was not
print(merge(lst1, lst2))
```
Output:
```
[0, 1, 2, 3, 4, 5, 6, 8, 7, 9]
``` | Here are the values for i, j just before that `if` condition-
```
0 0
0 1
1 1
1 2
2 2
3 2
4 2
4 3
5 3
```
When any of the lists is traversed till the end, it throws `index out of range error`.
**Solution-**
Instead of using `or` condition, use `and` condition and append the remaining list elements at the end of the sorted list. | 16,332 |
43,753,657 | I am a beginner in python and I am using an older version of anaconda which has the 3.5.2 version of python, because I would like to use tensorflow with it. I have some outdated packages that I would like to update with "conda update all". Is there a way to do this without updating python from 3.5 to 3.6, which is incompatible with tensorflow? | 2017/05/03 | [
"https://Stackoverflow.com/questions/43753657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7955849/"
] | You can either update them all manually `conda update yourpackage`
...or you could update them all `conda update --all`, and then downgrade python again with `conda install python=3.5.2`. | Another simple method: `conda update --all python=3.5.2`
Replace the python version with your currently installed version. This will update all packages, and since the target version for `python` is already installed, it will not be updated. This also works with multiple packages: `conda update all python=3.5.2 spyder=3.3.6`. | 16,337 |
33,582,766 | I'm running a python script and it used to work (it even does on my other laptop right now) but not on my current computer - I just get the error code:
```
Process finished with exit code -1073741515 (0xC0000135)
```
I don't get any other results - not even from "print" commands at the beginning of the file.
I haven't found anything specific to that.
I re-installed python (2.7.9), pygame (1.9.1) and even pycharm (tried 4.5 first, now with 5.0 - same result)
Does anyone know what that error code means? I couldn't find anything about it. | 2015/11/07 | [
"https://Stackoverflow.com/questions/33582766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3909896/"
] | I encountered the same error when running .py in PyCharm on Windows. Inspired by <https://thenewboston.com/forum/topic.php?id=10088>, I uninstalled and reinstalled Python. When reinstalling, I checked the `ADD PYTHON TO THE PATH` Option. After recreating the virtual environment in PyCharm, the error was gone.
**Update**: On another Windows PC (64-bit), I encountered the same error where reinstalling Python was not enough. I tried two things.
1. Uninstalling 32-bit Python and installing 64-bit Python. Namely downloading `python-3.6.4-amd64.exe` instead of `python-3.6.4.exe`
2. Recreating the Pycharm project from github.
Between the two actions, the error was gone. | I encountered this error in my code as well, in my case the problem was sharing `pickle` which produced in Unix machine one a Windows one. | 16,339 |
12,737,121 | This bit is being troublesome....
```
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
//1
NSString *urlString = @"http://zaphod_beeblebrox.pythonanywhere.com/";
//2
NSURL *url = [NSURL URLWithString:urlString];
//3
NSURLRequest *request = [NSURLRequest requestWithURL:url];
//4
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
//5
[NSURLConnection sendAsynchronousRequest:request queue:queue
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
if ([data length] > 0 && error == nil) (UIWebView)
else if ((error != nil) NSLog(@"Error: %@", error))}];
}@end
```
I can't seem to figure out what is causing else if to have problems. I have googled the crap out of this and have checked my code over and over, but I cant seem to figure this out!
please help. | 2012/10/04 | [
"https://Stackoverflow.com/questions/12737121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1718637/"
] | You asked this same question before, which [I answered](https://stackoverflow.com/a/12720171/1271826). I repeat the relevant portions of that answer below. If there's something that didn't make sense, just leave a comment below.
---
I'm guessing you're trying to load a html page in a `UIWebView`? You obviously need an `IBOutlet` for your `UIWebView`. (If you're not familiar with `IBOutlet`, check out the Apple tutorial [Your First iOS App](http://developer.apple.com/library/ios/referencelibrary/GettingStarted/RoadMapiOS/chapters/RM_YourFirstApp_iOS/Articles/00_Introduction.html).)
Anyway, in my examples below, I'm going to assume your `IBOutlet` is called `webview`, and thus I might advise getting rid of the `NSOperationQueue` and `NSUrlConnection` and just have the `UIWebView` load the html for you:
```
- (void)viewDidLoad
{
[super viewDidLoad];
NSString *urlString = @"http://zaphod_beeblebrox.pythonanywhere.com/";
NSURL *url = [NSURL URLWithString:urlString];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
[self.webview loadRequest:request];
}
```
It might be worth going through a few iPhone programming tutorials (just google it and you'll get tons of hits), too or look at the Apple [App Programming Guide](http://developer.apple.com/library/ios/#referencelibrary/GettingStarted/RoadMapiOS/chapters/Introduction.html) or check out the wonderful resources at <http://developer.apple.com>.
---
**Update:**
By the way, if you insist on using `NSOperationQueue` and `NSUrlConnection`, you still need an `IBOutlet` for your webview. But the revised code would look like:
```
NSString *urlString = @"http://zaphod_beeblebrox.pythonanywhere.com/";
NSURL *url = [NSURL URLWithString:urlString];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[NSURLConnection sendAsynchronousRequest:request
queue:queue
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
if ([data length] > 0 && error == nil)
{
NSString *htmlString = [NSString stringWithUTF8String:data.bytes];
[self.webview loadHTMLString:htmlString baseURL:url];
}
else if (error != nil)
{
NSLog(@"Error: %@", error);
}
else
{
NSLog(@"No data returned");
}
}];
```
I think `loadRequest` is much simpler, but if you really want to do it this way, here you go. | ' I am trying to use a uiwebview to bring up a web app. If you know of any better way, please let me know!'
```
[theWebView loadRequest:[NSURLRequest requestWithURL:theURL]];
``` | 16,349 |
27,949,520 | how can I do this:
in python, how can I do a loop or an if statment, that to get in the loop/ the if statment, you need a function to be called.
I mean something like this:
```
if function() *is called*:
print('function() is called')
```
thanks | 2015/01/14 | [
"https://Stackoverflow.com/questions/27949520",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4454553/"
] | You should use booleans to handle that:
```
def function():
function.has_been_called = True
pass
function.has_been_called = False
#Actual Code!:
if function.has_been_called:
print('function()is called')
```
The boolean will now store whether or not the fucntion has been called. | Use a decorator to wrap the function, so that any time the function is called, you get a print, without having to alter your original function
```
def is_called(func):
def wrap():
func()
print func, "is called"
return wrap
@is_called
def function():
pass
if function():
pass #do
```
would print 'function is called' | 16,351 |
58,932,201 | I moved my Gem5 simulations from my system to a server. My system does not have HDF5 libraries, but the server has, and I am met with this error:
>
>
> ```
> /usr/local/lib/python2.7/config/libpython2.7.a(posixmodule.o): In function `posix_tmpnam':
> /space/src/Python-2.7/./Modules/posixmodule.c:7275: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp'
> /usr/local/lib/python2.7/config/libpython2.7.a(posixmodule.o): In function `posix_tempnam':
> /space/src/Python-2.7/./Modules/posixmodule.c:7230: warning: the use of `tempnam' is dangerous, better use `mkstemp'
> build/X86/base/lib.o.partial: In function `Stats::Hdf5::addMetaData(H5::DataSet&, char const*, double)':
> /net/nasstore/students/GRAD/ECE/febinps/home/Paper3/gem5/build/X86/base/stats/hdf5.cc:312: undefined reference to `H5::H5Object::createAttribute(char const*, H5::DataType const&, H5::DataSpace const&, H5::PropList const&) const'
> build/X86/base/lib.o.partial: In function `Stats::Hdf5::addMetaData(H5::DataSet&, char const*, std::vector<char const*, std::allocator<char const*> > const&)':
> /net/nasstore/students/GRAD/ECE/febinps/home/Paper3/gem5/build/X86/base/stats/hdf5.cc:280: undefined reference to `H5::H5Object::createAttribute(char const*, H5::DataType const&, H5::DataSpace const&, H5::PropList const&) const'
> build/X86/base/lib.o.partial: In function `Stats::Hdf5::addMetaData(H5::DataSet&, char const*, std::string const&)':
> /net/nasstore/students/GRAD/ECE/febinps/home/Paper3/gem5/build/X86/base/stats/hdf5.cc:302: undefined reference to `H5::H5Object::createAttribute(char const*, H5::DataType const&, H5::DataSpace const&, H5::PropList const&) const'
> collect2: error: ld returned 1 exit status
> scons: *** [build/X86/gem5.opt] Error 1
> scons: building terminated because of errors.
>
> ```
>
>
How can I fix this? Is there an yway I can avoid the build from using HDF5? I cannot do much at the server as I do not have admin access. | 2019/11/19 | [
"https://Stackoverflow.com/questions/58932201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6346849/"
] | I don't know the root cause of this issue, some one who is working on Gem5 could possibly answer that.
But as a workaround, since I have no admin access to the server ,and even then removing a library just for the sake of one build doesn't feel right, I edited the SConstruct file in gem5/, where the environment variables are gathered and passed:
```
have_hdf5 = 0 #check_hdf5() #line number 951 in the SConstruct file
```
This seems to work for the build. | In case you run on older system like Debian 10 or Ubuntu 16.04, the errors are due to the fact that the `hdf5` library path is not in default system library path. I solved it by manually (brutally) linking gem5 ...
The link flags added are:
`-L/usr/lib/x86_64-linux-gnu/hdf5/serial/ -lhdf5_cpp -lhdf5`
```sh
g++ -o /gem5/default/build/X86/gem5.opt \
-Wl,--as-needed -fuse-ld=gold -L/usr/lib/python2.7/config-x86_64-linux-gnu -L/usr/lib \
-Xlinker -export-dynamic \
-Wl,-O1 -Wl,-Bsymbolic-functions -z origin -O3 \
/gem5/default/build/X86/sim/main.o \
/gem5/default/build/X86/dev/net/lib.o.partial \
/gem5/default/build/X86/base/lib.o.partial \
/gem5/default/build/X86/dev/i2c/lib.o.partial \
/gem5/default/build/X86/cpu/testers/traffic_gen/lib.o.partial \
/gem5/default/build/X86/mem/cache/tags/indexing_policies/lib.o.partial \
/gem5/default/build/X86/mem/ruby/slicc_interface/lib.o.partial \
/gem5/default/build/X86/mem/probes/lib.o.partial \
/gem5/default/build/X86/mem/ruby/network/simple/lib.o.partial \
/gem5/default/build/X86/dev/x86/lib.o.partial \
/gem5/default/build/X86/mem/ruby/network/fault_model/lib.o.partial \
/gem5/default/build/X86/systemc/utils/lib.o.partial \
/gem5/default/build/X86/systemc/dt/int/lib.o.partial \
/gem5/default/build/X86/cpu/kvm/lib.o.partial \
/gem5/default/build/X86/cpu/simple/probes/lib.o.partial \
/gem5/default/build/X86/base/filters/lib.o.partial \
/gem5/default/build/X86/dev/serial/lib.o.partial \
/gem5/default/build/X86/sim/power/lib.o.partial \
/gem5/default/build/X86/mem/cache/tags/lib.o.partial \
/gem5/default/build/X86/arch/x86/bios/lib.o.partial \
/gem5/default/build/X86/systemc/dt/fx/lib.o.partial \
/gem5/default/build/X86/mem/ruby/common/lib.o.partial \
/gem5/default/build/X86/mem/ruby/network/garnet2.0/lib.o.partial \
/gem5/default/build/X86/mem/ruby/structures/lib.o.partial \
/gem5/default/build/X86/cpu/testers/garnet_synthetic_traffic/lib.o.partial \
/gem5/default/build/X86/mem/cache/prefetch/lib.o.partial \
/gem5/default/build/X86/cpu/trace/lib.o.partial \
/gem5/default/build/X86/sim/probe/lib.o.partial \
/gem5/default/build/X86/sim/lib.o.partial \
/gem5/default/build/X86/mem/ruby/protocol/lib.o.partial \
/gem5/default/build/X86/systemc/tlm_core/2/quantum/lib.o.partial \
/gem5/default/build/X86/cpu/simple/lib.o.partial \
/gem5/default/build/X86/base/vnc/lib.o.partial \
/gem5/default/build/X86/mem/ruby/system/lib.o.partial \
/gem5/default/build/X86/mem/cache/lib.o.partial \
/gem5/default/build/X86/arch/x86/lib.o.partial \
/gem5/default/build/X86/dev/storage/lib.o.partial \
/gem5/default/build/X86/mem/protocol/lib.o.partial \
/gem5/default/build/X86/systemc/core/lib.o.partial \
/gem5/default/build/X86/systemc/tlm_core/2/generic_payload/lib.o.partial \
/gem5/default/build/X86/cpu/testers/directedtest/lib.o.partial \
/gem5/default/build/X86/mem/ruby/profiler/lib.o.partial \
/gem5/default/build/X86/arch/x86/regs/lib.o.partial \
/gem5/default/build/X86/dev/pci/lib.o.partial \
/gem5/default/build/X86/cpu/o3/probe/lib.o.partial \
/gem5/default/build/X86/mem/cache/compressors/lib.o.partial \
/gem5/default/build/X86/cpu/lib.o.partial \
/gem5/default/build/X86/learning_gem5/part2/lib.o.partial \
/gem5/default/build/X86/mem/cache/replacement_policies/lib.o.partial \
/gem5/default/build/X86/dev/virtio/lib.o.partial \
/gem5/default/build/X86/proto/lib.o.partial \
/gem5/default/build/X86/cpu/testers/rubytest/lib.o.partial \
/gem5/default/build/X86/mem/qos/lib.o.partial \
/gem5/default/build/X86/cpu/pred/lib.o.partial \
/gem5/default/build/X86/python/lib.o.partial \
/gem5/default/build/X86/arch/generic/lib.o.partial \
/gem5/default/build/X86/systemc/tlm_bridge/lib.o.partial \
/gem5/default/build/X86/dev/lib.o.partial \
/gem5/default/build/X86/kern/lib.o.partial \
/gem5/default/build/X86/mem/lib.o.partial \
/gem5/default/build/X86/cpu/testers/memtest/lib.o.partial \
/gem5/default/build/X86/systemc/dt/misc/lib.o.partial \
/gem5/default/build/X86/systemc/tlm_utils/lib.o.partial \
/gem5/default/build/X86/cpu/o3/lib.o.partial \
/gem5/default/build/X86/mem/ruby/network/lib.o.partial \
/gem5/default/build/X86/systemc/dt/bit/lib.o.partial \
/gem5/default/build/X86/dev/ps2/lib.o.partial \
/gem5/default/build/X86/unittest/lib.o.partial \
/gem5/default/build/X86/systemc/channel/lib.o.partial \
/gem5/default/build/X86/systemc/dt/lib.o.partial \
/gem5/default/build/X86/base/date.o \
-L/gem5/default/build/libelf -L/gem5/default/build/fputils -L/gem5/default/build/libfdt -L/gem5/default/build/drampower -L/gem5/default/build/iostream3 -L/gem5/default/build/nomali -L/gem5/default/build/googletest -L/usr/lib/x86_64-linux-gnu/hdf5/serial/ \
-lpython2.7 -lpthread -ldl -lutil -lm -lz -lprotobuf -lrt -ltcmalloc -lnomali -liostream3 \
-ldrampower -lfdt -lfputils -lelf -lhdf5_cpp -lhdf5
``` | 16,352 |
63,056,197 | I'm writing some python code where I need to use generators inside recursive functions. Here is some code I wrote to mimic what I am trying to do. This is attempt 1.
```
def f():
def f2(i):
if i > 0:
yield i
f2(i - 1)
yield f2(10)
for x in f():
for y in x:
print(y)
```
This only prints 10, attempt 2 using this `yield from` construct I found online.
```
def f():
def f2(i):
if i > 0:
yield i
yield from f2(i - 1)
yield from f2(10)
for x in f():
print(x)
```
This does what I want, but I don't understand what is happening, what is `yield from` doing behind the scenes and why doesn't my first attempt work? | 2020/07/23 | [
"https://Stackoverflow.com/questions/63056197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5927758/"
] | You can think of `yield from` as a for loop which yields every item:
```
for i in f(10):
yield i
```
is the same as `yield from f(10)`. In other words, it *yields* the items *from* the given iteratable which in this case is another generator. | `yield from g()` will recurse inside a new generator `g` yielding from each `yield` statement at that generator
so
```
def g1():
yield from g2()
def g2()
for i in range(10):
yield i * 2
```
You can think as if `yield from` in g1 was unrolling `g2` inside of it, expanding to something like this
```
def g1():
for i in range(10):
yield i * 2
```
This not what is happening because you have scopes and etc, but during the execution of `yield from g2()` in `g1` then interpreter recurse in `g2` yield each value that it yields, possibly recursing to another generator.
Now consider this generator
```
def flatten(maybe_it):
try:
for i0 in maybe_it:
for i1 in flatten(i0):
yield i1
except TypeError:
yield maybe_it
```
with `yield from` it can be rewrite as
```
def flatten(maybe_it):
try:
for i0 in maybe_it:
yield from flatten(i0):
except TypeError:
yield maybe_it
``` | 16,354 |
10,989,297 | I have the following PHP code, and for the life of me I can't think of a simple & elegant way to implement around the empty() function in python to check if the index is defined in a list.
```
$counter = 0;
$a = array();
for ($i=0;$i<100;$i++){
$i = ($i > 4) ? 0 : $i;
if empty($a[$i]){
$a[$i]=array();
}
$a[$i][] = $counter;
$counter++;
}
```
if I do
```
if a[i] is None
```
then I get index out of range. However I am aware of ways to do it in multiple steps, but that's not what I wanted. | 2012/06/12 | [
"https://Stackoverflow.com/questions/10989297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/342553/"
] | PHP Arrays and Python lists are not equivalent. PHP Arrays are actually [associative containers](http://php.net/manual/en/language.types.array.php):
>
> An array in PHP is actually an ordered map. A map is a type that
> associates values to keys. This type is optimized for several
> different uses; it can be treated as an array, list (vector), hash
> table (an implementation of a map), dictionary, collection, stack,
> queue, and probably more.
>
>
>
In Python, the map data structure is defined as a [dictionary](http://docs.python.org/library/stdtypes.html#typesmapping):
>
> A mapping object maps hashable values to arbitrary objects. Mappings
> are mutable objects. There is currently only one standard mapping
> type, the dictionary.
>
>
>
The `empty()` function [serve many purposes](http://php.net/manual/en/function.empty.php). In your use context, it is equivalent to the Python `in` operator:
```
>>> a = {}
>>> a[1] = "x"
>>> a[3] = "y"
>>> a[5] = "z"
>>> i = 3
>>> i in a
True
>>> i = 2
>>> i in a
False
``` | In the event you were trying to do it with a list, you would have to actually set that index to none, otherwise the element wouldn't be there you'd possibly be trying to check an index past the end of the list.
```
>>> i = [None]
>>> i
[None]
>>> i = [None, None]
>>> i
[None, None]
>>> i[1] is None
True
``` | 16,355 |
14,049,983 | I am trying to search for an element in a sub-element with Selenium (Version 2.28.0), but selenium des not seem to limit its search to the sub-element. Am I doing this wrong or is there a way to use element.find to search a sub-element?
For an example I created a simple test webpage with this code:
```
<!DOCTYPE html>
<html>
<body>
<div class=div title=div1>
<h1>My First Heading</h1>
<p class='test'>My first paragraph.</p>
</div>
<div class=div title=div2>
<h1>My Second Heading</h1>
<p class='test'>My second paragraph.</p>
</div>
<div class=div title=div3>
<h1>My Third Heading</h1>
<p class='test'>My third paragraph.</p>
</div>
</body>
</html>
```
My python (Version 2.6) code looks like this:
```
from selenium import webdriver
driver = webdriver.Firefox()
# Open the test page with this instance of Firefox
# element2 gets the second division as a web element
element2 = driver.find_element_by_xpath("//div[@title='div2']")
# Search second division for a paragraph with a class of 'test' and print the content
print element2.find_element_by_xpath("//p[@class='test']").text
# expected output: "My second paragraph."
# actual output: "My first paragraph."
```
If I run:
```
print element2.get_attribute('innerHTML')
```
It returns the html from the second division. So selenium is not limiting its search to element2.
I would like to be able to find a sub-element of element2. This post suggests my code should work [Selenium WebDriver access a sub element](https://stackoverflow.com/questions/8836712/selenium-webdriver-access-a-sub-element) but his problem was caused by a time-out issue.
Can anyone help me understand what is happening here? | 2012/12/27 | [
"https://Stackoverflow.com/questions/14049983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1529981/"
] | If you start an XPath expression with `//`, it begins searching from the root of document. To search relative to a particular element, you should prepend the expression with `.` instead:
```
element2 = driver.find_element_by_xpath("//div[@title='div2']")
element2.find_element_by_xpath(".//p[@class='test']").text
``` | ***Find The Child of any Elements***
```
parent = browser.find_element(by=By.XPATH,value='value of XPATH of Parents')
child=parent.find_elements(by=By.TAG_NAME,value='value of child path')
``` | 16,357 |
37,371,992 | I am trying to split the integers in a series by forward slash by using `rsplit` function of python but it does not work.
Original Data
=============
```
date
1/30/2015
1/30/2015
1/30/2015
1/30/2015
1/30/2015
1/30/2015
1/30/2015
1/30/2015
1/30/2015
1/30/2015
```
expected Data
=============
I want to split by '/'
```
date
'1' '30' '2015'
'1' '30' '2015'
'1' '30' '2015'
'1' '30' '2015'
'1' '30' '2015'
'1' '30' '2015'
'1' '30' '2015'
```
purpose of doing this is to take the year in separate column. i was trying to use below code before.
```
date = df['date']
split = date.rsplit("/")
OutputData['Year']=split[2]
```
split[2]-> is year in date series - purpose to take year in separate columns
many thanks in advance
this is the error i get every time 'this is a series of objects'
================================================================
AttributeError: 'Series' object has no attribute 'split' | 2016/05/22 | [
"https://Stackoverflow.com/questions/37371992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5157772/"
] | You can use the `str` accessor to use string methods on the series:
```
df["date"].str.rsplit("/")
```
Or to have them in different columns:
```
df["date"].str.rsplit("/", expand = True)
```
With a series, it might be better to work on datetime data:
```
import pandas as pd
pd.to_datetime(df["date"]).dt.year
Out[10]:
0 2015
1 2015
2 2015
3 2015
4 2015
5 2015
6 2015
7 2015
8 2015
9 2015
Name: date, dtype: int64
``` | IMO it would be more useful to just convert the string to a `datetime` using [`to_datetime`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html) so you can perform arithmetic operations on it and if you want the year or any other date/time component you can use the vectorised [`dt`](http://pandas.pydata.org/pandas-docs/stable/api.html#datetimelike-properties) accessor:
```
In [23]:
df['date'] = pd.to_datetime(df['date'])
df
Out[23]:
date
0 2015-01-30
1 2015-01-30
2 2015-01-30
3 2015-01-30
4 2015-01-30
5 2015-01-30
6 2015-01-30
7 2015-01-30
8 2015-01-30
9 2015-01-30
In [24]:
df['year'] = df['date'].dt.year
df
Out[24]:
date year
0 2015-01-30 2015
1 2015-01-30 2015
2 2015-01-30 2015
3 2015-01-30 2015
4 2015-01-30 2015
5 2015-01-30 2015
6 2015-01-30 2015
7 2015-01-30 2015
8 2015-01-30 2015
9 2015-01-30 2015
``` | 16,367 |
72,542,852 | I'm trying to make a discord bot that can read dates from a txt file on my device.
----------------------------------------------------------------------------------
I've read the documentation and looked over similar posts, but I keep running into the same error. Just starting to learn python so please excuse any obvious mistakes/ strange formatting.
---
```
list = open("dates.txt","r")
list.seek(0)
var firstLine = file1.readline()
file1.close()
bot.sendMessage({
to: channelID,
message: firstLine
})
```
---
The error:
```
C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\bot.js:241
list = open("dates.txt","r")
^
ReferenceError: open is not defined
at DiscordClient.<anonymous> (C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\bot.js:241:13)
at DiscordClient.emit (node:events:527:28)
at DiscordClient.handleWSMessage (C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\node_modules\discord.io\lib\index.js:1854:11)
at WebSocket.emit (node:events:527:28)
at Receiver.ontext (C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\node_modules\ws\lib\WebSocket.js:841:10)
at C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\node_modules\ws\lib\Receiver.js:536:18
at Receiver.applyExtensions (C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\node_modules\ws\lib\Receiver.js:371:5)
at C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\node_modules\ws\lib\Receiver.js:508:14
at Receiver.flush (C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\node_modules\ws\lib\Receiver.js:347:3)
at Receiver.finish (C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp\node_modules\ws\lib\Receiver.js:541:12)
PS C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp>
```
---
Might be useful to know
* This bot is able to do other things like respond to commands
* dates.txt exists in the same folder at C:\Users\My Laptop\Desktop\Discord Bots\HeadzUp | 2022/06/08 | [
"https://Stackoverflow.com/questions/72542852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19296877/"
] | So it seems like I completely mashed up python and nodejs on accident.
Not quite sure what I was thinking but that was the issue.
---
Simple answer — don't mix nodejs and Python. | First of all you should avoid to redefine "list" as this is python keyword.
Second, the command is correct and the "open" keyword is built in in native python. So it should work.
This exact line works in my python environment (even when overwriting "list").
When you look at the end of the lines of the error message you can see the ".js" endings. This is hinting, that you somehow try to interpret python code with a javascript interpreter.
Maybe you should post some more information about your system and setup.
If you use a proper python3 interpreter your code should work. | 16,368 |
64,726,961 | given a function, like split(" ",1) whats the most pythonic way to take the first element of its output only, for example given:
```
x= "a sequence of words"
x.split(" ",1)
```
I would like to get the string "a" | 2020/11/07 | [
"https://Stackoverflow.com/questions/64726961",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4792022/"
] | After the split a list is returned, so use indexing to get the first element:
```
x= "a sequence of words"
print(x.split(" ",1)[0])
``` | The most "pythonic" method is an opinion. However any method like split which returns a list you can simply index the list at 0 to get the first item.
`a = x.split(" ", 1)[0]` | 16,369 |
58,638,534 | I am very new to python 3(and python in general), and I have decided to try to make a cypher and decypher in my own way as a personal project. The cypher works by generating a random number and multiplying the letters number value by it. It adds all of these multiplied values into a list and then adding the random number at the end.
Let's say that the list is [1976, 1995, 19]. The user would copy and paste this into the input and press enter
```py
dlist = list(input('Text to be Decyphered:'))
```
The function would then take the last value in the list and give the variable rnd the value. It would then delete this value from the list. After this it would divide the other values by rnd. The problem is that
```py
dlist[-1]
```
comes back with ']'
```py
import string
import random
from random import randrange
def cypher():
rinput = input('Write Text to be Cyphered: ')
rnd = randrange(0, 100)
rinput = rinput.lower()
output = []
for character in rinput:
number = ord(character) * rnd
output.append(number)
output.append(rnd)
print(output)
def decypher():
dlist = list(input('Text to be Decyphered:'))
rnd = dlist[-2]
rnd = int(rnd)
dlist.pop()
a = 1
out = ""
for number in dlist:
vlc = int(dlist[a])
vnum = vlc/rnd
valf = chr(vnum)
out + valf
a=a+1
print(out)
```
I think that the program is treating the list as a string because it is coming from an input() code. | 2019/10/31 | [
"https://Stackoverflow.com/questions/58638534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12301586/"
] | When you use `list` function, it converts the string to `['[', '1', '9', '7', '6', ',', ' ', '1', '9', '9', '5', ',', ' ', '1', '9', ']']`
Use instead:
```py
dlist = eval(input('Text to be Decyphered:'))
```
`eval` function will convert it to an actual list. You can cross check it:
```py
>>>print(type(eval('[1976, 1995, 19]')))
<class 'list'>
>>>print(eval('[1976, 1995, 19]'))
[1976, 1995, 19]
>>>for i in eval('[1976, 1995, 19]')
print(i)
1976
1995
19
``` | Try json on your input:
```
import json
data = json.loads(the_string)
``` | 16,378 |
48,957,624 | I am very to new to python and trying to translate a bunch of keywords using google API. I have an excel file with 3000 keywords which are mix of english, spanish, german etc. Trying to translate everything to English.
However, every time I run my code, I get error at different values. Sometimes, my code give error at 810th keyword while sometime it gives error at 1038 keyword.
And I am not even editing the layout of the file.
**JSONDecodeError: Expecting value: line 1 column 1 (char 0)**
Below is my code:
```
from googletrans import Translator
import pandas
import math
import time
df = pandas.read_excel(r'Desktop/python_keywords.xlsx')
keywords = df['Keywords']
Translate = []
translator = Translator()
for i in range(0,len(keywords)):
word = translator.translate(str(keywords[j])).text
Translate.append(word)
``` | 2018/02/23 | [
"https://Stackoverflow.com/questions/48957624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5695920/"
] | Normally this error is due to the character limit of 15K in Googletrans API.
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Consider reducing the number of characters. | Which line of your code gives you this error? Look at error trace
Let's start with your iterators: it is declared as `i`, but then you use `j`. Then check length of your request. It shouldn't be longer than 5k symbols according to [JSONDecodeError using Google Translate API with Python3](https://stackoverflow.com/questions/48021371/jsondecodeerror-using-google-translate-api-with-python3).
Anyway, it looks like api responds with empty json and you have to add at least `try .. except` to avoid this error, smth like this:
```
try:
word = translator.translate(str(keywords[j])).text
except JSONDecodeError as err:
print(err) # if you want to see when error happens
else:
Translate.append(word)
``` | 16,381 |
62,719,063 | Using beautiful soup and python, I have undertaken some webscraping of the shown website to isolate: the rank, company name and revenue.
I would like to show, in an html table that I am rendering using flask and jinja2, the results of the top ten companies in the table, however, the code I have written is just displaying the first record five times.
**Code in file: webscraper.py**
```
url = 'https://en.m.wikipedia.org/wiki/List_of_largest_Internet_companies'
req = requests.get(url)
bsObj = BeautifulSoup(req.text, 'html.parser')
data = bsObj.find('table',{'class':'wikitable sortable mw-collapsible'})
table_data=[]
trs = bsObj.select('table tr')
for tr in trs[1:6]: #first element is empty
row = []
for t in tr.select('td')[:3]: #td is referring to the columns
row.extend([t.text.strip()])
table_data.append(row)
data=table_data
rank=data[0][0]
name=data[0][1]
revenue=data[0][2]
```
**Relevant code in home.html**
```
<p>{{data}}</p>
<table class="table">
<thead>
<tr>
<th scope="col">#</th>
<th scope="col">Rank</th>
<th scope="col">Name</th>
<th scope="col">Revenue</th>
</tr>
</thead>
<tbody>
{% for element in data %}
<tr>
<th scope="row"></th>
<td>{{rank}}</td>
<td>{{name}}</td>
<td>{{revenue}}</td>
</tr>
{% endfor %}
</tbody>
```
**The HTML output is:** Note: The variable {{data}} is showing all five records correctly..but I am not isolating the data correctly.
[['1', 'Amazon', '$280.5'], ['2', 'Google', '$161.8'], ['3', 'JD.com', '$82.8'], ['4', 'Facebook', '$70.69'], ['5', 'Alibaba', '$56.152']]
Rank Name Revenue
=================
1 Amazon $280.5
1 Amazon $280.5
1 Amazon $280.5
1 Amazon $280.5
1 Amazon $280.5
As mentioned, I want 1 - 10, all the companies listed up to 10, not just Amazon.
Any suggestions as to what I've done wrong in my code - I'd like the most elegant solution that pertains to my own code, not a completely new idea or solution.
Explanation of the for loop and theory behind it please too.
I know this is wrong:
```
rank=data[0][0]
name=data[0][1]
revenue=data[0][2]
```
but don't understand why and how to go about constructing it in the most elegant way such that I have the variables rank, name and revenue contain the respective data elements. | 2020/07/03 | [
"https://Stackoverflow.com/questions/62719063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5074035/"
] | ```
rank=data[0][0]
name=data[0][1]
revenue=data[0][2]
```
You're setting the rank, name and revenue from a single element (first element of data)
I suggest that you try getting changing rank, name and revenue in your html to `{{element[0]}}` and so on, to access the respective data from each element you loop on | Thank you to @mmfallacy above who suggested this answer that I am just fleshing out.
It works, but will accept the answer above as he suggested it.
Here it is for reference:
```
{% for element in data %}
<tr>
<th scope="row"></th>
<td>{{element[0]}}</td>
<td>{{element[1]}}</td>
<td>{{element[2]}}</td>
</tr>
{% endfor %}
```
I simply deleted any tries to generate variables rank, revenue in the .py file. | 16,382 |
68,603,298 | i am following the django instructions to build a web application hello
i have done everthing after the document but this happens
Page not found (404)
Request Method: GET
Request URL: <http://127.0.0.1:8000/hello>
Using the URLconf defined in PythonWeb.urls, Django tried these URL patterns, in this order:
admin/
The current path, hello, didn’t match any of these.
there must be another path as hello/
this is my code:
views.py/hello:
```
from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
def index(request):
return HttpResponse("Hello.")
```
urls.py/pythonweb(my app):
```
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('hello/', include('hello.urls'))
]
```
urls.py/hello:
```
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name =('index'))
]
```
settings.py:
```
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'hello',
]
``` | 2021/07/31 | [
"https://Stackoverflow.com/questions/68603298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16567727/"
] | You have made a silly mistake in `settings.py` file:
Remove `/` after `hello`.
```
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'hello',
]
``` | I tried your code and it worked for me. I did two edits which are:
**settings.py:**
```py
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'hello.apps.HelloConfig',
]
```
Please note that your problem is not with `settings.py`, my edit will not make any difference. However, it is recommended to write it this way `hello.apps.HelloConfig`.
Also, I edited `hello/urls.py`, I removed the redundant parenthesis
**hello/urls.py:**
```py
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
]
```
Please let me know if it still doesn't work for you. | 16,383 |
39,489,089 | This is more or less a follow up question to [Two dimensional color ramp (256x256 matrix) interpolated from 4 corner colors](https://stackoverflow.com/questions/39485178/two-dimensional-color-ramp-256x256-matrix-interpolated-from-4-corner-colors?noredirect=1#comment66289716_39485178) that was profoundly answered by jadsq today.
For linear gradients the previous answer works very well. However, if one wants to have better control of the stop colors of the gradient, this method seems not to be very practical. What might help in this situation is to have some reference color points in a matrix (lookup table) which are used to interpolate color values for the empty position in the look-up table. What I mean might be easier read out of the below image.
[](https://i.stack.imgur.com/AwPYk.png)
The whole idea is taken from <http://cartography.oregonstate.edu/pdf/2006_JennyHurni_SwissStyleShading.pdf> page 4 to 6. I've read through the paper, I understand theoretically what is going on but failing miserably because of my low experience with interpolation methods and to be honest, general math skills. What might also be of interest is, that they use a sigmoid Gaussian bell as interpolation method (page 6). They argue that Gaussian weighting yielded the visually best results and was simple to compute (equation 1, with k=0.0002 for a table of 256 per 256 cells).
---
Edit (better illustrations):
[](https://i.stack.imgur.com/cQVW1.png)
[](https://i.stack.imgur.com/hGXfz.png)
---
I have the other parts of their presented methods in place but filling the empty values in the matrix really is a key part and keeps me from continuing. Once again, thank you for your help!
What I have right now:
```
#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
# the matrix with the reference color elements
ref=np.full([7, 7, 3], [255,255,255], dtype=np.uint8)
ref[0][6] = (239,238,185)
ref[1][1] = (120,131,125)
ref[4][6] = (184,191,171)
ref[6][2] = (150,168,158)
ref[6][5] = (166,180,166)
# s = ref.shape
#
# from scipy.ndimage.interpolation import zoom
# zooming as in https://stackoverflow.com/a/39485650/1230358 doesn't seem to work here anymore, because we have no corner point as reference but randomly distributed points within the matrix. As far as I know ...
# zoomed=zoom(ref,(256/s[0],256/s[1],1),order=1)
plt.subplot(211)
plt.imshow(ref,interpolation='nearest')
# plt.subplot(212)
# plt.imshow(zoomed,interpolation='nearest')
plt.show()
``` | 2016/09/14 | [
"https://Stackoverflow.com/questions/39489089",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1230358/"
] | First some questions to better clarify your problem:
* what kind of interpolation you want: linear/cubic/other ?
* What are the points constrains? for example will there be alway just single region encapsulated by these control points or there could be also points inside?
For the simple linear interpolation and arbitrary (but at least 3 points not on a single line) I would try this:
1. **Triangulate control points area**
To non overlapping triangles covering whole defined area.
2. **render triangles**
So just rasterize see [Algorithm to fill triangle](https://stackoverflow.com/a/39062479/2521214) and all the sublinks. You should interpolate also the `R,G,B` along with the coordinates.
3. **Create a 2 copies of gradient and extrapolate one with H and second with V lines**
So scan all the H-horizontal lines of the gradient and if found 2 known pixels far enough from each other (for example quarter or half of gradient size) then extrapolate the whole line unknown colors. So if found known endpoints (Red) are `(x0,y,r0,g0,b0),(x1,y,r1,g1,b1)` then set all unknown colors in the same line as:
```
r = r0+(r1-r0)*(x-x0)/(x1-x0)
g = g0+(g1-g0)*(x-x0)/(x1-x0)
b = b0+(b1-b0)*(x-x0)/(x1-x0)
```
Similarly do the same in the copy of gradient for V-vertical lines now. So the points are now (x,y0,r0,g0,b0),(x,y1,r1,g1,b1)` and extrapolation:
```
r = r0+(r1-r0)*(y-y0)/(y1-y0)
g = g0+(g1-g0)*(y-y0)/(y1-y0)
b = b0+(b1-b0)*(y-y0)/(y1-y0)
```
After this compare both copies and if unknown point is computed in both set it as average of both colors in the target gradient image. Loop this whole process (**#3**) until no new gradient pixel is added.
4. **use single extrapolated color for the rest**
depending on how you define the control points some areas will have only 1 extrapolated color (either from H or V lines but not both) so use only the single computed color for those (after **#3** is done).
Here an example of what I mean by all this:
[](https://i.stack.imgur.com/l6zBg.png)
If you want something simple instead (but not exact) then you can bleed the known control points colors (with smooth filters) to neighboring pixels until the whole gradient is filled and saturated.
1. **fill unknown gradient pixels with predefined color meaning not computed**
2. **set each pixel to average of its computed neighbors**
you may do this in separate image to avoid shifting.
3. **set control points back to original color**
4. **loop #2 until area filled/saturated/or predefined number of iterations**
**[Edit1] second solution**
Ok I put it together in **C++** with your points/colors and gradient size here is how it looks (I bleed 100 times with 4-neighbors bleeding without weights):
[](https://i.stack.imgur.com/lYYyu.png)
The image on the left is input matrix where I encoded into alpha channel (highest 8 bits) if the pixel is reference point, computed or yet undefined. The image on the right is after applying the bleeding 100 times. The bleed is simple just take any non reference point and recompute it as average of all usable pixels around and itself (ignoring any undefined colors).
Here the **C++** code you can ignore the **GDI** stuff for rendering (beware my gradient map has `x` coordinate first you got `y` !)
```cpp
//---------------------------------------------------------------------------
const int mxs=7,mys=7,msz=16; // gradient resolution x,y and square size for render
DWORD map[mxs][mys]; // gradient matrix ... undefined color is >= 0xFF000000
// 0x00?????? - reference color
// 0xFF?????? - uncomputed color
// 0xFE?????? - bleeded color
//---------------------------------------------------------------------------
void map_clear() // set all pixels as uncomputed (white with alpha=255)
{
int x,y;
for (x=0;x<mxs;x++)
for (y=0;y<mys;y++)
map[x][y]=0xFFFFFFFF;
}
void map_bleed() // bleed computed colors
{
int x,y,r,g,b,n;
DWORD tmp[mxs][mys],c;
for (x=0;x<mxs;x++)
for (y=0;y<mys;y++)
{
c=map[x][y];
n=0; r=0; g=0; b=0; if (DWORD(c&0xFF000000)==0) { tmp[x][y]=c; continue; } if (DWORD(c&0xFF000000)!=0xFF000000) { r+=c&255; g+=(c>>8)&255; b+=(c>>16)&255; n++; }
x++; if ((x>=0)&&(x<mxs)&&(y>=0)&&(y<mys)) c=map[x][y]; else c=0xFF000000; if (DWORD(c&0xFF000000)!=0xFF000000) { r+=c&255; g+=(c>>8)&255; b+=(c>>16)&255; n++; }
x--; y--; if ((x>=0)&&(x<mxs)&&(y>=0)&&(y<mys)) c=map[x][y]; else c=0xFF000000; if (DWORD(c&0xFF000000)!=0xFF000000) { r+=c&255; g+=(c>>8)&255; b+=(c>>16)&255; n++; }
x--; y++; if ((x>=0)&&(x<mxs)&&(y>=0)&&(y<mys)) c=map[x][y]; else c=0xFF000000; if (DWORD(c&0xFF000000)!=0xFF000000) { r+=c&255; g+=(c>>8)&255; b+=(c>>16)&255; n++; }
x++; y++; if ((x>=0)&&(x<mxs)&&(y>=0)&&(y<mys)) c=map[x][y]; else c=0xFF000000; if (DWORD(c&0xFF000000)!=0xFF000000) { r+=c&255; g+=(c>>8)&255; b+=(c>>16)&255; n++; }
y--; if (!n) { tmp[x][y]=0xFFFFFFFF; continue; }
c=((r/n)|((g/n)<<8)|((b/n)<<16))&0x00FFFFFF;
tmp[x][y]=c;
}
// copy tmp back to map
for (x=0;x<mxs;x++)
for (y=0;y<mys;y++)
map[x][y]=tmp[x][y];
}
void map_draw(TCanvas *can,int x0,int y0) // just renders actual gradient map onto canvas (can ignore this)
{
int x,y,xx,yy;
for (x=0,xx=x0;x<mxs;x++,xx+=msz)
for (y=0,yy=y0;y<mys;y++,yy+=msz)
{
can->Pen->Color=clBlack;
can->Brush->Color=map[x][y]&0x00FFFFFF;
can->Rectangle(xx,yy,xx+msz,yy+msz);
}
}
//---------------------------------------------------------------------------
```
And here the usage (your example):
```cpp
// clear backbuffer
bmp->Canvas->Brush->Color=clBlack;
bmp->Canvas->FillRect(TRect(0,0,xs,ys));
// init your gradient with reference points
map_clear();
// x y R G B
map[6][0] = (239)|(238<<8)|(185<<16);
map[1][1] = (120)|(131<<8)|(125<<16);
map[6][4] = (184)|(191<<8)|(171<<16);
map[2][6] = (150)|(168<<8)|(158<<16);
map[5][6] = (166)|(180<<8)|(166<<16);
map_draw(bmp->Canvas,msz,msz); // render result (left)
// bleed
for (int i=0;i<100;i++) map_bleed();
map_draw(bmp->Canvas,(mxs+2)*msz,msz); // render result (right)
// refresh window with backbufer (anti-flickering)
Main->Canvas->Draw(0,0,bmp);
```
Again you can ignore all the rendering stuff. The number of bleeds should be 2x bigger then pixels in diagonal so bleeding covers all the pixels. The more iterations the more saturated result I try `100` just for example and the result looks good .. so I did not play with it anymore...
**[Edit2] and here the algorithm for the second approach**
1. **add flags to interpolated matrix**
You need to know if the pixel is `reference,undefined` or `interpolated`. You can encode this to alpha channel, or use mask (separate 2D matrix).
2. **bleed/smooth matrix**
basically for each non `reference` pixel compute its new value as average of all non `undefined` pixels around (4/8 neighbors) and at its position. Do not use `undefined` pixels and store the computed value to temporary matrix (not messing up next pixels otherwise the bleeding/smoothing would shift the pixels usually diagonally). This way undefined pixels areas will shrink by 1 pixel. After whole matrix is done copy the content of temporary matrix to the original one (or swap pointers).
3. **loop #2 until result is saturated or specific count of iterations**
Number of counts should be at leas 2x bigger then the number of diagonal pixels to propagate reference pixel into whole matrix. The saturation check can be done in **#2** while copying the temp array into original one (can do abs difference between frames and if zero or near it stop). | I'm here again (a bit late, sorry,I just found the question) with a fairly short solution using `griddata` from `scipy.interpolate`. That function is meant to do precisely what you want : interpolate values on a grid from just a few points. The issues being the following : with that you won't be able to use fancy weights only the predefined interpolation method, and the holes around the border can't be directly interpolated either, so here I completed them with nearest values.
Here's the demo code :
```
# the matrix with the reference color elements
ref=np.full([7, 7, 3], 0 , dtype=np.uint8)
#Note I fill with 0 instead of 255
ref[0][6] = (239,238,185)
ref[1][1] = (120,131,125)
ref[4][6] = (184,191,171)
ref[6][2] = (150,168,158)
ref[6][5] = (166,180,166)
from scipy.interpolate import griddata
#we format the data to feed in griddata
points=np.where(ref != 0)
values=ref[points]
grid_x,grid_y,grid_z=np.mgrid[0:7,0:7,0:3]
#we compute the inperpolation
filled_grid=griddata(points, values, (grid_x, grid_y, grid_z), method='linear')
filled_grid=np.array(filled_grid,dtype=np.uint8) #we convert the float64 to uint8
#filled_grid still has holes around the border
#here i'll complete the holes with the nearest value
points=np.where(filled_grid != 0)
values=filled_grid[points]
near_grid=griddata(points, values, (grid_x, grid_y, grid_z), method='nearest')
completed_grid=(near_grid*(filled_grid == 0))+filled_grid
plt.subplot(131)
plt.imshow(ref,interpolation='nearest')
plt.subplot(132)
plt.imshow(filled_grid,interpolation='nearest')
plt.subplot(133)
plt.imshow(completed_grid,interpolation='nearest')
plt.show()
```
**Output:**
[](https://i.stack.imgur.com/ZTSrr.png) | 16,384 |
21,582,358 | Trying to find how to execute ipdb (or pdb) commands such as `disable`.
Calling the `h` command on `disable` says
>
> disable bpnumber [bpnumber ...]
> Disables the breakpoints given as a space separated list of
> bp numbers.
>
>
>
So how whould I get those bp numbers? was looking through the list of commands and couldn't get any to display the bp numbers
**[EDIT]**
The `break`, `b` and `info breakpoints` commands don't do anything, although in my module i clearly have 1 breakpoint set like this `import pdb; pdb.set_trace( )` - same for `ipdb`. Moreover `info` is not defined.
The output of `help` in `pdb`:
>
> Documented commands (type help ):
> ======================================== EOF bt cont enable jump pp run unt a c continue
> exit l q s until alias cl d h
>
> list quit step up args clear debug help n
>
> r tbreak w b commands disable ignore next
> restart u whatis break condition down j p
>
> return unalias where
>
>
> Miscellaneous help topics:
> ========================== exec pdb
>
>
> Undocumented commands:
> ====================== retval rv
>
>
>
And for `ipdb`:
>
> Documented commands (type help ):
> ======================================== EOF bt cont enable jump pdef psource run unt a c
>
> continue exit l pdoc q s until alias cl
>
> d h list pfile quit step up args clear
> debug help n pinfo r tbreak w b
>
> commands disable ignore next pinfo2 restart u whatis
> break condition down j p pp return unalias
> where
>
>
> Miscellaneous help topics:
> ========================== exec pdb
>
>
> Undocumented commands:
> ====================== retval rv
>
>
>
I have saved my module as pb3.py and am executing it within the command line like this
>
> python -m pb3
> The execution does indeed stop at the breakpoint, but within di pdb (ipdb) console, the commands indicated don't display anything - or display a NameError
>
>
>
If more info is needed, i will provide it. | 2014/02/05 | [
"https://Stackoverflow.com/questions/21582358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1037251/"
] | Use the `break` command. Don't add any line numbers and it will list all instead of adding them. | >
> info breakpoints
>
>
>
or just
>
> info b
>
>
>
lists all breakpoints. | 16,385 |
5,214,814 | I have a bunch of numbers that are tab-delimited with new line characters that looks something like this:
```
104 109 105 110 126 119 97 103\n
114 129 119 130 122 106 117 128\n
```
and so on. How can I use python to write all these numbers to a file in one column? i.e.
```
104\n
109\n
105\n
110\n
126\n
```
and so on. Fairly new to python so any help is appreciated, thanks! | 2011/03/07 | [
"https://Stackoverflow.com/questions/5214814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/647475/"
] | ```
"\n".join("104 109 105 110 126 119 97 103\n 114 129 119 130 122 106 117 128\n".split())
``` | Replace input\_filename and output\_filename with appropriate values.
```
f = open('input_filename','r')
nums = f.read().split()
f.close()
f = open('output_filename', 'w')
f.write('\n'.join(nums))
f.close()
```
[Edit] Reworked example that doesn't load the whole file into memory. It is now very similar to Chinmay Kanchi's example. But I use split where he uses replace.
```
with open('input_filename','r') as input:
with open('output_filename', 'w') as output:
for line in input:
output.write('\n'.join(line.split()))
output.write('\n')
``` | 16,386 |
40,443,759 | I am trying to count the word fizz using python. However it is giving me an error.
```
def fizz_count(x):
count =0
for item in x :
if item== "fizz":
count=count+1
return count
item= ["fizz","cat", "fizz", "Dog", "fizz"]
example= fizz_count(item)
print example
```
i checked with indentation but still it does not work. Where i am doing wrong? | 2016/11/05 | [
"https://Stackoverflow.com/questions/40443759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4186243/"
] | Your indentation seems to be incorrect, and you should not have the first `return count` (why would you return `count` as soon as you define it??).
```
def fizz_count(x):
count = 0
for item in x:
if item == "fizz":
count += 1 # equivalent to count = count + 1
return count
item = ["fizz", "cat", "fizz", "Dog", "fizz"]
example = fizz_count(item)
print example
``` | Well i am new to python world. What i learned is return statement should be some thing like this.
Example one :-
```
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices],data.iloc[test_indices]
```
Example two :-
```
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
``` | 16,393 |
8,437,964 | I was wondering if we can print like row-wise in python.
Basically I have a loop which might go on million times and I am printing out some strategic counts in that loop.. so it would be really cool if I can print like row-wise
```
print x
# currently gives
# 3
# 4
#.. and so on
```
and i am looking something like
```
print x
# 3 4
``` | 2011/12/08 | [
"https://Stackoverflow.com/questions/8437964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | In Python2:
```
data = [3, 4]
for x in data:
print x, # notice the comma at the end of the line
```
or in Python3:
```
for x in data:
print(x, end=' ')
```
prints
```
3 4
``` | Just add a `,` at the end of the item(s) you're printing.
```
print(x,)
# 3 4
```
Or in Python 2:
```
print x,
# 3 4
``` | 16,397 |
7,843,497 | I am trying to run an awk script using python, so I can process some data.
Is there any way to get an awk script to run in a python class without using the system class to invoke it as shell process? The framework where I run these python scripts does not allow the use of a subprocess call, so I am stuck either figuring out a way to convert my awk script in python, or if is possible, running the awk script in python.
Any suggestions? My awk script basically read a text file and isolate blocks of proteins that contains a specific chemical compound (the output is generated by our framework; I've add an example of how does it looks like below) and isolate them printing them out on a different file.
```
buildProtein compoundA compoundB
begin fusion
Calculate : (lots of text here on multiple lines)
(more lines)
Final result - H20: value CO2: value Compound: value
Other Compounds X: Value Y: value Z:value
[...another similar block]
```
So for example if I build a protein and I need to see if in the compounds I have CH3COOH in the final result line, if it does I have to take the whole block, starting from the command "buildProtein", until the beginning of the next block; and save it on a file; and then move to the next and see if it has again the compound that I am looking for...if it does not have it I skip to the next, until the end of the file (the file has multiple occurrence of the compound that I search for, sometimes they are contiguous while other times they are alternate with blocks that has not the compound.
Any help is more than welcome; banging my head for weeks now and after finding out this site I decided to ask for some help.
Thanks in advance for your kindness! | 2011/10/20 | [
"https://Stackoverflow.com/questions/7843497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1006198/"
] | If you can't use the *subprocess* module, the best bet is to recode your AWK script in Python. To that end, the *fileinput* module is a great transition tool with and AWK-like feel. | [Python's re module](http://docs.python.org/library/re.html) can help, or, if you can't be bothered with regular expressions and just need to do some quick field seperation, you can use [the built in str `.split()`](http://docs.python.org/library/stdtypes.html#str.split) and [`.find()`](http://docs.python.org/library/stdtypes.html#str.find) functions. | 16,407 |
40,617,324 | So I have an assignment, and For a specific section, we are supposed to import a .py file into our program."You will need to import histogram.py into your program."
Does that simply mean to create a new python file and just copy and past whatever is in the histogram.py into the file?
This part of my assignment is to create a graphical display with the contents in the .py file (which confuses me too) I was reading the chapters from the tb and it states how to create a window, but I havent seen anything about importing.. Sorry if this is a dumb question | 2016/11/15 | [
"https://Stackoverflow.com/questions/40617324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7045663/"
] | With a small tweak in your Plan model, it is indeed possible to do what you want.
First of all, you'll need to change your Plan `days` field, which is probably an `IntegerField`, to [DurationField](https://docs.djangoproject.com/en/1.10/ref/models/fields/#durationfield).
Now the catch is that we have to use [ExpressionWrapper](https://docs.djangoproject.com/en/1.8/ref/models/expressions/#django.db.models.ExpressionWrapper) to achieve the exact same result inside Postgres as the result you'd achieve in Python if you were to get the plan in a separate query.
Finally, your query should be something like:
```
from django.db.models import F, ExpressionWrapper, DateTimeField
from django.utils import timezone
Post.objects.annotate(target_date=ExpressionWrapper(timezone.now() - F('plan__days'), output_field=DateTimeField())).filter(createdAt__lte=F('target_date'))
``` | For me you must first grab the plan object.
```
plan = Plan.objects.filter(...)
```
and then reference the days
```
Post.objects.filter(createdAt__lte=datetime.now() - timedelta(days=plan.days))
``` | 16,409 |
2,262,482 | I have made my own php MVC framework and have also written its documentation. It is about 80% complete. Now basically I am looking for a way so that other developers should be able to analyze my code and possibly join hands for its further development and improvement and also they should be able to browse through the documentation (html files).
I know about google app engine, but it is currently and mainly for python. So where should i upload my php code which should be runnable and the documentation (html files) browseable? | 2010/02/14 | [
"https://Stackoverflow.com/questions/2262482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/139459/"
] | [**Github**](http://github.com/) comes to mind. It's free for Open Source projects, and supports a lot of "social coding" functions.
If you prefer Subversion Version Control, take a look at [**Google Code**](http://code.google.com/).
**HTML Hosting**
Github can even [**host static HTML pages**](http://github.com/blog/272-github-pages):
>
> GitHub Pages allow you to publish web content to a github.com subdomain named after your username. With Pages, publishing web content becomes as easy as pushing to your GitHub repository.
>
>
>
**Running PHP**
Running PHP files is not possible neither on Github, nor Google Code. I don't know any free, ad-free PHP hosting offers that are worth their salt - probably because of the huge danger of misuse. If it's an option at all, I think the best thing to do is chip in a few dollars/euros and get a small commercial hosting package somewhere. | [GitHub,](http://github.com) [SourceForge](http://sourceforge.com) and [Google Code](http://code.google.com) are all great places to make your project public and get others involved.
But these sites will only host your code, documentation, maybe provide you a forum, a mailing list and a bug tracker. They usually does not offer you a hosting for an instance of your app. (It would be costly and difficult to do that: all project have very specific runtime requirements and most of them are not even in PHP or not webapps at all.) But you could easily google for "free php web hosting", upload your site there, and then link from the project site.
(Btw. google app engine is also for Java!) | 16,412 |
48,031,283 | As an extremely simple benchmark, I executed the below simple code on PHP 7.0.19-1 and Python 3.5.3 (command line) on the same Raspberry Pi 3 model B.
Python's execution time was *horrible* in comparison to PHP's (74 seconds vs 1.4 seconds). Can anyone help me understand why the execution takes so much longer on Python? Is there something I'm doing wrong, or some optimizations/settings that would improve its performance to meet or exceed that of PHP? Or is Python just that much slower (surely not!)?
Yes I saw [this benchmark](https://blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/), which reports PHP 7 blazes past the other languages, but you'd think both would be fairly equally optimized when doing such a simple operation.
Python executes the loop about twice as fast if a string assignment is substituted for the addition. But that's still 34 seconds vs about 1.1 sec.
PHP7 code:
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000);
echo test(1000000);
echo test(10000000);
```
Results:
Time for 100000 was 0.036377191543579
100000Time for 1000000 was 0.18501400947571
1000000Time for 10000000 was 1.3939099311829
Python3 code:
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(1000000))
print(test(10000000))
print(test(100000000))
```
Results:
Time for 1000000 was 0.761641
1000000
Time for 10000000 was 7.427618000000001
10000000
Time for 100000000 was 74.320387
100000000
UPDATE: yes after @Amber pointed it out, I realize I totally PEBKAKed and the loop counters are an order of magnitude apart. Even so, the answers were really interesting so it was worth asking the question. | 2017/12/30 | [
"https://Stackoverflow.com/questions/48031283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502289/"
] | They're both within an order of magnitude of each other, when you run them with identical cycle counts rather than having the Python counts being larger by an order of magnitude:
### PHP: <https://ideone.com/3ebkai> 2.7089s
```
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; $i++)
{
$a++;
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(100000000);
```
### Python: <https://ideone.com/pRFVfk> 4.5708s
```
import time
def test(x):
t1 = time.clock()
a = 0
for i in range(x):
a += 1
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(100000000))
``` | The loop itself appears to be twice as slow in CPython 3:
<https://ideone.com/bI6jzD>
```php
<?php
function test($x)
{
$t1 = microtime(true);
$a = 0;
for($i = 0; $i < $x; ++$i)
{
//1.40s Reassign and use $a.
//$a += 1;
//1.15s Use and increment $a.
//$a++;
//0.88s Increment and use $a.
//++$a;
//0.69s Do nothing.
}
$t2 = microtime(true);
echo "Time for $x was " . ($t2 - $t1) . "\n";
return $a;
}
echo test(1e8);
```
<https://ideone.com/l35EBc>
```py
import time
def test(x):
t1 = time.clock()
#>5s
#from functools import reduce
#a = reduce(lambda a, i: a + i, (1 for i in range(x)), 0)
a = 0
for i in range(x):
#4.38s
#a += 1
#1.89s
pass
t2 = time.clock()
print("Time for {} was {}".format(x, t2 - t1))
return x
print(test(int(1e8)))
```
However, that is only the standard implementation of Python which cares more about being easy to understand than being fast. [PyPy3.5 v6.0.0](https://pypy.org/download.html#default-with-a-jit-compiler) for instance, runs that empty loop in 0.06s instead of 1.70s on my laptop. | 16,415 |
19,943,977 | I am a somewhat Python/programing newbie, and I am attempting to use a python class for the first time.
In this code I am trying to create a script to backup some files. I have 6 files in total that I want to back up regularly with this script so I thought that I would try and use the python Class to save me writing things out 6 times, and also to get practice using Classes.
In my code below I have things set up for just creating 1 instance of a class for now to test things. However, I have hit a snag. I can't seem to use the operator to assign the original filename and the back-up filename.
Is it not possible to use the operator for a filename when opening a file? Or am I doing things wrong.
```
class Back_up(object):
def __init__(self, file_name, back_up_file):
self.file_name = file_name
self.back_up_file = back_up_file
print "I %s and me %s" % (self.file_name, self.back_up_file)
with open('%s.txt', 'r') as f, open('{}.txt', 'w') as f2 % (self.file_name, self.back_up_file):
f_read = read(f)
f2.write(f_read)
first_back_up = Back_up("syn1_ready", "syn1_backup")
```
Also, line #7 is really long, any tips on how to shorten it are appreciated.
Thanks
Darren | 2013/11/13 | [
"https://Stackoverflow.com/questions/19943977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2680443/"
] | If you just want your files backed up, may I suggest using `shutil.copy()`?
As for your program:
If you want to substitute in a string to build a filename, you can do it. But your code doesn't do it.
You have this:
```
with open('%s.txt', 'r') as f, open('{}.txt', 'w') as f2 % (self.file_name, self.back_up_file):
```
Try this instead:
```
src = "%s.txt" % self.file_name
dest = "{}.txt".format(self.back_up_file)
with open(src, "rb") as f, open(dest, "wb") as f2:
# copying code goes here
```
The `%` operator operates on a string. The `.format()` method call is a method on a string. Either way, you need to do the operation with the string; you can't have two `with` statements and then try to use these operators at the end of the `with` statements line.
You don't have to use explicit temp variables like I show here, but it's a good way to make the code easy to read, while greatly shortening the length of the `with` statements line.
Your code to copy the files will read all the file data into memory at one time. That will be fine for a small file. For a large file, you should use a loop that calls `.read(CHUNK_SIZE)` where `CHUNK_SIZE` is a maximum amount to read in a single chunk. That way if you ever back up a really large file on a computer with limited memory, it will simply work rather than filling the computer's memory and making the computer start swapping to disk. | Try simplicity :)
Your line 7 is not going to parse. Split it using intermediate variables:
```
source_fname = "%s.txt" % self.file_name
target_fname = "%s.txt" % self.back_up_file
with open(source_fname) as source, open(target_fname) as target:
# do your thing
```
Also, try hard avoiding inconsistent and overly generic attribute names, like `file_name`, when you have *two* files to operate on.
Your copy routine is not going to be very efficient, too. It tries to read the entire file into memory, then write it. If I were you I'd call `rsync` or something similar via `popen()` and feed it with proper list of files to operate on. Most probably I'd use `bash` for that, though Python may be fine, too. | 16,424 |
12,548,572 | I'm trying to make a simple derived class based on `str`, with the addition of an instance variable, `flag`. For reasons I do not understand, I get an error if I try to pass the flag to the constructor:
```
>>> class Strvalue(str):
def __init__(self, content, flag=None):
str.__init__(self, content)
self.flag = flag
>>> Strvalue("No problem")
'No problem'
>>> Strvalue("Problem", flag=None)
Traceback (most recent call last):
File "<pyshell#113>", line 1, in <module>
Strvalue("Problem", flag=None)
TypeError: str() takes at most 1 argument (2 given)
```
I've checked that in the succesful calls, the `Strvalue` constructor really does get called-- I haven't mistyped `__init__` or something of that sort. So what is going on?
**Edit:** According to [this question](https://stackoverflow.com/questions/12231894/overloading-str-in-python) (and @Martijn's answer), the problem is avoided by overriding `__new__` as well. The question was *why* this was happening. | 2012/09/22 | [
"https://Stackoverflow.com/questions/12548572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/699305/"
] | You need to use `__new__` instead of `__init__` when subclassing `str`, see [basic customization](http://docs.python.org/reference/datamodel.html#basic-customization).
```
>>> class Strvalue(str):
... def __new__(cls, content, flag=None):
... inst = str.__new__(cls, content)
... inst.flag = flag
... return inst
...
>>> Strvalue('foo', True)
'foo'
>>> foo = Strvalue('foo', True)
>>> foo
'foo'
>>> foo.flag
True
```
Your code doesn't override `str.__new__`, so the original `str.__new__` constructor is called with your two arguments, and it only accepts one.
`str` objects are immutable, they construct a new instance in `__new__`, which then cannot be changed anymore; by the time `__init__` is called, `self` is an immutable object, so `__init__` for `str` doesn't make sense. You can still *also* define an `__init__` method, but since you already have `__new__`, there is really no need to divide the work up across two methods. | You need to override `__new__` instead of (or as well as) `__init__`. | 16,425 |
9,052,588 | I am new to python and new to programming. I have question how can I use variables from method1 in method too.
Example
```
class abc(self):
def method1 (self,v1):
v1 = a+b
return v1 # want to use this value in method 2
def method2(self)
v2 * v1 = v3
```
Thanks | 2012/01/29 | [
"https://Stackoverflow.com/questions/9052588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172372/"
] | let `method2` "know" it is waiting for an argument:
```
def method2(self,v1): #note v1 was added here
v2 * v1 = v3 #what does that suppse to do? [look my "third note"]
```
also note: you also need to pass `v2` to `method2()`
third note: what exactly are you trying to do in `v2 * v1 = v3` ? maybe you meant `v3 = v1 * v2` ? | To use a value throughout a class, you need to bind that value to an attribute of its instance.
For example:
```
class Abc(object): # put object here, not self
def method1(self):
self.v1 = 3 + 7 # now v1 is an attribute
def method2(self)
return 4 * self.v1
a = Abc()
a.method1()
a.v1 # -> 10
a.method2() # -> 40
```
But usually is not a good practice to have an attribute rising after the call of some method, so you should also provide a default value for v1. Placing it in `__init__`:
```
class Abc(object):
def __init__(self):
self.v1 = 0
def method1(self):
self.v1 = 3 + 7
def method2(self)
return 4 * self.v1
a = Abc()
a.v1 # -> 0
a.method1()
a.v1 # -> 10
a.method2() # -> 40
``` | 16,426 |
63,894,460 | An example is something like [Desmos](https://www.desmos.com/calculator) (but as a desktop application). The function is given by the user as text, so it cannot be written at compile-time. Furthermore, the function may be reused thousands of times before it changes. However, a true example would be something where the function could change more frequently than desmos, and its values could be used more as well.
I see four methods for writing this code:
1. Parse the user-defined function with a grammar every single time the function is called. (Slow with many function calls)
2. Construct the syntax tree of the math expression so that the nodes contain function pointers to the appropriate math operations, allowing the program to skip parsing the text every single time the function is called. This should be faster than #1 for many function calls, but it still involves function pointers and a tree, which adds indirection and isn't as fast as if the functions were pre-compiled (and optimized).
3. Use something like [The Tiny C Compiler](https://bellard.org/tcc/) as the backend for dynamic code generation with libtcc to quickly compile the user's function after translating it into C code, and then use it in the main program. Since this compiler can compile something like 10,000 very simple programs on my machine per second, there should be next to no delay with parsing new functions. Furthermore, this compiler generates machine code for the function, so there are no pointers or trees involved, and optimization is done by TinyCC. This method is more daunting for an intermediate programmer like me.
4. Write my own tiny compiler (not of C, but tailored specifically to my problem) to generate machine code almost instantly. This is probably 20x more work than #3, and doesn't do much in the way of future improvements (adding a summation operation generator would require me to write more assembly code for that).
Is there any easier, yet equally or more efficient method than #3, while staying in the realm of C++? I'm not experienced enough with lambdas and templates and the standard library to tell for sure if there isn't some abstract way to write this code easily and efficiently.
Even a method that is faster than #2 but slower than #3, and requires no dynamic code generation would be an improvement.
This is more of an intellectual curiosity than a real-world problem, which is why I am concerned so much with performance, and is why I wouldn't use someone else's math parsing library. It's also why I wouldn't consider using javascript or python interpreter which can interpret this kind of thing on-the-fly. | 2020/09/15 | [
"https://Stackoverflow.com/questions/63894460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10868964/"
] | The paper [“A Killer Adversary for Quicksort”](https://www.cs.dartmouth.edu/%7Edoug/mdmspe.pdf) gives an algorithm that, for any quicksort implementation that satisfies certain “reasonable” requirements and runs deterministically, produces arbitrarily long input sequences that cause the algorithm to run in quadratic time. So while you’re correct that using the middle value as the pivot will prevent your algorithm from running in quadratic time on an already-sorted array, the fact that the pivots are picked deterministically means that there will be some input to the algorithm that causes the performance to degrade, and the linked paper can be used to construct such a pathological input. | the worst case of quick sort is when each time the pivot is chosen it's the max or min number/value in the array.
in this case it will run at O(n^2) for the regular version of Quick sort.
However, there's a version of Quick sort that uses the partition algorithm in order to choose better pivots. In this version of Quick sort the worst case is O(nlogn). | 16,430 |
29,124,435 | So I'm having this issue where I'm trying to convert something such as
```
[0]['question']: "what is 2+2",
[0]['answers'][0]: "21",
[0]['answers'][1]: "312",
[0]['answers'][2]: "4"
```
into an actual formated json object like so
```
[
{
'question': 'what is 2+2',
'answers': ["21", "312", "4"]
}
]
```
but I'm not too sure what approach to take to make this work.
I'm planning on parsing the key-values in the first snipped through javascript and decode it into a json object like in the second snippet through python.
Have you got any idea on how to do this? I'd accept an example in pretty much any language as it shouldn't be much of a worry to read the concept behind them. | 2015/03/18 | [
"https://Stackoverflow.com/questions/29124435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3254198/"
] | Something like this. You need to handle input errors.
A function to take a data structure and add stuff to it based on input
```
function add(old, input) {
var index = input[0];
var section = input[1];
if (old[index] == undefined) {
old[index] = {}
};
if (section == "question") {
old[index]['question'] = input[2];
}
if (section == "answers") {
var answerIndex = input[2];
var answerValue = input[3];
if (old[index]["answers"] == undefined) {
old[index]["answers"] = []
};
old[index]["answers"][answerIndex] = answerValue
}
return old;
}
```
Some inputs:
```
var inputs = [[0, "question", "what"],
[0, "answers", 0, "21"],
[0, "answers", 1, "22"]];
var result = {};
inputs.forEach(function(input) { add(result, input) })
JSON.stringify(result)
"{"0":{"question":"what","answers":["21","22"]}}"
``` | I think you should format the json as follow:
```
{
"questions": [
{
"question": "What is 2+2",
"possible_answers": [
{
"value": 1,
"correct": false
},
{
"value": 4,
"correct": true
},
{
"value": 3,
"correct": false
}
]
},
{
"question": "What is 5+5",
"possible_answers": [
{
"value": 6,
"correct": false
},
{
"value": 7,
"correct": false
},
{
"value": 10,
"correct": true
}
]
}
]
}
```
for doing that, you can do it:
```
var result = {}
result.questions = []; //the questions collection
var question = {}; //the first question object
question.question = "what is 2 + 2";
question.possible_answers = [];
var answer1 = {};
answer1.value = 1;
answer1.correct = false;
var answer2 = {};
answer2.value = 2;
answer2.correct = true;
var answer3 = {};
answer3.value = 3;
answer3.correct = false;
question.possible_answers.push(answer1);
question.possible_answers.push(answer2);
question.possible_answers.push(answer3);
result.questions.push(question); //add the first question with its possible answer to the result.
```
You can help yourself using [jsonlint](http://jsonlint.com/) for formatting the json and then try to set your javascript object to get the json you want.
Hope helps you! | 16,431 |
55,994,238 | I have a code to scrape hotels reviews in python (from yelp).
The code scrape the first page of reviews perfectly, but, I am struggling to scrape the next pages.
The While loop don't work, data scraped in each loop is the same (data of the first page)
```
import requests
from lxml import html
from bs4 import BeautifulSoup
url = 'https://www.yelp.com/biz/fairmont-san-francisco-san-francisco?sort_by=rating_desc'
while url:
r = requests.get(url)
t = html.fromstring(r.content)
for i in t.xpath("//div[@class='review-list']/ul/li[position()>1]"):
rev = i.xpath('.//p[@lang="en"]/text()')[0].strip()
date = i.xpath('.//span[@class="rating-qualifier"]/text()')[0].strip()
stars = i.xpath('.//img[@class="offscreen"]/@alt')[0].strip().split(' ')[0]
print(rev)
print(date)
print(stars)
next_page = soup.find('a',{'class':'next'})
if next_page:
url = next_page['href']
else:
url = None
sleep(5)
```
here **sleep(5)** before request new url is to avoid limitation set by the website. | 2019/05/05 | [
"https://Stackoverflow.com/questions/55994238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5539782/"
] | The following is one of the ways you can get your job done. I've slightly modified your existing logic of traversing next pages. Give it a shot.
```
import requests
from lxml.html import fromstring
url = 'https://www.yelp.com/biz/fairmont-san-francisco-san-francisco?sort_by=rating_desc'
while True:
res = requests.get(url)
root = fromstring(res.text)
for item in root.xpath("//div[@class='review-list']/ul/li[position()>1]"):
rev = item.xpath('.//p[@lang="en"]/text()')[0].strip()
print(rev)
next_page = root.cssselect(".pagination-links a.next")
if not len(next_page): break
url = next_page[0].get('href')
``` | You just need to be smart about looking at the URL. Most websites follow a scheme with their page progression. In this case, it seems like it changes to the following format for the next pages:
```
https://www.yelp.com/biz/fairmont-san-francisco-san-francisco?start=20&sort_by=rating_desc
```
Where the start=20 is where we should be looking. Rewrite the url at the end of the while loop. Once it gets to the end of the page, it should add 20 to that number, and then put it in the string. Like so:
```py
pagenum = 0
while url
pagenum += 20
url = "https://www.yelp.com/biz/fairmont-san-francisco-san-francisco?start=" + pagenum + "&sort_by=rating_desc"
```
And then to terminate the program in a try/except catch, where the url wouldn't load because there' no more pages. | 16,432 |
38,810,424 | I was running TensorFlow and I happen to have something yielding a NaN. I'd like to know what it is but I do not know how to do this. The main issue is that in a "normal" procedural program I would just write a print statement just before the operation is executed. The issue with TensorFlow is that I cannot do that because I first declare (or define) the graph, so adding print statements to the graph definition does not help. Are there any rules, advice, heuristics, anything to track down what might be causing the NaN?
---
In this case I know more precisely what line to look at because I have the following:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
when this line is present I have it that it returns NaN as declared by my summary writers. Why is this? Is there a way to at least explore what value Z has after its being square rooted?
---
For the specific example I posted, I tried `tf.Print(0,Z)` but with no success it printed nothing. As in:
```
Delta_tilde = 2.0*tf.matmul(x,W) - tf.add(WW, XX) #note this quantity should always be positive because its pair-wise euclidian distance
Z = tf.sqrt(Delta_tilde)
tf.Print(0,[Z]) # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
A = tf.exp(Z)
```
I actually don't understand what `tf.Print` is suppose to do. Why does it need two arguments? If I want to print 1 tensor why would I need to pass 2? Seems bizarre to me.
---
I was looking at the function [tf.add\_check\_numerics\_ops()](https://www.tensorflow.org/versions/r0.9/api_docs/python/control_flow_ops.html#add_check_numerics_ops) but it doesn't say how to use it (plus the docs seem to not be super helpful). Does anyone know how to use this?
---
Since I've had comments addressing the data might be bad, I am using standard MNIST. However, I am computing a quantity that is positive (pair-wise eucledian distance) and then square rooting it. Thus, I wouldn't see how the data specifically would be an issue. | 2016/08/07 | [
"https://Stackoverflow.com/questions/38810424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601580/"
] | There are a couple of reasons WHY you can get a NaN-result, often it is because of too high a learning rate but plenty other reasons are possible like for example corrupt data in your input-queue or a log of 0 calculation.
Anyhow, debugging with a print as you describe cannot be done by a simple print (as this would result only in the printing of the tensor-information inside the graph and not print any actual values).
However, if you use tf.print as an op in bulding the graph ([tf.print](https://www.tensorflow.org/versions/r0.10/api_docs/python/control_flow_ops.html#Print)) then when the graph gets executed you will get the actual values printed (and it IS a good exercise to watch these values to debug and understand the behavior of your net).
However, you are using the print-statement not entirely in the correct manner. This is an op, so you need to pass it a tensor and request a result-tensor that you need to work with later on in the executing graph. Otherwise the op is not going to be executed and no printing occurs. Try this:
```
Z = tf.sqrt(Delta_tilde)
Z = tf.Print(Z,[Z], message="my Z-values:") # <-------- TF PRINT STATMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
``` | I used to find it's much tougher to pinpoint where the nans and infs may occur than to fix the bug. As a complementary to @scai's answer, I'd like to add some points here:
The debug module, you can imported by:
```
from tensorflow.python import debug as tf_debug
```
is much better than any print or assert.
You can just add the debug function by changing your wrapper you session by:
```
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)
```
And you'll prompt an command line interface, then you enter:
`run -f has_inf_or_nan` and `lt -f has_inf_or_nan` to find where the nans or infs are. The first one is the first place where the catastrophe occurs. By the variable name you can trace the origin in your code.
Reference: <https://developers.googleblog.com/2017/02/debug-tensorflow-models-with-tfdbg.html> | 16,433 |
29,298,096 | Is there any code to tap and hold on Appium? i use python , is there any command to support it ?
For double click i used click on element twice, for tap and hold i am not getting any solution | 2015/03/27 | [
"https://Stackoverflow.com/questions/29298096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4310652/"
] | Need to pass driver
```
TouchAction action = new TouchAction(driver);
action.longPress(webElement).release().perform();
``` | Here is the update for `Java Client: 5.0.4`
```
WebElement recBtn = driver.findElement(MobileBy.id("img_button"));
new TouchAction((MobileDriver) driver).press(recBtn).waitAction(Duration.ofMillis(10000)).release().perform();
``` | 16,443 |
31,321,906 | I have a string like this in Java:
`"\xd0\xb5\xd0\xbd\xd0\xb4\xd0\xbf\xd0\xbe\xd0\xb9\xd0\xbd\xd1\x82"`
How can I convert it to a human readable equivalent?
Note:
actually it is `GWT` and this string is coming from python as part of a JSON data.
The `JSONParser` transforms it to something that is totally irrelevant, so I want to be able to convert the string prior to parsing.
The expected, so called by me "human readable", should be "ендойнт" (<https://mothereff.in/utf-8#%D0%B5%D0%BD%D0%B4%D0%BF%D0%BE%D0%B9%D0%BD%D1%82>) | 2015/07/09 | [
"https://Stackoverflow.com/questions/31321906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2961166/"
] | It seems indeed that there's no endpoint voor search exists, but I think you use a simple alternative:
Use an empty "fields" array. And count the results of your query. If == 0: false. If > 0: true
```
GET /giata_index/giata_type/_search
{
"fields": [],
"query": {
"bool": {
"must": [
{
"term": {
"status": 2
}
},
{
"term": {
"ids": "26744"
}
}
]
}
}
}
```
An other alternative is to use \_count : <https://www.elastic.co/guide/en/elasticsearch/reference/1.6/search-count.html> | It should be possible with the [latest 2.x version](https://github.com/elastic/elasticsearch-php/blob/master/src/Elasticsearch/Endpoints/SearchExists.php).
Code sample could be something like this:
```
$clientBuilder = Elasticsearch\ClientBuilder::create();
// Additional client options, hosts, etc.
$client = $clientBuilder->build();
$index = 'your_index';
$type = 'your_type';
$params = [
'index' => $index,
'type' => $type,
'body' => [
'query' => [
'bool' => [
'must' => [
[
'term' => [
"status" => 2
]
],
[
'term' => [
'ids' => "26744"
]
]
]
]
]
];
try {
$client->searchExists($params);
} catch (Exception $e) {
// Not found. You might want to return FALSE if wrapped in a function.
// return FALSE;
}
// Found.
```
It is worth noting that if search is not wrapped in try/catch block it can break execution and throw an exception (status code 4xx if not found).
Also, it can not be used effectively in [future mode](https://www.elastic.co/guide/en/elasticsearch/client/php-api/current/_future_mode.html#_caveats_to_future_mode). | 16,453 |
37,293,366 | I am trying to list the instances on tag values of different tag keys
For eg> one tag key - Environment, other tag key - Role.
My code is given below :
```
import argparse
import boto3
AWS_ACCESS_KEY_ID = '<Access Key>'
AWS_SECRET_ACCESS_KEY = '<Secret Key>'
def get_ec2_instances(Env,Role):
ec2 = boto3.client("ec2", region)
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
for reservation in reservations["Reservations"] :
for instance in reservation["Instances"]:
print "%s" % (instance.tags['Name'])
if __name__ == '__main__':
regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1',
'ap-southeast-1','ap-southeast-2','ap-northeast-1']
parser = argparse.ArgumentParser()
parser.add_argument('Env', default="environment", help='value for tag:environment');
parser.add_argument('Role', default="role", help='value for tag:role');
args = parser.parse_args()
for region in regions: get_ec2_instances(args.Env, args.Role)
```
After running this script : python script.py arg1 arg2
I am getting following error
```
Traceback (most recent call last):
File "script.py", line 27, in <module>
for region in regions: get_ec2_instances(args.Env, args.Role)
File "script.py", line 10, in get_ec2_instances
reservations = ec2.describe_instances(Filters={"tag:environment" : Env, "tag:role" : Role})
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 258, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 524, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 577, in _convert_to_request_dict
api_params, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/validate.py", line 270, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter Filters, value: {'tag:role': 'arg1', 'tag:environment': 'arg2'}, type: <type 'dict'>, valid types: <type 'list'>, <type 'tuple'>
``` | 2016/05/18 | [
"https://Stackoverflow.com/questions/37293366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6349605/"
] | This looks familiar, did I modify this for somebody somewhere ;-) . Actually the code I wrote is in rush and not tested properly (And I don't bother to amend the % string formating and replace it with str.format() ) . In fact,using Filters parameter is not properly documented in AWS.
Please refer to Russell Ballestrini blog [Filtering AWS resources with Boto3](http://russell.ballestrini.net/filtering-aws-resources-with-boto3/) to learn more about correct boto Filters method.
1. Filters accept list value, and info inside the tag should be dict. thus [{}]
2. Boto3 documentation is pretty ambiguous on how to use specify the tag name. It is confusing without examples when they say you may use tag:key. So many people will just do `[{"tag:keyname","Values": [""] }]` and it doesn't work. (Actually the origin code I assume the developer know how the filters works, so I just amend the structure only).
3. Actually, You MUST explicitly specify "Name" and "Values" pair. So the correct way to specify tag name is `[{"Name" :"tag:keyname", "Values":[""] }]`. It is tricky.
So the correct way of formatting a filters if you want to use for your example
```
filters = [{'Name':'tag:environment', 'Values':[Env]},
{'Name':'tag:role', 'Values':[Role]}
]
```
(Update)
And to make sure argparse take up string value, you just enforce the argument to take string values
```
parser.add_argument('Env', type=str, default="environment",
help='value for tag:environment');
parser.add_argument('Role', type=str,default="role",
help='value for tag:role');
``` | Fix the Env and Role, as I am not sure mine or mootmoot's answer will work because the Array for Values [expects](http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Client.describe_instances) strings.
```
reservervations = ec2.describe_instances(
Filters=[
{'Name': 'tag:environment', 'Values': ['%s'], 'Name': 'tag:role', 'Values': ['%s']} % (Env, Role),
]]
).get(
'Reservations', []
)
``` | 16,454 |
51,710,083 | * I am writing unit tests for a Python library using **pytest**
* I need to **specify a directory** for test files to avoid automatic test file discovery, because there is a large sub-directory structure, including many files in the library containing "\_test" or "test\_" in the name but are not intended for pytest
* Some files in the library use **argparse** for specifying command-line options
* The problem is that specifying the directory for pytest as a command-line argument seems to interfere with using command line options for argparse
To give an example, I have a file in the root directory called `script_with_args.py` as follows:
```
import argparse
def parse_args():
parser = argparse.ArgumentParser(description="description")
parser.add_argument("--a", type=int, default=3)
parser.add_argument("--b", type=int, default=5)
return parser.parse_args()
```
I also have a folder called `tests` in the root directory, containing a test-file called `test_file.py`:
```
import script_with_args
def test_script_func():
args = script_with_args.parse_args()
assert args.a == 3
```
If I call `python -m pytest` from the command line, the test passes fine. If I specify the test directory from the command line with `python -m pytest tests`, the following error is returned:
```
============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: C:\Users\Jake\CBAS\pytest-tests, inifile:
plugins: remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
collected 1 item
tests\test_file.py F [100%]
================================== FAILURES ===================================
______________________________ test_script_func _______________________________
def test_script_func():
# a = 1
# b = 2
> args = script_with_args.parse_args()
tests\test_file.py:13:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
script_with_args.py:9: in parse_args
return parser.parse_args()
..\..\Anaconda3\lib\argparse.py:1733: in parse_args
self.error(msg % ' '.join(argv))
..\..\Anaconda3\lib\argparse.py:2389: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = ArgumentParser(prog='pytest.py', usage=None, description='description', f
ormatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_h
elp=True)
status = 2, message = 'pytest.py: error: unrecognized arguments: tests\n'
def exit(self, status=0, message=None):
if message:
self._print_message(message, _sys.stderr)
> _sys.exit(status)
E SystemExit: 2
..\..\Anaconda3\lib\argparse.py:2376: SystemExit
---------------------------- Captured stderr call -----------------------------
usage: pytest.py [-h] [--a A] [--b B]
pytest.py: error: unrecognized arguments: tests
========================== 1 failed in 0.19 seconds ===========================
```
My question is, how do I specify the test file directory for pytest, without interfering with the command line options for argparse? | 2018/08/06 | [
"https://Stackoverflow.com/questions/51710083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8477566/"
] | `parse_args()` without argument reads the `sys.argv[1:]` list. That will include the 'tests' string.
`pytests` also uses that `sys.argv[1:]` with its own parser.
One way to make your parser testable is provide an optional `argv`:
```
def parse_args(argv=None):
parser = argparse.ArgumentParser(description="description")
parser.add_argument("--a", type=int, default=3)
parser.add_argument("--b", type=int, default=5)
return parser.parse_args(argv)
```
Then you can test it with:
```
parse_args(['-a', '4'])
```
and use it in for real with
```
parse_args()
```
Changing the `sys.argv` is also good way. But if you are going to the work of putting the parser in a function like this, you might as well give it this added flexibility. | To add to hpaulj's answer, you can also use a library like [unittest.mock](https://docs.python.org/3/library/unittest.mock.html) to temporarily mask the value of `sys.argv`. That way your parse args command will run using the "mocked" argv but the *actual* `sys.argv` remains unchanged.
When your tests call `parse_args()` they could do it like this:
```
with unittest.mock.patch('sys.argv', ['--a', '1', '--b', 2]):
parse_args()
``` | 16,459 |
59,704,959 | I'm trying to count the number of dots in an email address using Python + Pandas.
The first record is "addison.shepherd@gmail.com". It should count 2 dots. Instead, it returns 26, the length of the string.
```
import pandas as pd
url = "http://profalibania.com.br/python/EmailsDoctors.xlsx"
docs = pd.read_excel(url)
docs["PosAt"] = docs["Email"].str.count('.')
```
Can anybody help me? Thanks in advance! | 2020/01/12 | [
"https://Stackoverflow.com/questions/59704959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5518389/"
] | [`pandas.Series.str.count`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.count.html) takes a regex expression as input. To match a literal period (`.`), you must escape it:
```
docs["Email"].str.count('\.')
```
Just specifying `.` will use the regex meaning of the period (matching any single character) | The [**`.str.count(..)`** method [pandas-doc]](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.count.html) works with a [*regular expression* [wiki]](https://en.wikipedia.org/wiki/Regular_expression). This is specified in the documentation:
>
> This function is used to count the number of times a particular **regex pattern** is repeated in each of the string elements of the `Series`.
>
>
>
For a regex, the dot means "all characters except new line". You can use a *character set* (by surrounding it by square brackets):
```
docs["PosAt"] = docs["Email"].str.count(**'[.]'**)
``` | 16,462 |
57,718,512 | I'm trying to try using this model to train on rock, paper, scissor pictures. However, it was trained on 1800 pictures and only has an accuracy of 30-40%. I was then trying to use TensorBoard to see whats going on, but the error in the title appears.
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
model = Sequential()
model.add(Conv2D(256, kernel_size=(4, 4),
activation='relu',
input_shape=(64,64,3)))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
''' here it instantiates the tensorboard '''
tensorboard = TensorBoard(log_dir="C:/Users/bamla/Desktop/RPS project/Logs")
model.compile(loss="sparse_categorical_crossentropy",
optimizer="SGD",
metrics=['accuracy'])
model.summary()
''' Here its fitting the model '''
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
```
This outputs:
```
Traceback (most recent call last):
File "c:/Users/bamla/Desktop/RPS project/Testing.py", line 82, in <module>
model.fit(x_train, y_train, batch_size=50, epochs = 3, callbacks=
[tensorboard])
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training.py", line 1178, in fit
validation_freq=validation_freq)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\engine\training_arrays.py", line 125, in fit_loop
callbacks.set_model(callback_model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\keras\callbacks.py", line 68, in set_model
callback.set_model(model)
File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-
packages\tensorflow\python\keras\callbacks.py", line 1509, in set_model
if not model.run_eagerly:
AttributeError: 'Sequential' object has no attribute 'run_eagerly'
```
Also, if you have any tips on how to improve the accuracy it would be appreciated! | 2019/08/29 | [
"https://Stackoverflow.com/questions/57718512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7858253/"
] | The problem is here:
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
```
Do not mix `keras` and `tf.keras` imports, these are **not compatible with each other**, and produce weird errors as the ones you are seeing. | I changed `from tensorflow.python.keras.callbacks import TensorBoard`
to `from keras.callbacks import TensorBoard` and it worked for me. | 16,464 |
51,664,292 | I'm getting the error below when I'm parsing the xml from the URL in the code. I won't post the XML because it's huge. The link is in the code below.
ERROR:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-70-77e5e1b79ccc> in <module>()
11
12 for child in root.iter('Materia'):
---> 13 if not child.find('EmentaMateria').text is None:
14 ementa = child.find('EmentaMateria').text
15
AttributeError: 'NoneType' object has no attribute 'text'
```
MY CODE:
```
url = 'http://legis.senado.leg.br/dadosabertos/senador/4988/autorias'
import requests
from xml.etree import ElementTree
response = requests.get(url, stream=True)
response.raw.decode_content = True
tree = ElementTree.parse(response.raw)
root = tree.getroot()
for child in root.iter('Materia'):
if child.find('EmentaMateria').text is not None:
ementa = child.find('EmentaMateria').text
for child_IdMateria in child.findall('IdentificacaoMateria'):
anoMateria = child_IdMateria.find('AnoMateria').text
materia = child_IdMateria.find('NumeroMateria').text
siglaMateria = child_IdMateria.find('SiglaSubtipoMateria').text
print('Ano = '+anoMateria+' | Numero Materia = '+materia+' | tipo = '+siglaMateria+' | '+ementa)
```
What I'm overlooking here?
Thanks | 2018/08/03 | [
"https://Stackoverflow.com/questions/51664292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1706665/"
] | Instead of checking if `child.find('EmentaMateria').text` is not `None`, you should make sure that `child.find('EmentaMateria')` is not `None` first.
Also, you should store the returning value of `child.find('EmentaMateria')` to avoid calling it twice.
Lastly, you should assign `ementa` a default value if `child.find('EmentaMateria')` is `None`; otherwise your `print` function below will be referencing an un-initialized variable.
Change:
```
if child.find('EmentaMateria').text is not None:
ementa = child.find('EmentaMateria').text
```
to:
```
node = child.find('EmentaMateria')
if node is not None:
ementa = node.text
else:
ementa = None
```
Alternatively, you can use the built-in function `getattr` to do the same without a temporary variable:
```
ementa = getattr(child.find('EmentaMateria'), 'text', None)
``` | If you are using the code to parse an xml file, open the xml file with a text editor and inspect the tags. In my case there were some rogue tags at the end. Once i removed those, the code worked as expected. | 16,469 |
12,164,692 | So I am new in this field and am not sure how to do this!!But basically here is what i did.
I sshed to somehost.
```
ssh hostname
username: foo
password: bar
```
In one of the directories, there is a huge csv file.. abc.csv
Now, i dont want to copy that file to my local.. but read it from there.
When I asked the folks around, they said that I can write a unix script and get the data in my python program from tehre.
I am not sure what that means?
Any clues?
Also, i am using windows env.
Thanks | 2012/08/28 | [
"https://Stackoverflow.com/questions/12164692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | So, here are your options:
**1.** Declare your **base class** as `abstract` and some methods as well
This approach has two good points: you will be free to implement common methods at the base class (that is, not all of them need to be `abstract`) while any abstract method will **must be** overridden at derived classes. There is one counter point (that you may be aware of): you can't instantiate it. That is, you can't do something like:
```
Base obj = new Base();
```
However, you stil will be able to do this:
```
Base obj = new Child();
```
**2.** Use **interfaces**
You may declare some interfaces to force your classes to implement some methods. However, the semantics between inheritance and interface is quite different. You must decide which is best for your.
IMHO, you would be fine with the first option. | You need to specify an abstract method in Parent:
```
public abstract class Parent
{
public void DoSomething()
{
// Do something here...
}
public abstract void ForceChildToDoSomething();
}
```
This forces the child to implement it:
```
public class Child : Parent
{
public override void ForceChildToDoSomething()
{
// Do something...
}
}
```
You will, however, now have an abstract Parent. So if you want to use the functionality in Parent, you'll need to do something like this:
```
Parent parent = new Child();
parent.DoSomething();
parent.ForceChildToDoSomething();
``` | 16,470 |
28,454,359 | I need to process a large text file containing information on scientific publications, exported from the ScienceDirect search page. I want to store the data in an array of arrays, so that each paper is an array, and all papers are stored in a larger array.
The good part is that each line corresponds to the value I want to put in the array, and that there is an empty line between papers. The problem is that each paper has a different number of lines associated with it, ranging from 2 to 6. An example of the data would be:
```
[Authors, title, journal, date]
[(digital object identifier)]
[(link to ScienceDirect website)]
[Abstract: Abstract]
[It has been shown ...]
[Authors, title, journal, date]
[(digital object identifier)]
[(link to ScienceDirect website)]
[Abstract: Abstract]
[It has been shown ...]
[Keywords]
[Authors, title, journal, date]
[(digital object identifier)]
```
and so on. The desired data structure would be ArrayAllPapers [ Paper-1 , Paper-2 , ... ,
Paper-n ], where each paper is an array Paper-1 [ author-line , doi-line , etc ]
I am able to read the file into python line by line as an array, but then run up against the problem of slicing the list based on a list item (in this case '\n'). I have found solutions to this problem for datasets with equal line spacing for objects, most of them written for lists, but none that work for unequal distribution. Perhaps I need to write to the text file first to fill in 'missing' rows to create an equal distribution?
I am still learning to work with Python (some experience with MatLab), so please excuse me if there is an obvious solution for this. I have tried finding a solution but have come up empty.
Any help would be highly appreciated!
For reference, the code I use now to enter the text file into an array:
```
import re, numpy
with open("test-abstracts-short.txt", "r") as text:
array = []
for line in text:
array.append(line)
``` | 2015/02/11 | [
"https://Stackoverflow.com/questions/28454359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4554385/"
] | Since you want to skip blank lines, the easiest thing to do is to check if a line is blank.
```
x = []
with open('my_file.txt', 'r') as f:
temp_list = []
for line in f:
if line.strip(): #line is not blank
temp_list.append(line)
else: #line is blank, i.e., it contains only newlines and/or whitespace
if temp_list: #check if temp_list contains any items
x.append(temp_list)
temp_list = []
``` | If first lines are mandatory, you can try to parse them and for each article create structure like this `{'author': 'Name', 'digital_object_identifier': 'Value'}` and so on.
Than you can try to parse most common keywords and append them as fields. So your article woild be like this:
`{'author': 'Name', 'digital_object_identifier': 'Value', 'keyword1': 'Value', 'keyword2': 'Value', 'keyword3': 'Value'}`.
Than you can add all unparsed keywords in some specific field (to do not lose data):
`{'author': 'Name', 'digital_object_identifier': 'Value', 'keyword1': 'Value', 'keyword2': 'Value', 'keyword3': 'Value', 'other_keys': {'key': 'value'}}`.
So, in other words, you can split your document om mandatory and non-mandatory fields. | 16,475 |
44,851,342 | How to convert a python dictionary `d = {1:10, 2:20, 3:30, 4:30}` to `{10: [1], 20: [2], 30: [3, 4]}`?
I need to reverse a dictionary the values should become the keys of another dictionary and the values should be key in a list i.e. also in the sorted matter. | 2017/06/30 | [
"https://Stackoverflow.com/questions/44851342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238359/"
] | ```
d = {1:10, 2:20, 3:30, 4:30}
inv = {}
for key, val in d.iteritems():
inv[val] = inv.get(val, []) + [key]
```
Try this! | ```
o = {}
for k,v in d.iteritems():
if v in o:
o[v].append(k)
else:
o[v] = [k]
```
`o = {10: [1], 20: [2], 30: [3, 4]}` | 16,476 |
29,385,340 | I'm trying to find all the divisors ("i" in my case) of a given number ("a" in my case) with no remainder (a % i == 0). I'm running a loop that goes trough all the vales of i starting from 1 up to the value of a. The problem is that only first 2 products of a % i == 0 are taken into account. The rest is left out. Why is that?
Here the code in python3:
```
a = 999
i = 1
x = 0
d = []
while (i < a):
x = a / i
if(x % i == 0):
d.append(i)
i += 1
print (d)
```
The output of the code is:
```
[1, 3]
```
instead of listing all the divisors.
I have checked for different values of a and can't find the error. | 2015/04/01 | [
"https://Stackoverflow.com/questions/29385340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4262683/"
] | The behavior of the script is correct. See for yourself:

I think it's your logic, and what you are trying to achieve is:
```
a = 999
i = 1
d = []
while (i < a):
if(a % i == 0):
d.append(i)
i += 1
print (d)
```
Outputs:
```
[1, 3, 9, 27, ...]
``` | To complement Anton's answer, a more Pythonic way to loop would be:
```
a, d = 999, []
for i in range(1, a):
if a%i == 0:
d.append(i)
```
You can also take advantage of the fact that object have a [Boolean value](https://docs.python.org/3.4/reference/datamodel.html#object.__bool__):
```
if not a%i:
```
Or you can use a list comprehension:
```
d = [i for i in range(1, a) if not a%i]
``` | 16,481 |
41,690,010 | [](https://i.stack.imgur.com/FnX1O.png)In python selenium, how to create xpath for below code which needs only id and class:
```
<button type="button" id="ext-gen756" class=" x-btn-text">Save</button>
```
And I also need to select Global ID from below drop-down without clicking it.
```
<div class="x-combo-list-item">Global ID</div>
```
My below solution is not working-
```
//div[@class='x-combo-list-item']/div[contains(.,'Global ID')]
```
I do not want to mention `droplist` sequence number like-
```
//div[@class='x-combo-list-item']/div[1]
``` | 2017/01/17 | [
"https://Stackoverflow.com/questions/41690010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5907308/"
] | If you want to club `id` and `class` together in your `xpath` try like this-
```
driver.find_element_by_xpath('//button[@id="ext-gen756"][@class=" x-btn-text"]');
```
You can also try the same using `AND` -
```
driver.find_element_by_xpath('//button[@id="ext-gen756" and @class=" x-btn-text"]');
```
**EDITED**
Your `xpath` seem incorrect. Use following -
```
driver.find_element_by_xpath('//div[@class="x-combo-list-item"][contains(.,"Global ID")]');
``` | Just answering my own question after a long time had a look on this. The Question was posted when I was new in xpath topics.
```
<button type="button" id="ext-gen756" class=" x-btn-text">Save</button>
```
in terms of id and class:
```
driver.find_element_by_xpath("//button[@id='ext-gen756'][@class=' x-btn-text']")
```
Also sometime if Id's are dynamic and changes for every reload of the page then you may try:
```
driver.find_element_by_xpath("//button[@type='Save'][contains(@id,'ext-gen')][@class=' x-btn-text']")
```
Here I have used @type and for the @id contains option as prefix(ext-gen) usually remains the same for the dynamic ID's | 16,482 |
5,048,217 | i have some data stored in a .txt file in this format:
```
----------|||||||||||||||||||||||||-----------|||||||||||
1029450386abcdefghijklmnopqrstuvwxy0293847719184756301943
1020414646canBeFollowedBySpaces 3292532113435532419963
```
don't ask...
i have many lines of this, and i need a way to add more digits to the end of a particular line.
i've written code to find the line i want, but im stumped as to how to add 11 characters to the end of it. i've looked around, this site has been helpful with some other issues i've run into, but i can't seem to find what i need for this.
it is important that the line retain its position in the file, and its contents in their current order.
using python3.1, how would you turn this:
```
1020414646canBeFollowedBySpaces 3292532113435532419963
```
into
```
1020414646canBeFollowedBySpaces 329253211343553241996301846372998
``` | 2011/02/19 | [
"https://Stackoverflow.com/questions/5048217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/623985/"
] | As a general principle, there's no shortcut to "inserting" new data in the middle of a text file. You will need to make a copy of the entire original file in a new file, modifying your desired line(s) of text on the way.
For example:
```
with open("input.txt") as infile:
with open("output.txt", "w") as outfile:
for s in infile:
s = s.rstrip() # remove trailing newline
if "target" in s:
s += "0123456789"
print(s, file=outfile)
os.rename("input.txt", "input.txt.original")
os.rename("output.txt", "input.txt")
``` | Copy the file, line by line, to another file. When you get to the line that needs extra chars then add them before writing. | 16,483 |
2,541,954 | I basically want to be able to:
* Write a few functions in python (with the minimum amount of extra meta data)
* Turn these functions into a web service (with the minimum of effort / boiler plate)
* Automatically generate some javascript functions / objects for rpc (this should prevent me from doing as many stupid things as possible like mistyping method names, forgetting the names of methods, passing the wrong number of arguments)
**Example**
python:
```
def hello_world():
return "Hello world"
```
javascript:
```
...
<!-- This file is automatically generated (either dynamically or statically) -->
<script src="http://myurl.com/webservice/client_side_javascript"> </script>
...
<script>
$('#button').click(function () {
hello_world(function (data){ $('#label').text(data)))
}
</script>
```
A bit of research has shown me some approaches that come close to this:
* Automatic generation of json-rpc services from functions with a little boiler plate code in python and then using jquery and json to do the calls (still easy to make mistakes with method names - still need to be aware of urls when calling, very irritating to write these calls yourself in the firebug shell)
* Using a library like soaplib to generate wsdl from python (by adding copious type information). And then somehow convert this into javascript (not sure if there is even a library to do this)
But are there any approaches closer to what I want? | 2010/03/29 | [
"https://Stackoverflow.com/questions/2541954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/47741/"
] | Yes there is, there is [Pyjamas](http://pyjs.org/). Some people bill this as the "[GWT](http://code.google.com/webtoolkit/) for Python" | It looks like using a javascript XML RPC client (there is jquery plugin for this) together with an XML RPC server is a good way to go.
The jquery plugin will introspect your rpc service and will populate method names make it impossible to mis type the name of a method call without getting early warning. It will not however test the number of arguments that you pass, or their type.
There doesn't seem to be the same support for introspection on json rpc (or rather there doesn't seem to be a consistent standard). This approach can also be used with django.
I've put together some example code and uploaded it [here](http://tat.wright.name/xml-rpc) (I hope that linking to one's blog posts isn't considered terrible form - a brief search of the internet didn't seem to suggest it was)... | 16,488 |
36,510,431 | I am very new to python and programming in general and I want to print out the string "forward" whenever i press "w" on the keyboard. It is a test which I will transform into a remote control for a motorized vehicle.
```
while True:
if raw_input("") == "w":
print "forward"
```
Why does it just print out every key I type? | 2016/04/08 | [
"https://Stackoverflow.com/questions/36510431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4909346/"
] | In Python 2.x the raw\_input function will display all characters pressed, and return upon receiving a newline. If you want different behaviour you'll have to use a different function. Here's a portable version of getch for Python, it will return every key press:
```
# Copied from: stackoverflow.com/questions/510357/python-read-a-single-character-from-the-user
def _find_getch():
try:
import termios
except ImportError:
# Non-POSIX. Return msvcrt's (Windows') getch.
import msvcrt
return msvcrt.getch
# POSIX system. Create and return a getch that manipulates the tty.
import sys, tty
def _getch():
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
tty.setraw(fd)
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return ch
return _getch
getch = _find_getch()
```
It can be used like so:
```
while True:
if getch() == "w":
print "forward"
``` | `raw_input` reads an entire line of input. The line you're inputting is made visible to you, and you can do things like type some text:
```
aiplanes
```
go left a few characters to fix your typo:
```
airplanes
```
go back to the end and delete a character because you didn't mean to make it plural:
```
airplane
```
and then hit `Enter`, and `raw_input` will return `"airplane"`. It doesn't just return immediately when you hit a keyboard key.
---
If you *want* to read individual keys, you'll need to use lower-level terminal control routines to take input. On Unix, the [`curses`](https://docs.python.org/2/library/curses.html) module would be an appropriate tool; I'm not sure what you'd use on Windows. I haven't done this before, but on Unix, I think you'd need to set the terminal to raw or cbreak mode and take input with `window.getkey()` or `window.getch()`. You might also have to turn off echoing with `curses.noecho()`; I'm not sure whether that's included in raw/cbreak mode. | 16,489 |
74,134,047 | I need some help recursively searching a python dict that contains nested lists.
I have a structure like the below example. The value of key "c" is a list of one or more dicts. The structure can be nested multiple times (as you can see in the second item), but the pattern is the same. In all likelihood, the nested depth will probably not be more than 5 deep.
My objective (in this example) is to find all occurrences of ref = 'hij789', **no matter where they occur** (however deep they are nested) and then add the missing 'b' = 'something' to each occurrence.
```
{
'ref': 'abc123',
'a': 'something',
'b': 'something',
'c': [{
'ref': 'def456',
'a': 'something',
'b': 'something',
'c': [{
'ref': 'hij789',
'a': 'something'
}]
},{
'ref': 'klm012',
'a': 'something',
'b': 'something',
'c': [{
'ref': 'nop345',
'a': 'something',
'b': 'something',
'c': [{
'ref': 'hij789',
'a': 'something'
}]
}]
},{
'ref': 'qrs678',
'a': 'something',
'b': 'something',
'c': [{
'ref': 'tuv901',
'a': 'something'
}]
}]
}
```
I first tried something like this, but it of course does not search beyond the first nested dict:
```
l = next((n for n in mydict['c'] if n['ref'] == 'myref'), None)
l['b'] = 'somevalue'
```
I also tried a variation of this, but could not make it work:
[Recursive list inside dict? - python](https://stackoverflow.com/questions/63646525/recursive-list-inside-dict-python)
Is there a relatively straightforward way to achieve this?
Thanks. | 2022/10/20 | [
"https://Stackoverflow.com/questions/74134047",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20287604/"
] | (Just posting this here from Matt Wards comment as I cannot mark the comment as the answer.)
As the comment suggests, Visual Studio for Mac seems to only use launchSettings.json for Asp.Net projects. I was working with a Console App.
Visual Studio for PC uses launchSettings.json for console applications too but not the Mac version. | Well, you can try to change the properties of the file and how VS Studio treats it during build.
1. Right-click on `launchSettings.json` and choose `Properties`
2. Set the below properties as follows:
```
Build action -> Content
Copy to directory -> Copy if newer
```
See if this helps. | 16,490 |
44,756,447 | I've got a lot of commands running in impala shell, in the middle of them I now have a need to run a python script. The script itself is fine when run from outside the impala shell.
When I run from within the impala shell using ! or "shell" (documentation found [here](https://www.cloudera.com/documentation/enterprise/5-9-x/topics/impala_shell_commands.html "here")) it changes the commands to be fully lower case.
The path to the script itself would be something like this: **/home/DOMAIN\_USERS/somemorefolders/python/script.py**
so in my impala shell I'm running: `!/home/DOMAIN_USERS/somemorefolders/python/script.py`
the error I get back is
>
> sh: /home/domain\_users/somemorefolders/python/script.py: No such file
> or directory
>
>
>
Is there any way to force it to not make it into lower case? I've tried putting both single & double quotes round the path but that makes no difference.
I guess if there's no way I'll have to come out of the impala shell, run the python bit then go back in. Its just a bit more work when I figured the "shell" command in the impala shell is there for that exact benefit. | 2017/06/26 | [
"https://Stackoverflow.com/questions/44756447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5065581/"
] | This is caused by a known bug [IMPALA-4664](https://issues.apache.org/jira/browse/IMPALA-4664).
A workaround is to leave a space after "!". Can you try this (note the space):
! /home/DOMAIN\_USERS/somemorefolders/python/script.py | Thanks to [@BoboDarph](https://stackoverflow.com/users/8085234/bobodarph "bobodarph") for help in getting there.
I was able to use `!~/somemorefolders/python/script.py` as I could get there from my home directory.
I still think it's a bit shortsighted of impala to force things into lower case but there you go. | 16,492 |
43,021,399 | Just creating a python program that creates a function named letterX, that ... well makes an X. The two lines must be 90 degrees from each other. The pointer ends at the initial position.
I solved this pretty easily, just wondering if you can put this into a loop or just simplify it. I don't know how since i have to change directions differently rather than looping over the same code. Any help would be appreciated.
```
import turtle
t = turtle.Turtle()
s = turtle.Screen()
def letterX(t,length):
t.down()
t.right(45)
t.forward(length/2)
t.right(180)
t.forward(length)
t.right(180)
t.forward(length/2)
t.left(90)
t.forward(length/2)
t.right(180)
t.forward(length)
t.right(180)
t.forward(length/2)
t.right(45)
t.up()
letterX(t,100)
``` | 2017/03/25 | [
"https://Stackoverflow.com/questions/43021399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7767472/"
] | Like francescalus commented, it looks like problem is related to integer arithmetic in Fortran.
You may modify the first `if` statement in Matlab implementation as follows:
```
if fix(k/2) ~= j/2
```
---
In your second part, there is a typo error in the Matlab code.
You wrote `x2` instead of `x1`.
Correct code:
```
f1 = 100/(1+x1.^2); %Instead of f1 = 100/(1+x2.^2);
```
Minor flaw:
```
if abs(e)<=0.001 %Instead of if abs(e)<0.001
```
---
I know very basic Fortran, so I executed both Matlab and Fortran code versions side by side.
I executed the code step by step using the debugger.
I used some arbitrary input values.
The problem is related to the first Fortran `if` statement: `(k/2/=j/2.)`
When `k` is an integer `k/2` evaluates to `floor(k/2)`, and `j/2.` evaluates to floating point (assume `k` is positive).
(I used [fix](https://www.mathworks.com/help/fixedpoint/ref/fix.html) Matlab function, in case `k` can also be negative).
Example:
```
integer j, k
j=3
k=3
print *, k/2
print *, j/2.
print *, k/2/=j/2.
```
Result:
```
1
1.500000
T
```
---
In Matlab, the default type is double.
```
j=3;
k=3;
disp(k/2)
disp(j/2)
disp(k/2 ~= j/2)
```
Result:
```
1.5000
1.5000
0
```
As you can see, in Fortran condition evaluates to **true**, and in Matlab to **false**.
---
Complete Matlab code:
```
a = 0;
b = 5.+85;
par1 = 100*(atan(b)-atan(a));
fa = 100/(1+a.^2);
fb = 100/(1+b.^2);
fprintf('METHOD SIMPSON\n');
for n = 1:1000000
h=(b-a)/n;
sum1=0;
sum2=0;
x1 = a;
x2 = a;
for j = 1:n-1
k = j;
if fix(k/2) ~= j/2
if j == 1
x1 = x1+h;
end
if j > 1
x1 = x1+2*h;
end
f1 = 100/(1+x1.^2);
sum1 = sum1 + f1;
else
x2 = x2+2*h;
f2 = 100/(1+x2.^2);
sum2 = sum2 + f2;
end
end
par2 = (h/3)*(fa+4*sum1+2*sum2+fb);
e = par1 - par2;
if abs(e)<=0.001
break;
end
end
y=n;
partitionS = zeros (n);
valueS= zeros (n);
errorS = zeros (n);
for n = 1:y
h=(b-a)/n;
sum1=0;
sum2=0;
x1=a;
x2=a;
for j = 1:n-1
k = j;
if fix(k/2) == j/2
x2 = x2 + 2*h;
f2 = 100/(1+x2.^2);
sum2 = sum2 + f2;
else
if j == 1
x1 = x1 + h;
end
if j > 1
x1 = x1 + 2*h;
end
f1 = 100/(1+x1.^2);%f1 = 100/(1+x2.^2);
sum1 = sum1 + f1;
end
end
partitionS(n) = n;
valueS(n)= (h/3)*(fa+4*sum1+2*sum2+fb);
errorS(n)=par1-valueS(n);
end
fprintf('Below are the results\n');
fprintf('%.25f\n',partitionS(n));
fprintf('%.25f\n',valueS(n));
fprintf('%.25f\n',errorS(n));
```
---
Matlab output:
```
METHOD SIMPSON
Below are the results
332.0000000000000000000000000
155.9675968160148900000000000
0.0009704737140339148000000
``` | I made a small fortran program based on your posts. Then put it through my f2matlab fortran source to matlab source converter (matlab file exchange). Here is the fortran:
```
program kt_f
implicit none
integer j,n,k,f1,f2
real x1,x2,h,sum1,sum2
n=100
k=50
do j=1,n-1
k=j
if(k/2/=j/2.) then
if(j==1) x1=x1+h
if(j>1) x1=x1+2*h
f1=100/(1+x1**2)
sum1=sum1+f1
else
x2=x2+2*h
f2=100/(1+x2**2)
sum2=sum2+f2
endif
enddo
print *,'sum1=',sum1
print *,'sum2=',sum2
end program kt_f
```
When I compile and run this, the output is:
```
sum1= 5000.000
sum2= 4900.000
```
Here is the matlab source produced. Note that in addition to the `fix` in the if statement, you need another `fix` in the line with the 100/ because this is an integer division as well. Here is the matlab code:
```
function kt_f(varargin)
clear global; clear functions;
global GlobInArgs nargs
GlobInArgs={mfilename,varargin{:}}; nargs=nargin+1;
persistent f1 f2 h_fv j k n sum1 sum2 x1 x2 ;
if isempty(f1), f1=0; end;
if isempty(f2), f2=0; end;
if isempty(h_fv), h_fv=0; end;
if isempty(j), j=0; end;
if isempty(k), k=0; end;
if isempty(n), n=0; end;
if isempty(sum1), sum1=0; end;
if isempty(sum2), sum2=0; end;
if isempty(x1), x1=0; end;
if isempty(x2), x2=0; end;
n = 100;
k = 50;
for j = 1: n - 1;
k = fix(j);
if(fix(k./2) ~= (j./2.));
if(j == 1);
x1 = x1 + h_fv;
end;
if(j > 1);
x1 = x1 + 2.*h_fv;
end;
f1 = fix(100./(1+x1.^2));
sum1 = sum1 + f1;
else;
x2 = x2 + 2.*h_fv;
f2 = fix(100./(1+x2.^2));
sum2 = sum2 + f2;
end;
end;
'sum1=',sum1
'sum2=',sum2
end %program kt_f
```
This gives the same output as the fortran. Please check and see whether this solves you issue. | 16,493 |
6,738,987 | is it possible to add extension method to python built-in types?
I know that I can add extension method to defined type by simply adding new method by . as following:
```
class myClass:
pass
myClass.myExtensionMethod = lambda self,x:x * 2
z = myClass()
print z.myExtensionMethod(10)
```
But is any way to adding extension method to python built'in types like list, dict, ...
```
list.myExtension = lambda self,x:x * 2
list.myExtension(10)
``` | 2011/07/18 | [
"https://Stackoverflow.com/questions/6738987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614953/"
] | No. Types defined in C cannot be monkeypatched. | No, because I'm pretty sure all the built-in types are written in optimized C and thus can't be modified with Python. When I try it, I just get:
```
TypeError: can't set attributes of built-in/extension type 'list'
``` | 16,494 |
18,038,492 | I have a PHP script that needs to take one command-line argument. I need to call this script from inside my python script.
```
Popen('php simplepush.php "Here's the argument"', shell=True, cwd="/home/ubuntu/web/firestopapp.com/app")
```
^That works. However, I want to pass a variable in the Python script instead of "Here's the argument". But when I try:
```
var1 = "yes"
Popen(['php', 'simplepush.php', var1], shell=True, cwd="/home/ubuntu/web/firestopapp.com/app")
```
it no longer works. And this is run through crontab, which is what led me to have to include the cwd argument.
I'd really appreciate any help, seems like a fairly straightforward syntactical issue.
After Eric's suggestion:
`Traceback (most recent call last):
File "/home/ubuntu/web/mywebsite.com/app/email_parse.py", line 25, in <module>
Popen('php simplepush.php "Here's the argument"', shell=False, cwd="/home/ubuntu/web/mywebsite.com/app")
File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1308, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory`
Sihrc's solution gets me the following, so it's not a full fix.
`/bin/sh: 1: cannot open my2ndemail@gmail.com: No such file`
and here's the rest of the code.
```
#!/usr/bin/python
import email, getpass, imaplib, os, subprocess
from subprocess import Popen
detach_dir = '.'
m = imaplib.IMAP4_SSL("imap.gmail.com")
m.login("myemail@gmail.com","mypassword")
m.select('mailbox')
resp, items = m.search(None, "(UNSEEN)")
message = ""
items = items[0].split()
for emailid in items:
resp, data = m.fetch(emailid, "(RFC822)")
email_body = data[0][1]
mail = email.message_from_string(email_body)
message += "["+mail["From"]+"] :" + mail["Subject"] + "\n"
for part in mail.walk():
if part.get_content_type() == 'text/plain':
message += part.get_payload()
else:
continue
Popen('php simplepush.php ' + str(eval('message')), shell=True, cwd="/home/ubuntu/web/firestopapp.com/app")
``` | 2013/08/04 | [
"https://Stackoverflow.com/questions/18038492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2476581/"
] | You will not be able to get the JSON in controller. In ASP.NET Web API pipeline, binding happens before the action method executes. Media formatter would have read the request body JSON (which is a read-once stream) and emptied the contents by the time the execution comes to your action method. But if you read the JSON from a component running in the pipeline before the binding, say a message handler, you will be able to read it like this. If you must get the JSON in action method, you can store it in the properties dictionary.
```
public class MessageContentReadingHandler : DelegatingHandler
{
protected override async Task<HttpResponseMessage> SendAsync(
HttpRequestMessage request,
CancellationToken cancellationToken)
{
var content = await request.Content.ReadAsStringAsync();
// At this point 'content' variable has the raw message body
request.Properties["json"] = content;
return await base.SendAsync(request, cancellationToken);
}
}
```
From the action method, you can retrieve JSON string like this:
```
public HttpResponseMessage PostCustomer(Customer customer)
{
string json = (string)Request.Properties["json"];
}
``` | You can't get the parsed JSON, but you can get the content and parse it yourself. Try this:
```
public async Task PostCustomer(Customer customer)
{
var json = Newtonsoft.Json.JsonConvert.DeserializeObject(await this.Request.Content.ReadAsStringAsync());
///You can deserialize to any object you need or simply a Dictionary<string,object> so you can check the key value pairs.
}
``` | 16,502 |
40,445,390 | I have a list composed by tuples.
Each tuple is in the following tuple format: (String, Integer).
I want to merge the tuples that have the same head (String) as follows:
```
[("Foo", 2), ("Bar", 4), ("Foo", 2), ("Bar", 4), ("Foo", 2)]
```
should become:
```
[("Foo", 6), ("Bar",8)].
```
What is a good python algorithm for this? | 2016/11/06 | [
"https://Stackoverflow.com/questions/40445390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4383494/"
] | How about collecting the sums in a [`defaultdict`](https://docs.python.org/3.6/library/collections.html#collections.defaultdict)?
```
from collections import defaultdict
d = defaultdict(int)
for (key, value) in items:
d[key] += value
```
And then turn them back to a list of tuples:
```
list(d.items())
```
The `defaultdict` in this example uses the `int` function to fill in unknown values as `0`. So the first time a particular `d[key]` is added to, it assumes an initial value of `0` and gets summed from there. | ```
d = {}
map(lambda (x,y):d.setdefault(x,[]).append(y),a)
print [(k,sum(v)) for k,v in d.items()]
``` | 16,504 |
17,581,418 | I'm trying to build OpenCV with MSYS / MinGW so I can use the cv2 module in python.
I'm on Windows 7 64-bit and using 32 bit Python 2.7. Building OpenCV works, but I cannot seem to use it without getting an "ImportError: DLL load failed: The specified module could not be found." after importing cv2. I've been debugging it for awhile, and the common google solutions seem not to work.
These are the steps I have taken:
* Removed all other instance of opencv from my computer (there were a few)
* Built opencv with CMake and I manually set these flags:
```
'CMAKE_INSTALL_PREFIX' : 'C:/Program Files (x86)/OpenCV'
'CMAKE_BUILD_TYPE' : 'Release',
'CMAKE_C_FLAGS' : '-m32',
'CMAKE_CXX_FLAGS' : '-m32',
'BUILD_opencv_gpu' : False,
'BUILD_opencv_gpuarithm' : False,
'BUILD_opencv_gpubgsegm' : False,
'BUILD_opencv_gpucodec' : False,
'BUILD_opencv_gpufeatures2d' : False,
'BUILD_opencv_gpufilters' : False,
'BUILD_opencv_gpuimgproc' : False,
'BUILD_opencv_gpuoptflow' : False,
'BUILD_opencv_gpustereo' : False,
'BUILD_opencv_gpuwarping' : False
```
* I do a cmake -G "MSYS Makefiles" ..., make -j9, and finally a make install. The printouts show that the libraries go where I expect them to.
* My PATH includes
```
C:\MinGW\bin
C:\MinGW\msys\1.0\bin
C:\MinGW\libexec\gcc\mingw32\4.6.2
C:\Python27\
C:\Python27\Scripts
C:\Program Files (x86)\OpenCV
C:\Program Files (x86)\OpenCV\lib
```
* My PYTHONPATH includes
```
C:\Python27\Lib\site-packages
C:\Python27
```
* I then copy
+ cv2.pyd and libcv2.dll.a from C:\Python\Lib\site-packages
+ all libopencv\_\*249.dll.a from C:\Program Files (x86)\OpenCV\lib
+ libgcc\_s\_dw2-1.dll libstdc++-6.dll from C:\MinGW\bin
+ I MOVE ALL OF THESE into a directory I named cv2
* In the cv2 directory I made an \_\_init\_\_.py file containing the following code:
```
import os, sys
from os.path import realpath, dirname
tpl_cv2 = realpath(dirname(__file__))
print(tpl_cv2)
sys.path.insert(0,tpl_cv2)
os.environ['PATH'] = tpl_cv2 + os.pathsep + os.environ['PATH']
try:
from cv2 import *
except Exception as ex:
print(repr(ex))
print(os.environ['PATH'])
print(sys.path)
raise
```
* I then open a IPython terminal and enter the command "import cv2"
And I get the error: ImportError('DLL load failed: The specified module could not be found.',)
---
To debug this I've:
* I load up the cv2.pyd file in depends.exe It shows that
LIBOPENCV\_CALIB3D249.DLL and all the other opencv libs are not found. However, these are all in the same directory as cv2.pyd as well as in both the PATH and PYTHONPATH.
But they are named .dll.a because I compiled with MSYS / MinGW. I don't understand why its looking for the .dll without the .a suffix. I think this probably has something to do with the error, but I don't know where to go from here.
* I've also built a small C++ application which reads and image using the OpenCV that I've built. It builds just fine, but I get the error: The program can't start because libopencv\_core249.dll is missing from your computer. So, I'm more convinced this .dll.a thing is the issue.
* I've tried simply renaming the libs from .dll.a to .dll, but that didn't work.
I'm hoping someone can shed light on this issue. | 2013/07/10 | [
"https://Stackoverflow.com/questions/17581418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/887074/"
] | I feel really stupid. The dlls were in "C:\Program Files (x86)\bin" not "C:\Program Files (x86)\lib" It seems to work now. | Just to make sure other users can be helped with this answer:
Imagine you have compiled OpenCV and have several \*.dll and the cv2.pyd file.
You need to copy those files to 'DLLs' folder within the python directory.
Then import the module to check wether it is ok.
I have also copied the \*.lib files into the appropriate folder.
Best regards. | 16,506 |
28,677,012 | the code:
```
import os
from time import *
import socket
import time
global diskspace
#####################
#display temp
#uses shell script to find out temp then uses python to display it
#python uses os module to run line of shell script
os.system("cat /sys/class/thermal/thermal_zone0/temp > sysTemp")
temp = open("sysTemp") # Open a file
str = temp.read(); # read characters in sysTemp
temp.close() # close opened file
t=eval(str) # convert string into number
t2=t*2 # multiply by 2, evaluated number
t3=(t/1000.00) # convert five figure temp (milli-degrees) to degrees to two decimal places
print ("temp is:")
temperature = int(t3)
print(temperature)
def temp():
if temperature > 60:
print("The temp is over 60. Cool down")
elif temperature < 40:
print("temp is below 40")
check()
#find name
##################
#check for internet connection
###################
#Display disk space
###################
def getDiskSpace():
p = os.popen("df -h /")
i = 0
while 1:
i = i +1
line = p.readline()
if i==2:
diskspace = (line.split()[4:5])
ds = diskspace[0]
print("The disk space used is:")
print(ds)
global ds
#Display CPU usage
###################
def getCPUuse():
print(os.popen("top -n1 | awk '/Cpu\(s\):/ {print $2}'").readline().strip())
#Display IP
###################
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("gmail.com",80))
IP = (s.getsockname()[0])
s.close()
print("The Ip is:" + IP)
getDiskSpace()
###################
#writing it to a .txt file
f = open("data.txt","w") #opens file with name of "test.txt"
f.write("raspberry pi data.\n")
f.write("ip:\n")
f.write(IP + "\n")
#f.write("Disk usage:" + str(ds))
f.write("temp: {0}".format(t3))
f.write("disk usage: {0}".format(ds))
f.close()
temp()
getCPUuse()
print("...")
time.sleep(10)
```
It is a program that monitors the temp, disk space, cpu usage and ip of the pi and writes it to a txt file
The main problem is with this line
```
f.write("Disk usage:" + diskspace + "\n")
```
and it says that it is not defined, I have tried many things like creating it before the def as blank but then nothing gets written for diskspace on the text file. The other things write to the text file but not this one
the output:
temp is:
58
The Ip is:192.168.1.36
Traceback (most recent call last):
File "temp.py", line 74, in
f.write("Disk usage:" + diskspace)
NameError: global name 'diskspace' is not defined
if you remove the bit about the writing bit, {'40%'] usually gets printed for diskspace.
ive added some changed code that prints the data, throws not errors but doesnt write it. | 2015/02/23 | [
"https://Stackoverflow.com/questions/28677012",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3910964/"
] | You have few options:
1. Initialize arrays in constructor MesssageParsingTest using syntax : `firstMessage{0x24,0x54,0x3b,0x72,0x8b,0x03,0x24,0x29,0x23,0x43,0x66,0x22,0x53,0x41,0x11,0x62,0x10}`
in initializer list.
2. Create static const array containing your message, and either copy it to member variable using memcpy, or use static member and get rid of firstMessage member variable.
Declare const static member in .h inside class definition:
```
static const unsigned char kFirstMessage[];
```
and define + initialize it in .ccp
```
const unsigned char MessageParsingTest::kFirstMessage[] = "\0x24\0x54\0x3b\0x72\0x8b\0x03\0x24\0x29\0x23\0x43\0x66\0x22\0x53\0x41\0x11\0x62\0x10";
```
I would prefer static const member if you do not intend to modify this array later, since it makes the intention cleaner. | You can use a temporary buffer and then copy into you member as this:
```
void MessageParsingTest::setUp() {
unsigned char tmp[1500] = {0x24,0x54,0x3b,0x72,0x8b,0x03,0x24,0x29,0x23,0x43,0x66,0x22,0x53,0x41,0x11,0x62,0x10};
memcpy(firstMessage, tmp, 1500);
}
``` | 16,509 |
26,746,127 | I'm in an interactive Python 2.7 Terminal (Terminal default output is "utf-8"). I have a string from the internet, lets call it `a`
```
>>> a
u'M\xfcssen'
>>> a[1]
u'\xfc'
```
I wonder why its value is not `ü` so I try
```
>>> print(a)
Müssen
>>> print(a[1])
ü
```
which works as intended.
So my first question is, what does `print a` do, which is missing if i just type `a`?
and out of curiosity: Why is it that I get another output for the following in the same python terminal session?
```
>>> "ü"
'\xc3\xbc'
>>> print "ü"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/codecs.py", line 351, in write
data, consumed = self.encode(object, self.errors)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
>>> print u"ü"
ü
``` | 2014/11/04 | [
"https://Stackoverflow.com/questions/26746127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/620053/"
] | Can you try this
```
updateToServer: function(e) {
e.preventDefault();
var id = e.target.getAttribute('data-id');
var file = this.collection.get(id);
var data = {};
$(e.target).serializeArray().map(function(x) {data[x.name] = x.value;});
this.$el.modal('hide');
setTimeout(function(){
file.save(data);
}, 200); //try with different values for timer
}
```
I have added a 200 millisecond timer.
This might not be your actual solution but at least you will come to know if there is some asynchronous stuff going on before 'file' is actually formed.
Try different values for the timer. I mean keep increasing the timer and see if you are still not able to get rid of the error.
Once you are sure that 'file' is formed asynchronously then you can look into why that's happening.
And try `console.log`s instead of debuggers for debugging so that you can test without pausing the execution.
Hope that helps. | This was not at all what I suspected, and I hadn't given enough information in the question without realizing it. The line in my code that triggered the exception was `file.save()`, but the actual exception was happening inside Backgrid.
I provide a form to allow users to update models from the collection displayed in a grid. A particular column is defined as an integer column, but I hadn't converted the value coming from the form to an integer. As a result, Backgrid was trying to run `toFixed` on a string. I modified my form serialization code to convert strings containing only integers into integers. Now, everything works as expected.
Here's that serialization code:
```
$(e.target).serializeArray().map(function(x) {
data[x.name] = x.value === 'on' ? true : x.value;
if (!isNaN(parseInt(data[x.name])) && isFinite(data[x.name])) {
data[x.name] = parseInt(data[x.name]);
}
});
```
If I had to guess, I'd say that's probably a bit naive, but it seems to be working well in my application.
Thanks to everyone for the help! | 16,512 |
26,504,852 | On `python/flask/gunicorn/heroku` stack, I need to set an environment variable based on the content of another env variable.
For background, I run a python/Flask app on heroku.
I communicate with an addon via a environment variable that contains credentials and url.
The library I use to communicate with the addon needs that data, but needs it in a different format.
Also, it needs it as an environment variable.
So far, I had cloned and reformatted the environment variable manually, but that just brought disaster because the add-on provider was changing passwords.
OK, so I need to automate reading one environment variable and setting another, before the library starts looking for it.
The naive approach I tried was (file `app.py`):
```
app = Flask(__name__, ...)
env_in = os.environ['ADDON_ENV_VAR']
os.environ['LIB_ENV_VAR'] = some_processing(env_in)
...
if __name__ == '__main__':
app.run(host='0.0.0.0', port='5000')
```
That works fine when doing `python app.py` for debugging, but it fails when running via `gunicorn app:app -b '0.0.0.0:5000'` (as a `Procfile`for `foreman`) for deploying a real webserver. In the second case, the env var doesn't seem to make it to the OS level. I'm not sure about how wsgi works, but maybe the environment changes once gunicorn starts running the app.
What can I do to have the environment variable set at the place it's needed? | 2014/10/22 | [
"https://Stackoverflow.com/questions/26504852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1827442/"
] | you could also set the enviroment variables at run time as such
```
gunicorn -b 0.0.0.0:5000 -e env_var1=enviroment1 -e env_var2=environment2
``` | OK, so the answer (via Kenneth R, Heroku) is to set the environment before running gunicorn. I.e. write a Procfile like
```
web: sh appstarter.sh
```
which calls a wrapper (shell, python, ..) that sets up the environment variable and then runs the gunicorn command, like for example
appstarter.sh:
```
export LIB_ENV_VAR=${ADDON_ENV_VAR}/some/additional_string
gunicorn app:app -b '0.0.0.0:5000'
```
Just in case it helps anyone else out there. | 16,513 |
14,592,879 | I cannot run any script by pressing F5 or selecting run from the menus in IDLE. It stopped working suddenly. No errors are coughed up. IDLE simply does nothing at all.
Tried reinstalling python to no effect.
Cannot run even the simplest script.
Thank you for any help or suggestions you have.
Running Python 2.6.5 on windows 7.
Could not resolve the problem with idle. I have switched to using pyDev in Aptana Studio 3. | 2013/01/29 | [
"https://Stackoverflow.com/questions/14592879",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2022926/"
] | I am using a Dell laptop, and ran into this issue. I found that if I pressed Function + F5, the program would run.
On my laptop keyboard, functions key items are in blue (main functions in white). The Esc (escape) key has a blue lock with 'Fn' on it. I pressed Esc + F5, and it unlocked my function keys. I can now run a program in the editor by only pressing F5
Note: Running Python 3 - but I do not think this is an issue with Idle or Python - I think this is a keyboard issue. | Your function keys are locked,I think so.
Function keys can be unlocked by fn key + esc.
Then f5 will work without any issue. | 16,516 |
6,080,930 | I have a problem setting up a Virtualenv on my web host server (to install python modules later on)
So far I tried this using SSH-access:
```
wget http://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.5.2.tar.gz
tar xzf virtualenv-1.5.2.tar.gz
~/usr/lib/python2.4 virtualenv-1.5.2/virtualenv.py ~/data/env
```
or
```
~/usr/lib/python2.4 setup.py install
```
I don't get any errors or output and cannot find the ~/data/env-directory that should be created. What is going wrong? Or what is the next step?
Thanks,
Wienel | 2011/05/21 | [
"https://Stackoverflow.com/questions/6080930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/763840/"
] | Same in Ruby.
```
require 'lib/yourlibrary.rb'
```
Or:
```
$LOAD_PATH << File.expand_path(File.dirname(FILE) + “/../lib”))
require 'yourlibrary.rb'
``` | To include a gem in your project, you could download the module and place it in the same folder as your code and then do a 'require'. You can also download the module with Rubygems copy, or you can download the module from it's project page. | 16,517 |
11,871,221 | I have this piece of code which creates a new note..WHen I try to print I get the following error even though it prints the output
```
Error:
C:\Python27\Basics\OOP\formytesting>python notebook.py
Memo=This is my first memo, Tag=example
Traceback (most recent call last):
File "notebook.py", line 14, in <module>
print(firstnote)
TypeError: __str__ returned non-string (type NoneType)
```
note.py
```
import datetime
class Note:
def __init__(self, memo, tags):
self.memo = memo
self.tags = tags
self.creation_date = datetime.date.today()
def __str__(self):
print('Memo={0}, Tag={1}').format(self.memo, self.tags)
if __name__ == "__main__":
firstnote = Note('This is my first memo','example')
print(firstnote)
``` | 2012/08/08 | [
"https://Stackoverflow.com/questions/11871221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1050619/"
] | In the Model's `__str__` method, you are returning a value which is `null`.
For example:
```py
class X(models.Model):
name = models.CharField(_('Name'), null=True, blank=True,
max_length=150)
date_of_birth = models.DateField(_('Date of birth'), null=True, blank=True)
street = models.CharField(_('Street'), max_length=150, blank=True)
def __str__(self):
return self.name # here the value of name field might be null, so the
```
error is showing.
Correct `__str__` method will be:
```py
def __str__(self):
return str(self.name)
``` | You probably have some null value in your table. Enter to mysql and delete null value in table. | 16,518 |
48,375,937 | I am new to python and web-scraping. I am trying to scrape a website (link is the url). I am getting an error as "'NoneType' object is not iterable", with the last line of below code. Could anyone point what could have gone wrong?
```
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
url = 'https://labtestsonline.org/tests-index'
soup = BeautifulSoup(requests.get(url).content, 'lxml')
# Function to get hyper-links for all test components
hyperlinks = []
def parseUrl(url):
global hyperlinks
page = requests.get(url).content
soup = BeautifulSoup(page, 'lxml')
for a in soup.findAll('div',{'class':'field-content'}):
a = a.find('a')
href = urlparse.urljoin(Url,a.get('href'))
hyperlinks.append(href)
parseUrl(url)
# function to get header and common questions for each test component
def header(url):
page = requests.get(url).content
soup = BeautifulSoup(page, 'lxml')
h = []
commonquestions = []
for head in soup.find('div',{'class':'field-item'}).find('h1'):
heading = head.get_text()
h.append(heading)
for q in soup.find('div',{'id':'Common_Questions'}):
questions = q.get_text()
commonquestions.append(questions)
for i in range(0, len(hyperlinks)):
header(hyperlinks[i])
```
Below is the traceback error:
```
<ipython-input-50-d99e0af6db20> in <module>()
1 for i in range(0, len(hyperlinks)):
2 header(hyperlinks[i])
<ipython-input-49-15ac15f9071e> in header(url)
5 soup = BeautifulSoup(page, 'lxml')
6 h = []
for head in soup.find('div',{'class':'field-item'}).find('h1'):
heading = head.get_text()
h.append(heading)
TypeError: 'NoneType' object is not iterable
``` | 2018/01/22 | [
"https://Stackoverflow.com/questions/48375937",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9238871/"
] | a bit late, but for any one else stumbling upon this.
```
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);
exports.someMethod = functions.https.onRequest((req, res) => {
var stuff = [];
var db = admin.firestore();
db.collection("Users").doc("7vFjDJ63DmhcQiEHwl0M7hfL3Kt1").collection("blabla").get().then(snapshot => {
snapshot.forEach(doc => {
var newelement = {
"id": doc.id,
"xxxx": doc.data().xxx,
"yyy": doc.data().yyy
}
stuff = stuff.concat(newelement);
});
res.send(stuff)
return "";
}).catch(reason => {
res.send(reason)
})
});
``` | Thanks to [Ruan's answer](https://stackoverflow.com/a/49516133/2162226), here's an example for `onCall(..)` variation:
```
exports.fireGetColors = functions.https.onCall((data, context) => {
return new Promise((resolve, reject) => {
var colors = {};
var db = admin.firestore();
db.collection('colors')
.get()
.then(snapshot => {
snapshot.forEach(doc => {
var key = doc.id;
var color = doc.data();
color['key'] = key;
colors[key] = color;
});
var colorsStr = JSON.stringify(colors, null, '\t');
console.log('colors callback result : ' + colorsStr);
resolve(colors);
})
.catch(reason => {
console.log('db.collection("colors").get gets err, reason: ' + reason);
reject(reason);
});
});
});
``` | 16,528 |
68,584,934 | I want to append to a csv file, some data from redshift tables, using the `pandas` module in python. From python, I can successfully connect and retrieve rows from redshift tables using the `psycopg2` module. Now, I am storing datewise data on the csv. So I need to first create a new date column in the csv, then append the data retrieved in that new column.
I am using the following commands to read from redshift tables:
```
conn=psycopg2.connect( host='my_db_hostname', port=`portnumber`, user='username', password='password', dbname='db')
conn.autocommit = True
cur=conn.cursor()
cur.execute(""" select emp_name, emp_login_count from public.emp_login_detail where login_date=current_date """)
records=cur.fetchall()
cur.close()
```
Now, I want to append these emp\_name and emp\_login\_count columns to the existing csv. Below is a snapshot of csv:
[](https://i.stack.imgur.com/QG6ln.png)
Everyday I need to add new date column in csv and then I need to put the emp\_login\_count against respective person's name.
I am new to Pandas and have no idea how to implement this. Can someone please help me out? | 2021/07/30 | [
"https://Stackoverflow.com/questions/68584934",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15948815/"
] | I solved the problem with the following:
./tsconfig.json
```
{
"compilerOptions": {
"isolatedModules": true,
...
},
"exclude": ["cypress/**/*"]
}
```
./cypress/tsconfig.json
```
{
"extends": "../tsconfig.json",
"compilerOptions": {
"isolatedModules": false
},
"include": [
"../node_modules/cypress"
]
}
``` | It seems to be a known issue of type conflicts between cypress and jest. Most accounts have detected that the problem started occurring from cypress v10.x onward.
The *following links* corroborate the OP's own answer, suggesting the exclusion of `cypress.config.ts` from `tsconfig.json`. It may or may not be a workaround, however it has worked for me as well.
**Link 1**: *<https://github.com/cypress-io/cypress/issues/22059>*
**Link 2**: *<https://github.com/nrwl/nx/issues/863>*
*It should be noted that the tests executed successfully despite the type clash, for me, despite not having yet excluded cypress from `tsconfig.json`.*
If your own answer has solved your issue, then you can mark it as answered and close it. | 16,531 |
67,662,674 | My Input JSON data:
```
{
"data": [
{
"config": "current",
"id": "0"
},
{
"config": "current",
"id": "1"
},
{
"config": "current",
"id": "2"
},
{
"config": "current",
"id": "3"
},
{
"config": "previous",
"id": "4",
},
{
"config": "previous",
"id": "5"
},
{
"config": "current",
"id": "6"
}
]
}
```
I want to form a dictionary of lists out of above input data based on common key/value pair:
```
{
"current": ["0", "1", "2", "3", "6"],
"previous": ["4", "5"]
}
```
How can this be achieved using python? | 2021/05/23 | [
"https://Stackoverflow.com/questions/67662674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11882985/"
] | Assuming you already know [how to parse JSON](https://stackoverflow.com/q/7771011/4518341), you can do this:
```
d = {
"data": [
{"config": "current", "id": "0"},
{"config": "current", "id": "1"},
{"config": "current", "id": "2"},
{"config": "current", "id": "3"},
{"config": "previous", "id": "4"},
{"config": "previous", "id": "5"},
{"config": "current", "id": "6"}]
}
result = {}
for d0 in d['data']:
ids = result.setdefault(d0['config'], [])
ids.append(d0['id'])
print(result)
# -> {'current': ['0', '1', '2', '3', '6'], 'previous': ['4', '5']}
```
* [`dict.setdefault()`](https://stackoverflow.com/q/7771011/4518341) is used to get the id list if it exists, or if not, set it to a default, which is an empty list here. It's functionally the same as this:
```
config = d0['config']
if config not in result:
result[config] = []
result[config].append(d0['id'])
```
You could also use [`collections.defaultdict(list)`](https://docs.python.org/3/library/collections.html#collections.defaultdict) to do the same thing even more easily.
*[This explanation is taken from [my answer here](https://stackoverflow.com/a/65587884/4518341).]* | ```
jsn = {
"data": [
{"config": "current", "id": "0"},
{"config": "current", "id": "1"},
{"config": "current", "id": "2"},
{"config": "current", "id": "3"},
{"config": "previous", "id": "4",},
{"config": "previous", "id": "5"},
{"config": "current", "id": "6"}
]
}
current = [x["id"] for x in jsn["data"] if x["config"] == "current"]
previous = [x["id"] for x in jsn["data"] if x["config"] == "previous"]
res = {"current": current, "previous": previous}
print(res) # {'current': ['0', '1', '2', '3', '6'], 'previous': ['4', '5']}
```
Or the same algorigthm with a function:
```
def get_ids(json_string, key):
return [x["id"] for x in json_string["data"] if x["config"] == key]
res = {
"current": get_ids(jsn, "current"),
"previous": get_ids(jsn, "previous")
}
print(res) # {'current': ['0', '1', '2', '3', '6'], 'previous': ['4', '5']}
``` | 16,533 |
51,937,449 | For my Coursework which I am desperately struggling with I have tried to set my inputs into a dictionary and then use this to format and print the string so that it is displayed as shown below.
>
>
> ```
> Surname, Forename Payroll Department Salary
>
> ```
>
> The name should be displayed using the format shown above so you will need to create a string containing the name in this format and print it in a fixed-width field. You may assume that no name will contain more than 30 characters when displayed in this format, no department will contain more than 15 characters, payroll numbers will contain at most 5 digits and all salaries will be integers less than 100,000.
>
>
>
So far I have only managed this as every time I seem to alter any of line 9it comes back with an error saying that "tuple indices must be integers or slices, not strings" but I have no idea how to do this.
```
payroll = int(input("Enter your Payroll."))
department = input("Enter your Department Name.")
salary = int(input("Enter your Salary."))
forename = input("Enter your Forename.")
surname = input("Enter your Surname.")
list_lect = payroll, department, salary, forename, surname
str = '{0[4]},{0[3]}{0[0:5]} {0[2]} {0[3]}'.format(list_lect)
print(str)
```
Any help would be much appreciated from someone struggling with python. | 2018/08/20 | [
"https://Stackoverflow.com/questions/51937449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10251613/"
] | While it would be simple to make a print function to print the way you want:
```
a = ('Surname', 'Forename', 'Payroll', 'Department', 'Salary')
def printer(tup):
print_string = str("(")
pad = 24
print_string += ", ".join(tup[:2]).ljust(pad)
print_string += ", ".join(tup[2:4]).ljust(pad)
print_string += tup[-1] + ")"
print(print_string)
>>> printer(a)
(Surname, Forename Payroll, Department Salary)
```
I would suggest that it would be cleaner to handle this a different way. Perhaps might I recommend taking in the values separately and then combining them in a named way. Like this
```
payroll = input("Enter your Payroll.")
department = input("Enter your Department Name.")
salary = input("Enter your Salary.")
forename = input("Enter your Forename.")
surname = input("Enter your Surname.")
```
You can then perform which ever grouping you want and print them in a more sane manner
```
print("%s, %s %s, %s %s" % (surename, forename, .....etc)
```
and then you can store them in a data structure that makes sense as well | Why do you need this? Printing a tuple with spacing is impossible to my knowledge, but I'm sure theres another way to achieve what you're looking for. Aside from that, there is a kind of work around, although you aren't printing a tuple, to say.
```
indexs = {
payroll = 0,
dept = 1,
salary = 2,
name = 3,
surname = 4
}
str = "('{surname}', '{name}' '{payroll}', '{dept}'
'{salary}')".format(surname = z[indexs[surname]], name = z[indexs[name]],
payroll = z[indexs[payroll]], dept = z[indexs[dept]], salary =
z[indexs[salary]])
print(str)
```
Its not perfect, as its just string formatting, but if you want your output to look exactly as you said, this is the only way. Of course your aim might be very different. Anyway, hope this helps | 16,534 |
67,967,272 | I am trying to program a calculator using python. It does not let me run the code because this error tells that:
ValueError: could not convert string to float: ''
This code was working but suddenly this error showed up.
Could anyone help me with telling what I should change or add.
This is the part of the code where the error occurs.
```
def operation(self, op):
self.current = float(self.current)
if self.check_sum:
self.valid_function()
elif not self.result:
self.total = self.current
self.input_value = True
self.check_sum = True
self.op = op
self.result = False
``` | 2021/06/14 | [
"https://Stackoverflow.com/questions/67967272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15274509/"
] | As per Lombok documentation (<https://projectlombok.org/api/lombok/AllArgsConstructor.html>):
>
> An all-args constructor requires one argument for every field in the class.
>
>
>
Obviously you haven't provided id as a constructor argument. | If you still need some constructor having not all attributes you can use `lombok.NonNull` & `@RequiredArgsConstrutor`. Simplified example:
```
@AllArgsConstructor
@NoArgsConstructor
@RequiredArgsConstructor
public class Booking {
private Long id;
@lombok.NonNull
private Date startDate;
}
```
will provide you with:
```
public Booking() ...
public Booking(Date startDate) ...
public Booking(Long id, Date startDate) ...
``` | 16,535 |
63,245,187 | I am just starting to get the concept of what [Prometheus](https://prometheus.io/docs/prometheus/latest/getting_started/) is, and I have done a couple of examples already.
I can understand how Prometheus monitors some data, even the one generated by itself and also some data related to a python application for example.
My question is more simple though. If I have a text file of data already generated (for example some metric of something), is there a way for this data be fed to Prometheus so that I can generate queries or visualize the data? | 2020/08/04 | [
"https://Stackoverflow.com/questions/63245187",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4451521/"
] | Short answer: No. If you actually have text files with data you want to analyze I'd suggest you to write the data to another TMDB (InfluxDB for example) or a plain old SQL database and then connect it with Grafana. Also take a look at PowerBI. I prefer it for data that is scoped more towards business analytics than monitoring.
Long answer: There is a unpublished HTTP API that allows you to push metrics in the JSON format. See the following two issues:
<https://github.com/kube-reporting/metering-operator/issues/640>
<https://github.com/kube-reporting/metering-operator/issues/656> | While it is impossible to import historical data to Prometheus, such data can be imported to Prometheus-like systems such as VictoriaMetrics. See [these docs](https://victoriametrics.github.io/#how-to-import-time-series-data) for details. | 16,536 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.