qid (int64, 46k-74.7M) | question (stringlengths 54-37.8k) | date (stringlengths 10-10) | metadata (sequencelengths 3-3) | response_j (stringlengths 29-22k) | response_k (stringlengths 26-13.4k) | __index_level_0__ (int64, 0-17.8k)
---|---|---|---|---|---|---
10,137,026 | So I have a directory structure like this:
```
Execute_directory--> execute.py
|
Algorithm ---> algorithm.py
|
|--> data.txt
```
So I am inside the Execute directory and have added the following path to my Python path:
```
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + "/../Algorithm")
```
algorithm.py has code to read data.txt.
So when I run execute.py, it calls algorithm.py, which in turn reads data.txt.
I thought the above line should have done the job; it is able to find algorithm.py, but not data.txt.
```
IOError: [Errno 2] No such file or directory:'data.txt'
```
Any clue what I am doing wrong?
Thanks | 2012/04/13 | [
"https://Stackoverflow.com/questions/10137026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] | Are you reading `data.txt` in `algorithm.py` like this:
```
open('data.txt')
```
Because that is relative to the *working directory* and not relative to the script's directory.
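A quick way to see where a bare filename will be resolved is to build the path explicitly (a minimal sketch):

```python
import os

# open('data.txt') is resolved against the current working directory,
# not against the directory the script lives in:
relative = 'data.txt'
resolved = os.path.join(os.getcwd(), relative)
print(resolved)  # not necessarily .../Algorithm/data.txt
```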
In `algorithm.py` you could try this:
```
open(os.path.join(os.path.dirname(__file__), 'data.txt'))
``` | This would usually be an issue with relative filenames not being relative to where you expect. Print the contents of `os.path.abspath(filename)` to check this. If it gives you something strange, specifying the absolute path in the first place (when you initialise `filename`) should fix it. | 15,761 |
38,060,383 | I have the following sql query:
```
SELECT
pc.patente,
cs.cpc_group_codigo_cpc_group
FROM
patente_pc pc
,
patente_cpc cpc,
cpc_subgroup cs,
cpc_group cg
WHERE
pc.codigo_patente_pc = cpc.patente_pc_codigo_patente_pc AND
cpc.cpc = cs.codigo_cpc_subgroup AND
cs.cpc_group_codigo_cpc_group = cg.codigo_cpc_group
GROUP BY
pc.patente, cs.cpc_group_codigo_cpc_group
```
I add this query to Python, splitting the string line by line into a tuple to avoid syntax problems,
and it executes correctly,
but when I need to retrieve the data, I use
```
lista_cpcs = []
lista_patentes = []
for (pc.patente, cs.cpc_group_codigo_cpc_group) in cursor:
lista_cpcs.append(cs.cpc_group_codigo_cpc_group)
lista_patentes.append(pc.patente)
return [lista_cpcs, lista_patentes]
```
and I get the error `Global name 'pc' is not defined`.
I get what's happening: it's interpreting `pc` and `cs` as Python names, but they come from the SQL.
How do I work around this?
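For reference, DB-API cursors yield each row as a plain tuple, so the loop can unpack into ordinary local variables; the names below are arbitrary stand-ins, not SQL identifiers, and `rows` stands in for the real `cursor`:

```python
# Stand-in for `cursor`: each fetched row is just a tuple of column values.
rows = [(101, 'A01B'), (102, 'C02F')]

lista_cpcs = []
lista_patentes = []
for patente, cpc_group in rows:   # plain names, no `pc.` / `cs.` prefixes
    lista_cpcs.append(cpc_group)
    lista_patentes.append(patente)
print(lista_cpcs, lista_patentes)
```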
PS: I searched for the Python MySQL connector and didn't find anything about this. | 2016/06/27 | [
"https://Stackoverflow.com/questions/38060383",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691244/"
] | >
> Does Google allow third party access?
>
>
>
Yes. If you're going to be doing interactive programming using mainstream services, learn to use APIs. The Google API collection allows users to register their applications and sites for a *huge* variety of their services...including `Gmail`.
Look [here](https://console.developers.google.com/) for more details.
>
> How can I get started?
>
>
>
Read the documentation and instructions for overall Google-API usage in the link above, *and* the separate documentation for the service you want to use.
For `Gmail`, look [here](https://developers.google.com/gmail/api/). | I agree with the others that it's fairly well documented; in particular, this would be relevant for you if you intend to get started using the Java API:
[Google docs](https://developers.google.com/gmail/api/quickstart/java#step_3_set_up_the_sample)
>
> To run this quickstart, you'll need:
>
>
> Java 1.7 or greater. Gradle 2.3 or greater. Access to the internet and
> a web browser. A Google account with Gmail enabled....
>
>
> | 15,764 |
16,136,341 | I need to optimize a function call that is in a loop, for a time-critical robotics application. My script is in python, which interfaces via ctypes with a C++ library I wrote, which then calls a microcontroller library.
The bottleneck is adding position-velocity-time points to the microcontroller buffer. According to my timing checks, calling the C++ function via ctypes takes about `0.45` seconds, while on the C++ side the called function takes `0.17` seconds. I need to reduce this difference somehow.
Here is the relevant Python code, where `data` is a 2D array of points and `clibrary` is loaded via ctypes:
```
data_np = np.vstack([nodes, positions, velocities, times]).transpose().astype(np.long)
data = ((c_long * 4) * N)()
for i in range(N):
data[i] = (c_long * 4)(*data_np[i])
timer = time()
clibrary.addPvtAll(N, data)
print("clibrary.addPvtAll() call: %f" % (time() - timer))
```
And here is the called C++ function:
```
void addPvtAll(int N, long data[][4]) {
clock_t t0, t1;
t0 = clock();
for(int i = 0; i < N; i++) {
unsigned short node = (unsigned short)data[i][0];
long p = data[i][1];
long v = data[i][2];
unsigned char t = (unsigned char)data[i][3];
VCS_AddPvtValueToIpmBuffer(device(node), node, p, v, t, &errorCode);
}
t1 = clock();
printf("addPvtAll() call: %f \n", (double(t1 - t0) / CLOCKS_PER_SEC));
}
```
I don't absolutely need to use ctypes but I don't want to have to compile the Python code every time I run it. | 2013/04/21 | [
"https://Stackoverflow.com/questions/16136341",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/901553/"
] | The round-trip between Python and C++ can be expensive, especially when using *ctypes* (which is like an interpreted version of a normal C/Python wrapper).
Your goal should be to minimize the number of trips and do the most work possible per trip.
It looks to me like your code has too fine a granularity (i.e. it makes too many trips and does too little work on each trip).
The *numpy* package can expose its data directly to C/C++. That will let you avoid the expensive boxing and unboxing of Python objects (with their attendant memory allocations) and it will let you pass a range of data points rather than a point at a time.
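As a rough stdlib-only sketch of passing one contiguous block per call (the values and the `addPvtAll` call are hypothetical; numpy's `arr.ctypes` interface exposes its buffer to C in much the same way):

```python
import ctypes

# One flat, contiguous buffer holding 2 points x 4 fields (node, p, v, t):
flat = (ctypes.c_long * 8)(1, 100, 5, 10, 2, 200, 6, 10)

# View it as the long[2][4] shape the C++ side expects, with no copying
# and no per-row Python loop:
rows = ctypes.cast(flat, ctypes.POINTER((ctypes.c_long * 4) * 2)).contents
# clibrary.addPvtAll(2, rows)   # hypothetical: one trip for all points
print(rows[1][1])  # second point's position field
```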
Modify your C++ code to process many points at a time rather than once per call (much like the *sqlite3* module does with *execute* vs. *executemany*). | You can just use `data_np.data.tobytes()`:
```
data_np = np.vstack([nodes, positions, velocities, times]).transpose().astype(np.long)
timer = time()
clibrary.addPvtAll(N, data_np.data.tobytes())
print("clibrary.addPvtAll() call: %f" % (time() - timer))
``` | 15,765 |
71,987,704 | So here is the code in question. The error I get when I run the code is:
```
File "D:\obj\windows-release\37amd64\_Release\msi\_python\zip\_amd64\random.py", line 259, in choice
TypeError: object of type 'type' has no len()
```
```
import random
import tkinter as tk
from tkinter import messagebox
root=tk.Tk()
root.title("Tragic 8 Ball")
def get_answer(entry, list):
if (entry.get() != ""):
messagebox.showwarning("Please ask question.")
else: (entry.get() == "")
messagebox.showwarning("Your answer", random.choice(list))
entry=tk.Entry(width=40)
entry.focus_set()
entry.grid()
get_answer(entry, list)
tk.Label(root, text="Ask a question:").grid(row=0)
tk.Button(root, text="Answer my question", command=get_answer(entry, list).grid(row=3), columd=0, sticky=tk.W, pady=4)
list=["It is certain.",
"Outlook good.",
"You may rely on it",
"Ask again later.",
"Concentrate and ask again.",
"Reply hazy, try again.",
"My reply is no.",
"My sources say no."]
root.mainloop()
'''
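# Note: the `random.choice(list)` call fails because, at the moment
# get_answer runs, `list` is still the built-in type (the answers list is
# only defined later, and shadowing the name `list` is best avoided anyway).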
``` | 2022/04/24 | [
"https://Stackoverflow.com/questions/71987704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18834663/"
] | This is verging on an opinion-based question, but I think it is on-topic, since it helps to clarify the syntax and structure of ggplot calls.
In a sense you have already answered the question yourself:
>
> it does not seem to be documented anywhere in the ggplot2 help
>
>
>
This, and the near absence of examples in online tutorials, blogs and SO answers, is a good enough reason not to use `aes` this way (or at least not to teach people to use it this way). It could lead to confusion and frustration on the part of new users.
>
> This fits a lot better into the logic of adding up layers
>
>
>
This is *sort of* true, but could be a bit misleading. What it actually does is to specify the *default* aesthetic mapping that subsequent layers will inherit from the `ggplot` object itself. It should be considered a core *part* of the base plot, along with the default data object, and therefore "belongs" in the initial `ggplot` call, rather than something that is being added or layered on to the plot. If you create a default `ggplot` object without data and mapping, the slots are still there, but contain waivers rather than being `NULL`:
```r
p <- ggplot()
p$mapping
#> Aesthetic mapping:
#> <empty>
p$data
#> list()
#> attr(,"class")
#> [1] "waiver"
```
Note that unlike the scales and co-ordinate objects, for which you might argue that the same is also true, there can be no defaults for data and aesthetic mappings.
Does this mean you should *never* use this syntax? No, but it should be considered an advanced trick for folks who are well versed in ggplot. The most frequent use case I find for it is in changing the mapping of ggplots that are created in extension packages, such as `ggsurvplot` or `ggraph`, where the plotting functions use wrappers around `ggplot`. It can also be used to quickly create multiple plots with the same themes and colour scales:
```r
p <- ggplot(iris, aes(Sepal.Width, Sepal.Length)) +
geom_point(aes(color = Species)) +
theme_light()
library(patchwork)
p + (p + aes(Petal.Width, Petal.Length))
```
[](https://i.stack.imgur.com/E4D9W.png)
So the bottom line is that you can use this if you want, but best avoid teaching it to beginners | ### TL;DR
I cannot see any strong reasons why not to use this pattern, but other patterns are recommended in the documentation, without elaboration.
### What does `+ aes()` do?
A ggplot has two types of aesthetics:
* the default one (typically supplied inside `ggplot()`), and
* `geom_*()` specific aesthetics
If `inherit.aes = TRUE` is set inside the geoms, then these two types of aesthetics are combined in the final plot. If the default aesthetic is not set, then the `geom_*` specific aesthetics must be set.
Using `ggplot(df) + aes(x, y)` changes the **default** aesthetic.
This is documented in `?"+.gg"`:
>
> An aes() object replaces the default aesthetics.
>
>
>
### Are there any reasons not to use it?
I cannot see any strong reasons not to. However, in the documentation of `?ggplot` it is stated that:
>
> There are three common ways to invoke ggplot():
>
>
> * ggplot(df, aes(x, y, other aesthetics))
> * ggplot(df)
> * ggplot()
>
>
> The first method is recommended if all layers use the same data and the same set of aesthetics.
>
>
>
As far as I can see, the typical use case for `+ aes()` is when all layers use the same aesthetics. So the documentation recommend the usual pattern `ggplot(df, aes(x, y, other aesthetics))`, but I cannot find an elaboration of why.
Further: even though the plots look identical, the objects returned by `ggplot(df, aes())` and `ggplot(df) + aes()` are not identical, so there might be some edge cases where one pattern would lead to errors or a different plot.
You can see the many small differences with this code:
```r
library(ggplot2)
a <- ggplot(mtcars, aes(hp, mpg)) + geom_point()
b <- ggplot(mtcars) + aes(hp, mpg) + geom_point()
waldo::compare(a, b, x_arg = "a", y_arg = "b")
``` | 15,767 |
39,086,368 | I'm trying to read beyond the EOF in Python, but so far I'm failing (I also tried seeking to a position and reading a fixed size).
I've found a workaround by working with debugfs and subprocess, but it is quite slow and only works on Linux, not Windows.
My question: is it possible to read a file beyond EOF in Python, in a way that works on all platforms? | 2016/08/22 | [
"https://Stackoverflow.com/questions/39086368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691944/"
] | You can't read more bytes than are in the file. "End of file" literally means exactly that. | You can only move to the end using:
```
file.seek(0, 2)
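# equivalently: file.seek(0, os.SEEK_END), since os.SEEK_END == 2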
```
Is that what you're trying to do? | 15,768 |
35,118,312 | I am trying to install a python package that needs a Windows C++ compiler.
The install procedure sent me to this link:
<https://wiki.python.org/moin/WindowsCompilers>
I am using Python 2.7 x86 on Win 7 x64
The version indicated on that page is not available anymore (Microsoft Visual C++ 9.0 standalone: Visual C++ Compiler for Python 2.7 (x86, x64)).
What can I do? Where can I find the above compiler? | 2016/01/31 | [
"https://Stackoverflow.com/questions/35118312",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2059078/"
] | Not sure what is happening with Microsoft these days, but here is the direct link:
<http://download.microsoft.com/download/7/9/6/796EF2E4-801B-4FC4-AB28-B59FBF6D907B/VCForPython27.msi>
Alternatively, you can search GitHub for "VCForPython27.msi site:github.com".
That will give you either the above link or links to files hosted on Github. | The [express versions of visual studio](https://www.visualstudio.com/products/visual-studio-express-vs) are free, I assume the command line compiler would work.
You might also need to read [Microsoft Visual C++ Compiler for Python 2.7](https://stackoverflow.com/questions/26140192/microsoft-visual-c-compiler-for-python-2-7?rq=1) | 15,769 |
58,862,894 | I'm working in Python using pandas and ultimately want to run a random forest. Python errors out because I can't get this numeric column, which uses spaces as nulls, converted to a float. I tried fillna with zero and astype(float) but had no success. Thanks all!
```
sm['PopHalfMile']
Out[64]:
0 2072
1 4392
2 2986
3
4
...
281 3350
282 1481
283 607
284 4708
285 4237
Name: PopHalfMile, Length: 286, dtype: object
In [65]:
sm['PopHalfMile'].fillna(value=0)
Out[65]:
0 2072
1 4392
2 2986
3
4
...
281 3350
282 1481
283 607
284 4708
285 4237
Name: PopHalfMile, Length: 286, dtype: object
```
So I looked at the csv file in Notepad, and there is a space where the data is null: comma, space, comma.
```
6, ,2103,
```
This is causing me to get this error after trying to convert the field to a float:
```
sm["PopHalfMile"] = sm.PopHalfMile.astype(float)
ValueError: could not convert string to float:
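# A likely fix (sketch): coerce the blank strings to NaN before filling, e.g.
#   sm["PopHalfMile"] = pd.to_numeric(sm["PopHalfMile"], errors="coerce").fillna(0)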
``` | 2019/11/14 | [
"https://Stackoverflow.com/questions/58862894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12374239/"
] | Unfortunately, I don't think there is any way to do regex matching on `if` conditional expressions yet.
One option is to use filtering on `push` events.
```
on:
push:
tags:
- 'v*.*.*'
```
Another option is to do the regex check in a separate step where it [creates a step output](https://help.github.com/en/actions/reference/development-tools-for-github-actions#set-an-output-parameter-set-output). This can then be used in an `if` conditional.
```
- name: Check Tag
id: check-tag
run: |
if [[ ${{ github.event.ref }} =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo ::set-output name=match::true
fi
- name: Build
if: steps.check-tag.outputs.match == 'true'
run: |
echo "Tag is a match"
``` | As per [docs](https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#filter-pattern-cheat-sheet), you can do this:
```
on:
create:
tags:
- "v[0-9]+.[0-9]+"
```
I tried the above and can confirm it works. This is not full regex capability but should suffice for your needs. | 15,771 |
5,072,630 | I am trying to create a simple form/script combination that will allow someone to replace the contents of a certain div in an html file with the text they input in an html form on a separate page.
The script works fine if everything is local: the script is local, I set the working directory to where my html file is, and I pass the parameter myself when I run the script. When I load everything to my hosted site server, however, it gives me a 500 error.
I have been able to execute a simple python script that I stored on my site, and JustHost, my hosting service, has told me that BeautifulSoup has been added to my server.
Here is the script, with the parameter "textcontent" coming from an html form, which works fine. My script is rooted under public\_html/cgi-bin/ and the html I am trying to read and write resides in the root of public\_html. I'm guessing either the html file isn't being found or BeautifulSoup isn't actually available on my server... any way to test that?
```
#!/usr/bin/python
#import beautifulsoup
from BeautifulSoup import BeautifulSoup
# Import modules for CGI handling
import cgi, cgitb, traceback
# Create instance of FieldStorage
try:
form = cgi.FieldStorage()
def text_replace(word):
f = open('/public_html/souptest2.html', 'r')
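    # note: '/public_html/...' is an absolute filesystem path starting at the
    # drive root; on shared hosts the site root is usually under the home
    # directory, e.g. something like os.path.expanduser('~/public_html/...')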
soup = BeautifulSoup(f.read())
f.close()
text = soup.find('div', attrs={'id': 'sampletext'}).string
text.replaceWith(word)
deploy_html = open('/public_html/souptest2.html', 'w')
deploy_html.write(str(soup))
deploy_html.close()
# Get data from fields
if form.getvalue('textcontent'):
text_content = form.getvalue('textcontent')
text_replace(text_content)
else:
text_content = "Not entered"
except:
deploy_html = open('../souptest2.html', 'w')
traceback.print_exc(deploy_html)
deploy_html.close()
```
I have tried to load that as a script and run it from a url and still get a 500 error, with no output on my output page in order to debug using traceback... | 2011/02/21 | [
"https://Stackoverflow.com/questions/5072630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/627466/"
] | Yes, use:
```
range(1,7)
```
that should do it. | Use the [`range`](http://docs.python.org/library/functions.html#range) builtin.
```
range(1, 7)
``` | 15,773 |
59,141,776 | I am following the script below to convert a JSON file to Parquet format. I am using the pandas library to perform the conversion.
However, the following error occurs: AttributeError: 'DataFrame' object has no attribute 'schema'.
I am still new to Python.
Here's the original JSON file I'm using:
```
[
  {
    "a": "01",
    "b": "teste01"
  },
  {
    "a": "02",
    "b": "teste02"
  }
]
```
What am I doing wrong?
```
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
df = pd.read_json('C:/python/json_teste')
pq = pa.parquet.write_table(df, 'C:/python/parquet_teste')
```
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-23-1b4ced833098> in <module>
----> 1 pq = pa.parquet.write_table(df, 'C:/python/parquet_teste')
C:\Anaconda\lib\site-packages\pyarrow\parquet.py in write_table(table, where, row_group_size, version, use_dictionary, compression, write_statistics, use_deprecated_int96_timestamps, coerce_timestamps, allow_truncated_timestamps, data_page_size, flavor, filesystem, **kwargs)
1256 try:
1257 with ParquetWriter(
-> 1258 where, table.schema,
1259 filesystem=filesystem,
1260 version=version,
C:\Anaconda\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
5065 if self._info_axis._can_hold_identifiers_and_holds_name(name):
5066 return self[name]
-> 5067 return object.__getattribute__(self, name)
5068
5069 def __setattr__(self, name, value):
AttributeError: 'DataFrame' object has no attribute 'schema'
```
Print file:
```
#print
print(df)
a b
0 1 teste01
1 2 teste02
#following columns
df.columns
Index(['a', 'b'], dtype='object')
#following types
df.dtypes
a int64
b object
dtype: object
``` | 2019/12/02 | [
"https://Stackoverflow.com/questions/59141776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10515027/"
] | If your motive is to just convert json to parquet, you can probably use pyspark API:
```
>>> data = [ { "a": "01", "b": "teste01" }, { "a": "02", "b": "teste02" } ]
>>> df = spark.createDataFrame(data)
>>> df.write.parquet("data.parquet")
```
Now, this DF is a Spark dataframe, which can be saved as Parquet. | Welcome to Stack Overflow. The library you are using shows in its example that you need to provide a schema with the column names.
Try using the column names of your data frame and it will work.
```
# Given PyArrow schema
import pyarrow as pa
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
])
convert_json(input_filename, output_filename, schema)  # from the json2parquet package (see link below)
```
reference: [json2parquet](https://pypi.org/project/json2parquet/) | 15,774 |
61,619,201 | While adding groups with permissions from the Django admin panel, and when adding other M2M relationships too, I got this error.
It says: **TypeError: \_bulk\_create() got an unexpected keyword argument 'ignore\_conflicts'**
I can't find the cause. Probably a noob mistake.
```
class GroupSerializer(serializers.ModelSerializer):
permissions = PermissionSerializerGroup(many=True, required=False)
class Meta:
model = Group
fields = ('id', 'name', 'permissions')
extra_kwargs = {
'name': {'validators': []},
}
def create(self, validated_data):
print(validated_data)
permissions_data = validated_data.pop("permissions")
obj, group = Group.objects.update_or_create(name=validated_data["name"])
obj.permissions.clear()
for permission in permissions_data:
per = Permission.objects.get(codename=permission["codename"])
obj.permissions.add(per)
obj.save()
return obj
```
Here is the Traceback:
```
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 607, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/utils/decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/sites.py", line 231, in inner
return view(request, *args, **kwargs)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1638, in add_view
return self.changeform_view(request, None, form_url, extra_context)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/utils/decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/utils/decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1522, in changeform_view
return self._changeform_view(request, object_id, form_url, extra_context)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1566, in _changeform_view
self.save_related(request, form, formsets, not add)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1107, in save_related
form.save_m2m()
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/forms/models.py", line 442, in _save_m2m
f.save_form_data(self.instance, cleaned_data[f.name])
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/db/models/fields/related.py", line 1618, in save_form_data
getattr(instance, self.attname).set(data)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 1008, in set
self.add(*new_objs, through_defaults=through_defaults)
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 946, in add
through_defaults=through_defaults,
File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 1129, in _add_items
], ignore_conflicts=True)
TypeError: _bulk_create() got an unexpected keyword argument 'ignore_conflicts'
``` | 2020/05/05 | [
"https://Stackoverflow.com/questions/61619201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8944012/"
] | You should use:
```
@EventBusSubscriber
public static class Class {
@SubscribeEvent
public static void onEvent(EntityJoinWorldEvent event) {
if ((event.getEntity() instanceof PlayerEntity)) {
LogManager.getLogger().info("Joined!");
}
}
}
```
I thought maybe you'd need the instance of the player to be able to get it to work. | ```java
...
@Mod(
modid = Kubecraft.MOD_ID,
name = Kubecraft.MOD_NAME,
version = Kubecraft.VERSION
)
public class Kubecraft {
...
@SubscribeEvent
public static void onEvent(EntityJoinWorldEvent event) {
Timer timer = new Timer(3000, new ActionListener() {
@Override
public void actionPerformed(java.awt.event.ActionEvent e) {
if(!sent) Minecraft.getMinecraft().player.sendChatMessage("/setblock ~ ~ ~ grass");
sent = true;
}
});
timer.setRepeats(false); // Only execute once
if(!sent) {
timer.start();
}
}
}
...
``` | 15,781 |
48,579,232 | [enter image description here](https://i.stack.imgur.com/g89q0.jpg) I was trying to run the following command:
```
python populate_book.py
```
and got stuck with this error:
```
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
```
The whole traceback is as follows:
```
Traceback (most recent call last):
File "populate_book.py", line 6, in <module>
from opac.models import Book, BookCopy
File "/home/prashant/Desktop/po/opac/models.py", line 6, in <module>
from transactions.models import EndUser
File "/home/prashant/Desktop/po/transactions/models.py", line 4, in <module>
class EndUser(models.Model):
File "/home/prashant/Desktop/po/local/lib/python2.7/site-packages/django/db/models/base.py", line 110, in __new__
app_config = apps.get_containing_app_config(module)
File "/home/prashant/Desktop/po/local/lib/python2.7/site-packages/django/apps/registry.py", line 247, in get_containing_app_config
self.check_apps_ready()
File "/home/prashant/Desktop/po/local/lib/python2.7/site-packages/django/apps/registry.py", line 125, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
```
settings.py::
```
# Django settings for cope project.
import os
DEBUG = True
TEMPLATE_DEBUG = DEBUG
PROJECT_PATH = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
ADMINS = (
('pulkit', 'data.pulkit@gmail.com'),
('shashank', 'shashankgrovy@gmail.com'),
('sourabh', 'sourabh.coder@gmail.com'),
('utsav', 'kumaruts@gmail.com'),
)
MANAGERS = ADMINS
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
# Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': 'cope.db',
# Or path to database file if using sqlite3.
# The following settings are not used with sqlite3:
'USER': '',
'PASSWORD': '',
'HOST': '127.0.0.1',
# Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
'PORT': '3306',
# Set to empty string for default.
}
}
# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts
ALLOWED_HOSTS = []
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# In a Windows environment this must be set to your system time zone.
TIME_ZONE = 'Asia/Kolkata'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale.
USE_L10N = True
# If you set this to False, Django will not use timezone-aware datetimes.
USE_TZ = True
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/var/www/example.com/media/"
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://example.com/media/", "http://media.example.com/"
MEDIA_URL = ''
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/var/www/example.com/static/"
STATIC_ROOT = ''
# URL prefix for static files.
# Example: "http://example.com/static/", "http://static.example.com/"
STATIC_URL = '/static/'
# Additional locations of static files
STATICFILES_DIRS = (
os.path.join(os.path.join(PROJECT_PATH, 'cope'), 'static'),
)
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
'django.contrib.staticfiles.finders.FileSystemFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'li=xy2zchjmi=)$+t$*yi5soc77yankx#=au+5&fy17_j3-#e%'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# Uncomment the next line for simple clickjacking protection:
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'cope.urls'
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'cope.wsgi.application'
TEMPLATE_DIRS = (
os.path.join(os.path.join(PROJECT_PATH, 'cope'), 'templates'),
)
DJANGO_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
)
LOCAL_APPS = (
'opac',
'transactions',
)
THIRD_PARTY_APPS = (
'bootstrap_admin',
)
INSTALLED_APPS = THIRD_PARTY_APPS + LOCAL_APPS + DJANGO_APPS
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
# the site admins on every HTTP 500 error when DEBUG=False.
# See http://docs.djangoproject.com/en/dev/topics/logging for
# more details on how to customize your logging configuration.
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
}
}
```
populate\_book.py::
```
import os
from random import randint
if __name__ == '__main__':
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cope.settings")
from opac.models import Book, BookCopy
def make_isbn():
return randint(1000000000000, 9999999999999)
titles = ['Gray Hat Python', 'Introduction to Algorithms', 'Pro Python',
'HTML5 and JavaScript Projects', 'NginX HTTP Server', 'Redis Cookbook',
'Python Fundamentals', 'Land of Lisp', 'Beginning iOS Game Development',
'Ruby on Rails for Dummies', 'Django, AJAX and jQuery']
authors = ['Seitz', 'Thomas H. Corman', 'Marty Alchin', 'Geanie Mayer', 'Clement',
'Tiago Macedo', 'Kent Lee', 'Courad', 'Patrick', 'Barry Burd', 'Jonathan Hayward']
publishers = ['No Starch Press', 'Stanford Press', 'Apress', 'Apress', 'Packt OpenSource',
'O\'Reilly', 'Springer', 'No Starch Press', 'WROK', 'For Dummies', 'Packt OpenSource']
imageurls = ['grayhat.png', 'introtoalgo.png', 'propython.png', 'htmljs.png', 'nginx.png',
'redis.png', 'pythonfundamentals.png', 'lisp.png', 'ios.png', 'ror.png', 'djangoajax.png']
def main():
for i in range(len(titles)):
isbn = make_isbn()
new_book = Book(title=titles[i], isbn=isbn, ddc=randint(100,999), authors = authors[i],
publisher=publishers[i], cost=randint(300,700), imageurl=imageurls[i], pages=randint(300,700))
new_book.save()
print '\n%4d | ISBN: %s | Title: %s' % (i+1, isbn, titles[i])
for j in range(randint(5,10)):
book_number = randint(50000,900000)
book_copy = BookCopy(book_number=book_number, book_category=new_book)
book_copy.save()
print '%10d | Book Number: %-6d | Category: %s' % (j+1, book_number, new_book.title)
if __name__ == '__main__':
main()
```
manage.py::
```
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cope.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
``` | 2018/02/02 | [
"https://Stackoverflow.com/questions/48579232",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9298418/"
] | Three things you should make sure of:
* are all the apps you have in your `INSTALLED_APPS` setting installed
on your system?
* Have you perhaps forgotten to activate the virtualenv
where everything was installed in the first place?
* If you have both of the things above on your system then maybe you
forgot to install the apps from requirements in your venv? You can do this by `pip install -r requirements.txt`
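To quickly sanity-check the virtualenv point above before reinstalling anything, a minimal sketch (the helper function is my own illustration, not from this answer):

```python
import sys

def in_virtualenv():
    # Inside a venv/virtualenv the interpreter's sys.prefix points at the
    # environment, while the base interpreter's prefix is kept separately.
    base = getattr(sys, "base_prefix", None) or getattr(sys, "real_prefix", sys.prefix)
    return base != sys.prefix

print(in_virtualenv())
```

If this prints False while you expected the venv to be active, activate it before running `pip install -r requirements.txt`.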
Replace `requirements.txt` with whatever your requirements file's name is. Make sure you do this after activating the virtual environment. | <https://www.dangtrinh.com/2014/11/how-to-avoid-models-arent-loaded-yet.html>
My advice would strongly be to perform this sort of operation within a Custom Management Command though <https://docs.djangoproject.com/en/2.0/howto/custom-management-commands/>. | 15,782 |
927,150 | I've made a python script which should modify the profile of the phone based on the phone position. Run under ScriptShell, it works great.
The problem is that it hangs, both with the "sis" script run upon "boot up" and without it.
So my question is what is wrong with the code, and also whether I need to pass special parameters to ensymble?
```
import appuifw, e32, sensor, xprofile
from appuifw import *
old_profil = xprofile.get_ap()
def get_sensor_data(status):
    #decide profile
    pass
def exit_key_handler():
# Disconnect from the sensor and exit
acc_sensor.disconnect()
app_lock.signal()
app_lock = e32.Ao_lock()
appuifw.app.exit_key_handler = exit_key_handler
appuifw.app.title = u"Acc Silent"
appuifw.app.menu = [(u'Close', app_lock.signal)]
appuifw.app.body = Canvas()
# Retrieve the acceleration sensor
sensor_type= sensor.sensors()['AccSensor']
# Create an acceleration sensor object
acc_sensor= sensor.Sensor(sensor_type['id'],sensor_type['category'])
# Connect to the sensor
acc_sensor.connect(get_sensor_data)
# Wait for sensor data and the exit event
app_lock.wait()
```
The script starts at boot, using ensymble and my developer certificate.
Thanks in advance | 2009/05/29 | [
"https://Stackoverflow.com/questions/927150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/88054/"
] | I often use something like that at the top of my scripts:
```
import os.path, sys
PY_PATH = None
for p in ['c:\\Data\\Python', 'e:\\Data\\Python','c:\\Python','e:\\Python']:
if os.path.exists(p):
PY_PATH = p
break
if PY_PATH and PY_PATH not in sys.path: sys.path.append(PY_PATH)
``` | xprofile is not a standard library; make sure you add its path. My guess is that when run as a SIS, it doesn't find xprofile and hangs. When releasing your SIS, either instruct users to install it separately or include it inside your SIS.
Use whichever path you have it installed at. Here are Python's default directories as a sample:
```
# PyS60 1.9.x and above
sys.path.append('c:\\Data\\Python')
sys.path.append('e:\\Data\\Python')
# Pys60 1.4.x or below
sys.path.append('c:\\Python')
sys.path.append('e:\\Python')
```
Btw make clean exit, do this:
```
appuifw.app.menu = [(u'Close', exit_key_handler)]
``` | 15,783 |
61,082,945 | So, I'm learning python in school and as a part of my current project I want to be able to make small "popups" on the screen. I've chosen to do this with wxpython but I've run into a problem. Right now I can't find a way to add a variable so I can print anything I want. I tried adding an extra variable both to the class and the `__init__` function but get an error for both. I'm not used to classes or how they work and I would really appreciate some help. Here is the code I am using:
```
import wx
class HelloFrame(wx.Frame):
def __init__(self, *args, **kw):
super(HelloFrame, self).__init__(*args, **kw)
pnl = wx.Panel(self)
st = wx.StaticText(pnl, label = "Betting")
font = st.GetFont()
font.PointSize += 10
font = font.Bold()
st.SetFont(font)
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(st, wx.SizerFlags().Border(wx.TOP|wx.LEFT, 25))
pnl.SetSizer(sizer)
self.makeMenuBar()
self.CreateStatusBar()
self.SetStatusText("Welcome to wxPython!")
def makeMenuBar(self):
fileMenu = wx.Menu()
helloItem = fileMenu.Append(-1, "&Hello...\tCtrl-H",
"Help string shown in status bar for this menu item")
fileMenu.AppendSeparator()
exitItem = fileMenu.Append(wx.ID_EXIT)
helpMenu = wx.Menu()
aboutItem = helpMenu.Append(wx.ID_ABOUT)
menuBar = wx.MenuBar()
menuBar.Append(fileMenu, "&file")
menuBar.Append(helpMenu, "&help")
self.SetMenuBar(menuBar)
self.Bind(wx.EVT_MENU, self.OnHello, helloItem)
self.Bind(wx.EVT_MENU, self.OnExit, exitItem)
self.Bind(wx.EVT_MENU, self.OnAbout, aboutItem)
def OnExit(self, event):
self.Close(True)
def OnHello(self, event):
wx.MessageBox("Hello again from wxPython")
def OnAbout(self, event):
wx.MessageBox("This is a wxPython Hello World Sample", "About Hello World 2", wx.OK|wx.ICON_INFORMATION)
if __name__ == '__main__':
x = number()
app = wx.App()
frm = HelloFrame(None, title='Betting predictions')
frm.Show()
app.MainLoop()
```
Now I want to be able to input anything as a label into the `st = wx.StaticText(pnl, label = "Betting")`
Instead of just "Betting" and have it show whatever text I want but for the life of me I can't figure it out. This is a relatively small part of my project since I'm done and have some extra time but I would really like to get it to work. | 2020/04/07 | [
"https://Stackoverflow.com/questions/61082945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13250047/"
] | You almost have it. You just need to straighten out a few details. First, if the input fails, you want an empty input:
```
try:
move = [int(s) for s in input("Select a cell (row,col) > ").split(",")]
except ValueError:
move = []
```
Now you want to repeat the input until it is valid. You first need the syntax for a while loop:
```
while <condition>:
<body>
```
Where `condition` evaluates to a boolean and `<body>` is the lines to repeat. In this case, you want to repeat the `try...except`.
```
def main():
    move = []
    while move == []:
        try:
            move = [int(s) for s in input("Select a cell (row,col) > ").split(",")]
        except ValueError:
            move = []
    x = move[0]
    y = move[1]
```
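The same validation can also be factored into a small helper that is easy to test without calling `input()` (the function name here is my own choice):

```python
def parse_move(text):
    """Return [row, col] parsed from a 'row,col' string, or None if invalid."""
    parts = text.split(",")
    if len(parts) != 2:
        return None
    try:
        return [int(p) for p in parts]
    except ValueError:
        return None

print(parse_move("2,3"))   # [2, 3]
print(parse_move("oops"))  # None
```

A loop can then simply keep prompting until `parse_move` returns a value.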
When you get stuck on syntax issues like this, I suggest you read the documentation and tutorials at <https://python.org>. They explain how to correctly write a while loop or try...except and more. | You could do this with a nested function and a recursive call if the input doesn't conform to expectations.
```py
import re
def main():
def prompt():
digits = input("Select a cell (row,col) > ")
if not re.match(r'\d+,\d+', digits):
print('Error message')
prompt()
return digits.split(',')
move = prompt()
x = move[0]
y = move[1]
main()
``` | 15,784 |
2,622,866 | How can I serialize a python Dictionary to JSON and pass back to javascript, which contains a string key, while the value is a List (i.e. [])
```
if request.is_ajax() and request.method == 'GET':
groupSet = GroupSet.objects.get(id=int(request.GET["groupSetId"]))
groups = groupSet.groups.all()
group_items = [] #list
groups_and_items = {} #dictionary
for group in groups:
group_items.extend([group_item for group_item in group.group_items.all()])
#use group as Key name and group_items (LIST) as the value
groups_and_items[group] = group_items
data = serializers.serialize("json", groups_and_items)
return HttpResponse(data, mimetype="application/json")
```
the result:
```
[{"pk": 5, "model": "myApp.group", "fields": {"name": "\u6fb4\u9584", "group_items": [13]}}]
```
while the group_items should have many group_item entries, and each group_item should have a "name", rather than only the Id (in this case the Id is 13).
I need to serialize the group name, as well as the group_item's Id and name, as JSON and pass it back to javascript.
I am new to Python and Django, please advise me if you have a better way to do this; I'd appreciate it. Thank you so much. :) | 2010/04/12 | [
"https://Stackoverflow.com/questions/2622866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/314614/"
] | Your 'groups' variable is a QuerySet object, not a dict. You will want to be more explicit with the data that you want to return.
```
import json
groups_and_items = {}
for group in groups:
group_items = []
for item in group.group_items.all():
group_items.append( {'id': item.id, 'name': item.name} )
# <OR> if you just want a list of the group_item names
#group_items = group.group_items.all().values_list('name', flat=True)
groups_and_items[group.name] = group_items
data = json.dumps(groups_and_items)
```
What exactly did you want your data to look like? The above should give you `data` like this:
```
{ 'groupA': [{'id': 1, 'name': 'item-1'}],
  'groupB': [{'id': 2, 'name': 'item-2'}, ...],
  'groupC': []
}
```
Or this if you just want the list of group_item names:
```
{ 'groupA': ['item-1'],
  'groupB': ['item-2', ...],
  'groupC': []
}
``` | You should use Python's [json](http://docs.python.org/library/json.html) module to encode your JSON.
Also, what indentation level do you have `data = serializers` at? It looks like it could be inside the for loop? | 15,786 |
46,366,139 | Hi, I have a simplified example of my problem.
I would like to get an output of:
```
1
a
b
2
c
3
d
e
f
4
g
5
h
```
I have tried different variations but can't figure out the logic. My code is below. Thanks for your help in advance. I am trying to do it without using numpy or pandas. I am using python3.4.
```
num = ["1" , "2" ,"3" , "4" , "5" ]
let = [["a","b"],["c"],["d","e","f"],["g"],["h"]]
for x in num :
print(x)
for y in let :
print(y)
zipBoth = zip(num,let)
for x,y in zipBoth :
print(x)
print(y)
``` | 2017/09/22 | [
"https://Stackoverflow.com/questions/46366139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7680853/"
] | Note that you are trying to print the contents of two lists. This is a linear operation in time. Two loops just won't cut it - that's quadratic in time complexity. Furthermore, your second solution doesn't flatten `y`.
---
Define a helper function using `yield` and `yield from`.
```
def foo(l1, l2):
for x, y in zip(l1, l2):
yield x
yield from y
for i in foo(num, let):
print(i)
1
a
b
2
c
3
d
e
f
4
g
5
h
```
If you want a list instead, just call `foo` with a `list` wrapper around it:
```
print(list(foo(num, let)))
['1', 'a', 'b', '2', 'c', '3', 'd', 'e', 'f', '4', 'g', '5', 'h']
```
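If `yield from` is unavailable (it needs Python 3.3+, as noted below), an explicit inner loop is equivalent; a quick sketch:

```python
def foo2(l1, l2):
    # same flattening as foo, without `yield from`
    for x, y in zip(l1, l2):
        yield x
        for item in y:
            yield item

num = ["1", "2", "3", "4", "5"]
let = [["a", "b"], ["c"], ["d", "e", "f"], ["g"], ["h"]]
print(list(foo2(num, let)))
# ['1', 'a', 'b', '2', 'c', '3', 'd', 'e', 'f', '4', 'g', '5', 'h']
```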
Note that `yield from` becomes available to use from python3.3 onwards. | just `zip` the lists and flatten twice applying `itertools.chain`
```
num = ["1" , "2" ,"3" , "4" , "5" ]
let = [["a","b"],["c"],["d","e","f"],["g"],["h"]]
import itertools
result = list(itertools.chain.from_iterable(itertools.chain.from_iterable(zip(num,let))))
```
now `result` yields:
```
['1', 'a', 'b', '2', 'c', '3', 'd', 'e', 'f', '4', 'g', '5', 'h']
```
which you can print with:
```
print(*result,sep="\n")
``` | 15,787 |
3,248,194 | What's wrong with this code?
Here is my HTML:
```
<html><body>
<form action="iindex.py" method="POST" enctype="multipart/form-data">
<p>File: <input type="file" name="ssfilename"></p>
<p><input type="submit" value="Upload" name="submit"></p>
</form>
</body></html>
```
This is my Python script:
```
#! /usr/bin/env python
import os, sys;
from mod_python import apache
import cgi
import cgitb; cgitb.enable()
form = cgi.FieldStorage(keep_blank_values=1)
fileitem = form["ssfilename"]
.....
```
This is the line where I get KeyError.
```
File "/Applications/MAMP/python/framework/Python.framework/Versions/2.6/lib/python2.6/cgi.py", line 541, in __getitem__
raise KeyError, key
KeyError: 'ssfilename'
``` | 2010/07/14 | [
"https://Stackoverflow.com/questions/3248194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/392373/"
] | Edit: Totally missed the part where you are doing `keep_blank_values = 1`; sorry, no idea what is wrong.
From <http://docs.python.org/library/cgi.html>:
>
> Form fields containing empty strings are ignored and do not appear in the dictionary; to keep such values, provide a true value for the optional keep_blank_values keyword parameter when creating the FieldStorage instance.
>
>
>
Therefore, this is happening because this field was left blank. | Check if you have no GET parameters in your form action URL.
If you need to pass on any data put it as form elements inside the form to be POSTed along with your upload file.
Then you find all your POSTed vars in `cgi.FieldStorage`. | 15,795 |
59,432,477 | I am working through an issue with scraping a webtable using python. I have been scraping what I would call 'standard' tables for a while and I feel like I understand that reasonably well. I define a standard table as having a structure like:
```
<table>
<tr class="row-class">
<th>Bill</th>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr class="row-class">
<th>Ben</th>
<td>2</td>
<td>3</td>
<td>4</td>
<td>1</td>
</tr>
<tr class="row-class">
<th>Barry</th>
<td>3</td>
<td>4</td>
<td>1</td>
<td>2</td>
</tr>
</table>
```
I have now come across a table instance which has a slightly different structure and I can't figure out how to get the data out of it in the format I need. The format I am now trying to scrape is:
```
<table>
<tr class="row-class">
<th>Bill</th></tr>
<tr><td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr class="row-class">
<th>Ben</th></tr>
<tr>
<td>2</td>
<td>3</td>
<td>4</td>
<td>1</td>
</tr>
<tr class="row-class">
<th>Barry</th></tr>
<tr>
<td>3</td>
<td>4</td>
<td>1</td>
<td>2</td>
</tr>
</table>
```
The output I am trying to achieve is:
```
Bill,1,2,3,4
Ben,2,3,4,1
Barry,3,4,1,2
```
I assume the problem I am encountering is that because the header is stored in a separate tr row, I only get an output of:
```
Bill
Ben
Barry
```
I am wondering if the solution is to traverse the rows and determine if the next tag is a th or td and then perform an appropriate action? I'd appreciate any advice on how the code I am using to test this could be modified to achieve the desired output. The code is:
```
from bs4 import BeautifulSoup
t_obj = """<tr class="row-class">
<th>Bill</th></tr>
<tr><td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr class="row-class">
<th>Ben</th></tr>
<tr>
<td>2</td>
<td>3</td>
<td>4</td>
<td>1</td>
</tr>
<tr class="row-class">
<th>Barry</th></tr>
<tr>
<td>3</td>
<td>4</td>
<td>1</td>
<td>2</td>
</tr>"""
soup = BeautifulSoup(t_obj)
trs = soup.find_all("tr", {"class":"row-class"})
for tr in trs:
for th in tr.findAll('th'):
print (th.get_text())
for td in tr.findAll('td'):
print(td.get_text())
print(td.get_text())
``` | 2019/12/20 | [
"https://Stackoverflow.com/questions/59432477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10862305/"
] | Here I use 3 methods to pair the two `<tr>` tags together:
* 1st method is using `zip()` and CSS selector
* 2nd method is using BeautifulSoup's method `find_next_sibling()`
* 3rd method is using `zip()` and simple slicing with custom step
---
```
from bs4 import BeautifulSoup
t_obj = """<tr class="row-class">
<th>Bill</th></tr>
<tr><td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr class="row-class">
<th>Ben</th></tr>
<tr>
<td>2</td>
<td>3</td>
<td>4</td>
<td>1</td>
</tr>
<tr class="row-class">
<th>Barry</th></tr>
<tr>
<td>3</td>
<td>4</td>
<td>1</td>
<td>2</td>
</tr>"""
soup = BeautifulSoup(t_obj, 'html.parser')
for tr1, tr2 in zip(soup.select('tr.row-class'), soup.select('tr.row-class ~ tr:not(.row-class)')):
print( ','.join(tag.get_text() for tag in tr1.select('th') + tr2.select('td')) )
print()
for tr in soup.select('tr.row-class'):
print( ','.join(tag.get_text() for tag in tr.select('th') + tr.find_next_sibling('tr').select('td')) )
print()
trs = soup.select('tr')
for tr1, tr2 in zip(trs[::2], trs[1::2]):
print( ','.join(tag.get_text() for tag in tr1.select('th') + tr2.select('td')) )
```
Prints:
```
Bill,1,2,3,4
Ben,2,3,4,1
Barry,3,4,1,2
Bill,1,2,3,4
Ben,2,3,4,1
Barry,3,4,1,2
Bill,1,2,3,4
Ben,2,3,4,1
Barry,3,4,1,2
``` | You can use indexing:
```
from bs4 import BeautifulSoup as soup
d = soup(html, 'html.parser').find_all('tr')
result = [[d[i].text]+[c.text for c in d[i+1].find_all('td')] for i in range(0, len(d), 2)]
```
To print your result:
```
print('\n'.join(f'{a[1:]},{",".join(b)}' for a, *b in result))
```
Output:
```
Bill,1,2,3,4
Ben,2,3,4,1
Barry,3,4,1,2
``` | 15,797 |
10,061,124 | I once read this entry in mailing list <http://archives.postgresql.org/pgsql-hackers/2005-06/msg01481.php>
```
SELECT *
FROM foo_func(
c => current_timestamp::timestamp with time zone,
a => 2,
b => 5
);
```
Now I need this kind of solution where I can pass an associative-array argument to a function.
Do I need to make a dummy table and then use that table as the argument type? Or is there any straightforward fix for this? Or has this hack been implemented?
or can I emulate the same using pl/python ? | 2012/04/08 | [
"https://Stackoverflow.com/questions/10061124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/256007/"
] | If you have `a` as your clustering key, then that column is included in all non-clustered indices on that table.
So your index on `c` also includes `a`, so the condition
```
where c= 3 and a = 3
```
can be found in that index using an index seek. Most likely, the query optimizer decided that doing an index seek to find `a` and `c` and a key lookup to get the rest of the data is faster/more efficient here than using an index scan.
BTW: why did you expect / prefer an index scan over an index seek? The index seek typically is faster and uses a lot less resources - I would always strive to get index seeks over scans. | >
> *This is fine, because the non clustered index doesn't have b as the key value. Hence it does an index scan from column a.*
>
>
>
This assumption is not right. Index seek and scan have to deal with the WHERE clause and not the select clause.
Now your question -
The WHERE clause is optimised by the SQL optimizer, and as there is an a=3 condition, the clustered index can be applied. | 15,799 |
66,636,134 | I have written a python program which makes an API call to a webserver once every minute, then parses the JSON response and saves the parsed values into CSV files.
Here is the code that is saving the values into the csv file:
```
with open('data.csv', 'a', newline='') as file:
writer = csv.writer(file)
writer.writerow([current_time,SHORTPERC, LONGPERC, SHORTvolume, longVolume, longPositions, shortPositions])
```
How can I make it so that it saves the header only once, on the topmost row, and not on every row?
**UPDATE:**
here is a bit of more code to make api call and write the data to file :
```
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger
import requests
import json
import csv
from datetime import datetime
def fn():
print("Starting...")
session_id = "auZsJ4F2RsQNJxSPTMDt2238324"
Outlook='http://www.myfxbook.com/api/get-community-outlook.json?session=' + session_id
Outlook_response = requests.get(Outlook)
Outlook_data = Outlook_response.json()['symbols']
now = datetime.now()
current_time = now.strftime("%H:%M")
EURUSD=Outlook_data[0]
SHORTPERC=EURUSD['shortPercentage']
LONGPERC =EURUSD['longPercentage']
SHORTvolume=EURUSD['shortVolume']
longVolume=EURUSD['longVolume']
longPositions=EURUSD['longPositions']
shortPositions=EURUSD['shortPositions']
with open('data.csv', 'a', newline='') as file:
writer = csv.writer(file)
writer.writerow([current_time,SHORTPERC, LONGPERC, SHORTvolume, longVolume, longPositions, shortPositions])
with open('data1.csv', 'a', newline='') as file:
writer = csv.writer(file)
writer.writerow([SHORTvolume, longVolume])
with open('data2.csv', 'a', newline='') as file:
writer = csv.writer(file)
writer.writerow([SHORTPERC, LONGPERC])
```
I can't post the full code because it would be very ugly since it's around 700 lines long, but the above-mentioned code should work to create the csv file.
This is how one of my csv files looks:
```
07:11,31,69,555.55,1265.14,4750,2607
07:12,31,69,555.55,1265.16,4751,2607
07:13,31,69,555.55,1265.16,4751,2607
07:14,30,70,555.56,1267.36,4752,2608
07:15,30,70,555.56,1267.36,4752,2608
07:16,30,70,555.56,1267.36,4752,2608
07:17,30,70,555.46,1267.36,4752,2607
07:18,31,69,558.61,1267.36,4752,2610
07:19,31,69,558.61,1267.37,4753,2610
07:20,31,69,561.58,1267.37,4753,2611
07:21,31,69,561.61,1267.37,4753,2613
07:22,31,69,561.65,1267.37,4753,2614
07:23,31,69,561.65,1267.36,4752,2614
```
This is just part of the csv file; more rows keep getting added as time passes.
**EDIT 2:**
answer suggested by Sparkofska seems to work but somehow it ends up giving an empty row in between every line like this:
>
> Time,ShortPer,LongPer,ShortVolume,LongVolume,ShortPosition,LongPosition
>
>
> 05:47,44,56,19528.8,24789.27,65223,48630
>
>
> 05:48,44,56,19529.04,24789.27,65223,48633
>
>
>
code :
```
EURUSD=Outlook_data[0]
SHORTPERC=EURUSD['shortPercentage']
LONGPERC =EURUSD['longPercentage']
SHORTvolume=EURUSD['shortVolume']
longVolume=EURUSD['longVolume']
longPositions=EURUSD['longPositions']
shortPositions=EURUSD['shortPositions']
filename='EURUSD.csv';
def write_row_header_aware(filename, row):
if not os.path.exists(filename) or os.stat(filename).st_size == 0:
with open(filename, 'a') as file:
writer = csv.writer(file)
writer.writerow(['Time', 'ShortPer', 'LongPer','ShortVolume','LongVolume','ShortPosition','LongPosition'])
with open(filename, 'a') as file:
writer = csv.writer(file)
writer.writerow([current_time,SHORTPERC, LONGPERC, SHORTvolume, longVolume, longPositions, shortPositions])
write_row_header_aware(filename, [current_time,SHORTPERC, LONGPERC, SHORTvolume, longVolume, longPositions, shortPositions])
print("done...")
``` | 2021/03/15 | [
"https://Stackoverflow.com/questions/66636134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15392786/"
] | The properties you've put on `<v-col>` don't exist (i.e. align-end and justify-end). They are properties on the `<v-row>` component (which is a flex container). You need to use classes instead.
Make sure to consult the API->props section on the Vuetify component page when choosing component properties.
Try
```html
<v-col class="d-flex justify-end">
<v-btn>Button</v-btn>
</v-col>
```
Note that the alignment is dependent upon the width of v-col. If v-col is only as wide as the button, you'll need to set the width by using the cols="x" property. | Add `direction: rtl` to your `v-btn`, Here is [codepen](https://codepen.io/MNSY22/pen/qBqWZEv):
```html
<template>
<v-btn class="btn rtl">
...
</v-btn>
</template>
<style>
.rtl { direction: rtl; }
</style>
``` | 15,800 |
29,711,646 | I'm trying to create examples on how to manipulate massive databases composed of CSV tables using only Python.
I'd like to find out a way to emulate efficient indexed queries in tables spread through some `list()`
The example below takes 24 seconds in a 3.2Ghz Core i5
```
#!/usr/bin/env python
import csv
MAINDIR = "../"
pf = open (MAINDIR+"atp_players.csv")
players = [p for p in csv.reader(pf)]
rf = open (MAINDIR+"atp_rankings_current.csv")
rankings = [r for r in csv.reader(rf)]
for i in rankings[:10]:
player = filter(lambda x: x[0]==i[2],players)[0]
print "%s(%s),(%s) Points: %s"%(player[2],player[5],player[3],i[3])
```
For [this dataset](https://github.com/JeffSackmann/tennis_atp).
A more efficient, or more *pythonic* way would be greatly appreciated. | 2015/04/18 | [
"https://Stackoverflow.com/questions/29711646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417415/"
] | You can `itertools.islice` instead of reading all rows and use `itertools.ifilter`:
```
import csv
from itertools import islice,ifilter
MAINDIR = "../"
with open(MAINDIR + "atp_players.csv") as pf, open(MAINDIR + "atp_rankings_current.csv") as rf:
players = list(csv.reader(pf))
rankings = csv.reader(rf)
# only get first ten rows using islice
for i in islice(rankings, None, 10):
        # ifilter won't create a list, gives values on the fly
player = next(ifilter(lambda x: x[0] == i[2], players),"")
```
Not quite sure what `filter(lambda x: x[0]==i[2],players)[0]` is doing, you seem to be searching the whole players list each time and just keeping the first element. It might pay to sort the list once with the first element as the key and use bisection search or build a dict with the first element as the key and the row as the value then simply do lookups.
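The bisection option mentioned here could look like the following sketch (the toy rows are mine); the dict-based version the answer actually uses follows below:

```python
from bisect import bisect_left

# toy stand-ins for the CSV rows: the first element is the lookup key
players = [["3", "Barry"], ["1", "Bill"], ["2", "Ben"]]
players.sort(key=lambda row: row[0])    # sort once, O(n log n)
keys = [row[0] for row in players]      # parallel list of keys for bisect

def lookup(key):
    i = bisect_left(keys, key)          # O(log n) per lookup
    if i < len(keys) and keys[i] == key:
        return players[i]
    return None

print(lookup("2"))  # ['2', 'Ben']
```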
```
import csv
from itertools import islice,ifilter
from collections import OrderedDict
MAINDIR = "../"
with open(MAINDIR + "atp_players.csv") as pf, open(MAINDIR + "atp_rankings_current.csv") as rf:
players = OrderedDict((row[0],row) for row in csv.reader(pf))
rankings = csv.reader(rf)
for i in islice(rankings, None, 10):
# now constant work getting row as opposed to 0(n)
player = players.get(i[2])
```
What default value you use or indeed if any is needed you will have to decide.
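As a quick illustration of the `get` default being referred to (toy data mine):

```python
players = {"1": ["1", "Bill"], "2": ["2", "Ben"]}
print(players.get("2"))             # ['2', 'Ben']
print(players.get("9"))             # None
print(players.get("9", "unknown"))  # unknown
```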
If you have repeating elements at the start of each row but just want to return the first occurrence:
```
with open(MAINDIR + "atp_players.csv") as pf, open(MAINDIR + "atp_rankings_current.csv") as rf:
players = {}
for row in csv.reader(pf):
key = row[0]
if key in players:
continue
players[key] = row
rankings = csv.reader(rf)
for i in islice(rankings, None, 10):
player = players.get(i[2])
```
Output:
```
Djokovic(SRB),(R) Points: 11360
Federer(SUI),(R) Points: 9625
Nadal(ESP),(L) Points: 6585
Wawrinka(SUI),(R) Points: 5120
Nishikori(JPN),(R) Points: 5025
Murray(GBR),(R) Points: 4675
Berdych(CZE),(R) Points: 4600
Raonic(CAN),(R) Points: 4440
Cilic(CRO),(R) Points: 4150
Ferrer(ESP),(R) Points: 4045
```
Timing for the code on ten players shows ifilter to be the fastest but we will see the dict winning when we increase rankings and just how badly your code scales:
```
In [33]: %%timeit
MAINDIR = "tennis_atp-master/"
pf = open ("/tennis_atp-master/atp_players.csv")
players = [p for p in csv.reader(pf)]
rf =open( "/tennis_atp-master/atp_rankings_current.csv")
rankings = [r for r in csv.reader(rf)]
for i in rankings[:10]:
player = filter(lambda x: x[0]==i[2],players)[0]
....:
10 loops, best of 3: 123 ms per loop
In [34]: %%timeit
with open("/tennis_atp-master/atp_players.csv") as pf, open("/tennis_atp-master/atp_rankings_current.csv") as rf:
    players = list(csv.reader(pf))
    rankings = csv.reader(rf)
    # only get first ten rows using islice
    for i in islice(rankings, None, 10):
        # ifilter won't create a list, gives values on the fly
        player = next(ifilter(lambda x: x[0] == i[2], players),"")
....:
10 loops, best of 3: 43.6 ms per loop
In [35]: %%timeit
with open("/tennis_atp-master/atp_players.csv") as pf, open("/tennis_atp-master/atp_rankings_current.csv") as rf:
    players = {}
    for row in csv.reader(pf):
        key = row[0]
        if key in players:
            continue
        players[row[0]] = row
    rankings = csv.reader(rf)
    for i in islice(rankings, None, 10):
        player = players.get(i[2])
        pass
....:
10 loops, best of 3: 50.7 ms per loop
```
Now with 100 players you will see the dict is as fast as it was for 10. The cost of building the dict has been offset by constant time lookups:
```
In [38]: %%timeit
with open("/tennis_atp-master/atp_players.csv") as pf, open("/tennis_atp-master/atp_rankings_current.csv") as rf:
    players = list(csv.reader(pf))
    rankings = csv.reader(rf)
    # only get first ten rows using islice
    for i in islice(rankings, None, 100):
        # ifilter won't create a list, gives values on the fly
        player = next(ifilter(lambda x: x[0] == i[2], players),"")
....:
10 loops, best of 3: 120 ms per loop
In [39]: %%timeit
with open("/tennis_atp-master/atp_players.csv") as pf, open("/tennis_atp-master/atp_rankings_current.csv") as rf:
    players = {}
    for row in csv.reader(pf):
        key = row[0]
        if key in players:
            continue
        players[row[0]] = row
    rankings = csv.reader(rf)
    for i in islice(rankings, None, 100):
        player = players.get(i[2])
        pass
....:
10 loops, best of 3: 50.7 ms per loop
In [40]: %%timeit
MAINDIR = "tennis_atp-master/"
pf = open ("/tennis_atp-master/atp_players.csv")
players = [p for p in csv.reader(pf)]
rf =open( "/tennis_atp-master/atp_rankings_current.csv")
rankings = [r for r in csv.reader(rf)]
for i in rankings[:100]:
player = filter(lambda x: x[0]==i[2],players)[0]
....:
1 loops, best of 3: 806 ms per loop
```
For 250 players:
```
# your code
1 loops, best of 3: 1.86 s per loop
# dict
10 loops, best of 3: 50.7 ms per loop
# ifilter
10 loops, best of 3: 483 ms per loop
```
The final test looping over the whole rankings:
```
# your code
1 loops, best of 3: 2min 40s per loop
# dict
10 loops, best of 3: 67 ms per loop
# ifilter
1 loops, best of 3: 1min 3s per loop
```
So you can see as we loop over more rankings the **dict** option is by far the most efficient as far as runtime goes and will scale extremely well. | This code doesn't take that much time to run. So I'm going to assume that you were really running through more of the rankings than just 10. When I run through them all it takes a long time. If that is what you are interested in doing, then a dictionary would shorten the search time. For a bit of overhead to set up the dictionary, you can search it very fast. Here's how I've modified your for loop:
```
play_dict = {}
for index, player in enumerate(players):
play_dict[player[0]] = index
for i in rankings[:10]:
player = players[play_dict[i[2]]]
```
With this code you can process all the rankings instantaneously. | 15,801 |
68,705,417 | I am getting the below error while running a pyspark program on PYCHARM,
Error:
>
> java.io.IOException: Cannot run program "python3": CreateProcess error=2, The system cannot find the file specified ......
>
>
>
The interpreter is recognizing the python.exe file and I have added the Content root in project structure.
I got a similar issue while running the same program before in on windows command prompt and solved it using [What is the right way to edit spark-env.sh before running spark-shell?](https://stackoverflow.com/questions/38300099/what-is-the-right-way-to-edit-spark-env-sh-before-running-spark-shell) | 2021/08/08 | [
"https://Stackoverflow.com/questions/68705417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11609306/"
] | Create an environment variable PYSPARK_PYTHON with the value 'python'.
It worked for me! | 1. Go to Environment Variables and, within System variables, set a new variable `PYSPARK_PYTHON` with the value `python`
>
> PYSPARK_PYTHON=python
>
>
>
2. Add the code below to your pyspark code:
```
import os
import sys
from pyspark import SparkContext
os.environ['PYSPARK_PYTHON'] = sys.executable
os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable
``` | 15,804 |
60,553,140 | I have the following statement that lets me run a SQL query from a Python file and then returns a dataframe of the data collected by the query:
```
params = 'DRIVER={ODBC Driver 13 for SQL Server};' \
'SERVER=localhost;' \
'PORT=XXX;' \
'DATABASE=database_name;' \
'UID=XXX;' \
'PWD=XXX;'
params = urllib.parse.quote_plus(params)
db = create_engine('mssql+pyodbc:///?odbc_connect=%s' % params)
sql = '''
select * from table_name
where column_name= variable_in_python
'''
dataframe = pd.read_sql_query(sql, db)
```
Now, after the 'where' clause, I want to use a variable that I declare in Python, for example an id number (let's say 1123581321). This variable needs to go in that 'variable\_in\_python' place.
I tried:
```
import pyodbc as py
import urllib
from sqlalchemy import create_engine
import pandas as pd
x = 1123581321
params = 'DRIVER={ODBC Driver 13 for SQL Server};' \
'SERVER=localhost;' \
'PORT=XXX;' \
'DATABASE=database_name;' \
'UID=XXX;' \
'PWD=XXX;'
params = urllib.parse.quote_plus(params)
db = create_engine('mssql+pyodbc:///?odbc_connect=%s' % params)
sql = '''
select * from table_name
where column_name= 'x'
'''
dataframe = pd.read_sql_query(sql, db)
```
This obviously doesn't work, but I have no idea how to do this (if it can be done at all).
Any suggestions are more than welcome! | 2020/03/05 | [
"https://Stackoverflow.com/questions/60553140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12292254/"
] | You can use `?` as a [placeholder](https://learn.microsoft.com/en-us/sql/connect/php/how-to-perform-parameterized-queries?view=sql-server-ver15) in the query and pass the value as a parameter to the `read_sql_query` function:
```
sql = '''
select * from table_name
where column_name= ?
'''
dataframe = pd.read_sql_query(sql, db, params=(variable_in_python,))
``` | You can do something like:
```
sql = '''
select * from table_name
where column_name= {}
'''.format(variable_in_python)
```
For more information, have a look at <https://docs.python.org/3/tutorial/inputoutput.html> | 15,806 |
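As a side note, the `?` placeholder approach is also safer than `.format`, because the driver does the quoting and prevents SQL injection. A minimal runnable sketch using the stdlib sqlite3 driver (a different driver than pyodbc, but pyodbc uses the same `?` placeholder style):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (column_name INTEGER, payload TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1123581321, "match"), (42, "other")])

x = 1123581321
# The driver substitutes the value safely; no manual quoting needed
rows = con.execute("SELECT payload FROM t WHERE column_name = ?", (x,)).fetchall()
print(rows)  # [('match',)]
```

Note that the placeholder token varies by driver (`?` for sqlite3/pyodbc, `%s` for some others).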
51,759,688 | Why is only the ***if*** branch executed, and not the ***else*** branch, when we write an ***if-else*** where the ***if*** has a constant value? For example, this code in Python:
```
x=5
if 5:
print("hello 5")
else:
print("bye")
```
Also note that in the second line, even if I replace 5 with 500 or any other number, only the if branch is executed. Can anyone please explain? | 2018/08/09 | [
"https://Stackoverflow.com/questions/51759688",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6659144/"
] | Threading is your only possibility. Also, pressing ENTER is always required when you are using std::cin. This could work:
```
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    int i = 1;
    // std::cin >> input reads an integer, so compare against the number 0,
    // not the character literal '0' (which is the integer 48)
    std::atomic_int ch{1};
    std::atomic_bool readKeyboard{true};
    std::thread t([&ch, &readKeyboard]() {
        while (readKeyboard) {
            int input;
            if (std::cin >> input) {
                ch = input;
                if (ch == 0) {
                    break;
                }
            }
        }
    });
    while (ch != 0) {
        std::cout << "Hi" << i << std::endl;
        ++i;
    }
    readKeyboard = false;
    t.join();
    return 0;
}
``` | You can do this but you will have to use threads. Here is a minimal example of how to achieve this behaviour. Please note that you will need at least C++11.
```
#include <iostream>
#include <thread>
#include <atomic>

int main()
{
    // Must be initialized: a default-constructed std::atomic holds an
    // indeterminate value before C++20
    std::atomic<bool> stopLoop{false};
    std::thread t([&]()
    {
        while (!stopLoop)
        {
            std::cout << "Hi";
        }
    });
    while (std::cin.get() != '0') // you will need to press enter after pressing '0'
    {
        ; // empty loop, just wait until there is a '0' on input
    }
    stopLoop = true; // this stops the other loop
    t.join();        // join the thread; destroying a joinable thread calls std::terminate
}
```
Another option would be to dive into OS-specific libraries. You must know that C++ doesn't have any kind of non-blocking I/O in the standard library, and most of the time you will have to press `<ENTER>` to get any input into the input stream (std::cin). | 15,809
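For completeness on the Python question at the top of this entry: any non-zero number is truthy in Python, so a constant condition like `if 5:` (or `if 500:`) always takes the `if` branch. A quick check:

```python
def branch_taken(condition):
    """Return which branch `if condition: ... else: ...` would take."""
    return "if" if condition else "else"

# Non-zero numbers and non-empty containers are truthy; 0, 0.0, "", [] are falsy
for value in (5, 500, -1, 0, 0.0, "", "x", []):
    print(repr(value), "->", branch_taken(value))
```

This is why replacing 5 with any other non-zero number makes no difference.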
16,066,838 | OK so I have this book
Violent Python - A Cookbook for Hackers, Forensic Analysts, Penetration Testers and Security Engineers.
I have gotten to page 10 and I'm a complete noob at this but it really fascinates me.
But this piece of code has me stumped:
```
import socket
socket.setdefaulttimeout(2)
s = socket.socket()
s.connect(("192.168.95.148",21))
ans = s.recv(1024)
print ans
```
This is what I get in the console as a result:
```
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.timeout: timed out
```
From what I can see the book says I should get this:
```
220 FreeFloat Ftp Server (Version 1.00).
```
I'm using PyDev and Eclipse.
Any help would be much appreciated. | 2013/04/17 | [
"https://Stackoverflow.com/questions/16066838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2282257/"
] | `s.connect(("192.168.95.148",21))` seems to try to connect to an FTP server on IP address 192.168.95.148. If you don't have an FTP server running on that IP, you will get a connection timeout error instead of a response from the FTP server. Do you have a FreeFloat FTP Server running on 192.168.95.148? | Well, you could try connecting to a known public FTP server, if the lack of a server is what's stopping you.
For example, ftp.mozilla.org | 15,810 |
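A sketch of the same banner grab that handles the no-server case instead of raising (192.0.2.1 is a reserved TEST-NET address, so the call below fails by design):

```python
import socket

def grab_banner(host, port, timeout=2):
    """Return the server's greeting bytes, or None when nothing answers."""
    socket.setdefaulttimeout(timeout)
    s = socket.socket()
    try:
        s.connect((host, port))
        return s.recv(1024)
    except OSError:  # socket.timeout is a subclass of OSError on Python 3
        return None
    finally:
        s.close()

# Nothing answers at a TEST-NET address, so this returns None
print(grab_banner("192.0.2.1", 21, timeout=1))  # None
```

Point it at a host that actually runs an FTP server to see a real banner.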
73,625,732 | I have a table of people where each person can have an associated partner, like this:
| id\_person | Name | id\_partner |
| --- | --- | --- |
| 1 | Javi | 5 |
| 2 | John | 4 |
| 3 | Mike | 6 |
| 4 | Lucy | 2 |
| 5 | Jenny | 1 |
| 6 | Cindy | 3 |
So I would like to have a query where I can get all the couples without repetition, like:
| Name 1 | Name 2 |
| --- | --- |
| Javi | Jenny |
| John | Lucy |
| Mike | Cindy |
I know how I would do it in Python, but in SQL I have no clue.
Thank you in advance!!! | 2022/09/06 | [
"https://Stackoverflow.com/questions/73625732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17563150/"
] | Assuming you want daily value counts, use `asfreq` and `fillna`:
```
july_log_mel.index = pd.to_datetime(july_log_mel.index)
july_log_mel.asfreq('D').fillna(0)
``` | You can `reindex` your Series with `date_range`:
```
s = df['date'].value_counts()
s = s.reindex(pd.date_range(s.index.min(), s.index.max(), freq='D')
.strftime('%Y-%m-%d'),
fill_value=0)
```
output:
```
2022-07-04 2
2022-07-05 0
2022-07-06 1
2022-07-07 0
2022-07-08 1
Name: date, dtype: int64
```
used input:
```
date
0 2022-07-04
1 2022-07-04
2 2022-07-06
3 2022-07-08
``` | 15,811 |
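Since the question in this entry asks for SQL: a minimal self-join sketch (run here through the stdlib sqlite3 module, with a hypothetical table name `person`) that returns each couple exactly once by keeping only the rows where `id_person < id_partner`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id_person INTEGER, name TEXT, id_partner INTEGER)")
con.executemany("INSERT INTO person VALUES (?, ?, ?)",
                [(1, "Javi", 5), (2, "John", 4), (3, "Mike", 6),
                 (4, "Lucy", 2), (5, "Jenny", 1), (6, "Cindy", 3)])

# Self-join; the inequality keeps only one ordering of each pair
rows = con.execute("""
    SELECT p1.name, p2.name
    FROM person p1
    JOIN person p2 ON p1.id_partner = p2.id_person
    WHERE p1.id_person < p1.id_partner
    ORDER BY p1.id_person
""").fetchall()
print(rows)  # [('Javi', 'Jenny'), ('John', 'Lucy'), ('Mike', 'Cindy')]
```

The same SELECT works in most SQL dialects; only the connection setup is sqlite-specific.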
50,717,721 | Hi, I am following an install from the book "Python Crash Course", chapter 15, which directed me to install matplotlib by downloading it from PyPI and using the format
```
python -m pip install --user matplotlib-2.2.2-cp36-cp36m-win32.whl
```
This seems to go ok but reports at the end.
File "C:\Program Files (x86)\Python Community\Python 3.6.1\lib\decimal.py", line 3 in , from decimal import \*
AttributeError: module 'numbers' has no attribute 'Number'
and when I enter an interactive shell:
>
> import matplotlib
>
>
>
I get a ModuleNotFoundError
I have tried a matplotlib-2.2.2-cp36-cp36m-win32.whl and matplotlib-2.2.2-cp36-cp36m-win\_amd64.whl both give the same error.
Could someone please let me know how to fix this? | 2018/06/06 | [
"https://Stackoverflow.com/questions/50717721",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9902618/"
] | I am answering my own question.
The issue was due to a file called numbers.py residing in the folder that holds all my Python files, wheel files, etc.
I found the answer in stack overflow. I will link to this [matplotlib - AttributeError: module 'numbers' has no attribute 'Integral'](https://stackoverflow.com/questions/49535107/matplotlib-attributeerror-module-numbers-has-no-attribute-integral).
The file was stopping the install so I renamed it and reinstalled matplotlib and it now works.
Sorry, but this was not obvious to me! | Try running cmd as **administrator** inside the Python directory. Then execute:
```
pip3 install matplotlib-2.2.2-cp36-cp36m-win32.whl
```
Also make sure that you have all dependencies installed. | 15,812 |
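A quick way to detect this kind of stdlib shadowing is to ask Python which file it will actually import for the module name; a stray local `numbers.py` would show up here instead of the standard-library module:

```python
import importlib.util

# find_spec resolves the module exactly as `import numbers` would
spec = importlib.util.find_spec("numbers")
print(spec.origin)  # path of the file Python will load for "numbers"
```

If the printed path points into your own project folder rather than the Python installation, you have found the shadowing file.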
46,630,311 | Actually, I'm calculating throughput given a certain window size.
However, I don't know how to accumulate the values by window. For instance:
```
time = [0.9, 1.1, 1.2, 2.1, 2.3, 2.6]
value = [1, 2, 3, 4, 5, 6]
```
After a window size of 1 is applied, I should get
```
new_value = [1, 5, 15]
```
I've thought of using the indexes of a data frame, but I'm not sure how to realize it since I'm new to Python. | 2017/10/08 | [
"https://Stackoverflow.com/questions/46630311",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5785396/"
] | You could use `itertools.groupby` with a custom grouping function
```
from itertools import groupby
def f(time, values, dt=1):
vit = iter(values)
return [sum(v for _, v in zip(g, vit)) for _, g in groupby(time, lambda x: x // dt)]
```
```
In [14]: f([0.9, 1.1, 1.2, 2.1, 2.3, 2.6], [1, 2, 3, 4, 5, 6])
Out[14]: [1, 5, 15]
In [15]: f([0.9, 1.1, 1.2, 2.1, 2.3, 2.6], [1, 2, 3, 4, 5, 6], dt=2)
Out[15]: [6, 15]
```
Note that for the window of size 1 you could simply use `groupby(time, int)`. | You could use a [`Counter`](https://docs.python.org/3/library/collections.html#collections.Counter):
```
time = [0.9, 1.1, 1.2, 2.1, 2.3, 2.6]
value = [1, 2, 3, 4, 5, 6]
from collections import Counter
counter = Counter()
for t,v in zip(time, value):
counter[int(t)] += v
print(sorted(counter.items()))
# [(0, 1), (1, 5), (2, 15)]
``` | 15,816 |
48,982,187 | I am using `telegraf` as a measuring/monitoring tool in my tests. I need to edit `telegraf` configurations automatically, since all the tests are executed automatically.
Currently I am using `re` for configuring it; this is the process:
1. Read the whole file content.
2. Use regex to find and edit the required plugin/property.
3. Write the whole changed content to the file.
But I'm searching for a library, if one exists, like [`ConfigParser`](https://docs.python.org/3/library/configparser.html) or [`reconfigure`](https://pypi.python.org/pypi/reconfigure/0.1.39), that handles the configuration as an object rather than as raw content.
I tried `ConfigParser.ConfigParser`, `ConfigParser.RawConfigParser` and `ConfigParser.SafeConfigParser`; all return:
```
ConfigParser.ParsingError: File contains parsing errors: /etc/telegraf/telegraf.conf
```
The `reconfigure` library has specific configuration classes, each belonging to a particular type of Linux config (e.g. `FSTabConfig`, `ResolvConfig` and some other types), but it doesn't contain a class for `telegraf` configs.
Does anyone have an option in mind?
**EDIT 1:**
I tried `configobj` library (as @KevinC suggested), but it loads nothing:
```
>>> import configobj
>>> c = configobj.ConfigObj('/home/zeinab/Desktop/config-modification/telegraf.conf', list_values=False)
>>> c
ConfigObj({})
```
Using `list_values=True` returns the same results. | 2018/02/26 | [
"https://Stackoverflow.com/questions/48982187",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1626977/"
] | You can use [toml](https://pypi.org/project/toml/)
Configuration file
```
[[inputs.ping]]
## Hosts to send ping packets to.
urls = ["example.org"]
method = "exec"
```
Usage
```
import toml
conf = (toml.load("/etc/telegraf/telegraf.conf"))
conf.get("inputs")
```
Output
```
{'ping': [{'urls': ['example.org'], 'method': 'exec'}]}
``` | You can use [configobj](http://configobj.readthedocs.io/en/latest/), but you have to specify "list\_values"=False
```
c = configobj.ConfigObj('/etc/telegraf/telegraf.conf', list_values=False)
``` | 15,817 |
56,109,815 | If there is any bug in my code (code within a model that is used within a view which uses LoginRequiredMixin), e.g. a bug like:
```
if (True: # <-- example bug to show how bugs like this are hidden
```
Then I get the following error:
```
"AUTH_USER_MODEL refers to model '%s' that has not been installed" % settings.AUTH_USER_MODEL
django.core.exceptions.ImproperlyConfigured:
AUTH_USER_MODEL refers to model 'auth.User' that has not been installed
```
This makes it really hard to debug the code. I have figured out that if I remove this line from my views.py:
```
from django.contrib.auth.mixins import LoginRequiredMixin
```
Even if I remove every place where LoginRequiredMixin is used, merely importing it hides the true bug and generates the above auth error.
I have read: [AUTH\_USER\_MODEL refers to model '%s' that has not been installed" % settings.AUTH\_USER\_MODEL](https://stackoverflow.com/questions/37028655/auth-user-model-refers-to-model-s-that-has-not-been-installed-settings-aut?rq=1)
My research shows that most of the time this is due to `'django.contrib.auth'` missing from INSTALLED\_APPS, which you can see I have (settings.py snippet, Django 2.2):
```
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth', # <-- Important for this topic!
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'myapp.apps.CommonConfig',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
```
I tried adding these lines above the mixin import, but neither helped:
```
from django.contrib import auth
from django.contrib.auth.models import User
```
This part of the full traceback agrees with the above:
```
Traceback (most recent call last):
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\apps\registry.py", line 155, in get_app_config
return self.app_configs[app_label]
KeyError: 'auth'
```
Here is the traceback if I call makemigrations (This is what I expect):
```
C:\code\myproject\src>pipenv run python manage.py makemigrations
Traceback (most recent call last):
File "manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
utility.execute()
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\__init__.py", line 357, in execute
django.setup()
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\apps\registry.py", line 114, in populate
app_config.import_models()
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\apps\config.py", line 211, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\code\myproject\src\myapp\models.py", line 14, in <module>
from .sql import myapp_entry_clean, myapp_entry_grouping
File "C:\code\myproject\src\myapp\sql.py", line 68
if (True:
^
SyntaxError: invalid syntax
C:\code\myproject\src>
```
Here is the traceback with runserver:
```
C:\code\myproject\src>pipenv run python manage.py runserver
Watching for file changes with StatReloader
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36\Lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Users\User\AppData\Local\Programs\Python\Python36\Lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\utils\autoreload.py", line 54, in wrapper
fn(*args, **kwargs)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\commands\runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\utils\autoreload.py", line 77, in raise_last_exception
raise _exception[0](_exception[1]).with_traceback(_exception[2])
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\utils\autoreload.py", line 54, in wrapper
fn(*args, **kwargs)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\apps\registry.py", line 114, in populate
app_config.import_models()
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\apps\config.py", line 211, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\code\myproject\src\myapp\models.py", line 14, in <module>
from .sql import myapp_entry_clean, myapp_entry_grouping
File "<string>", line None
SyntaxError: invalid syntax (sql.py, line 68)
Traceback (most recent call last):
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\apps\registry.py", line 155, in get_app_config
return self.app_configs[app_label]
KeyError: 'auth'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\contrib\auth\__init__.py", line 165, in get_user_model
return django_apps.get_model(settings.AUTH_USER_MODEL, require_ready=False)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\apps\registry.py", line 205, in get_model
app_config = self.get_app_config(app_label)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\apps\registry.py", line 162, in get_app_config
raise LookupError(message)
LookupError: No installed app with label 'auth'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
utility.execute()
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\commands\runserver.py", line 60, in execute
super().execute(*args, **options)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\base.py", line 364, in execute
output = self.handle(*args, **options)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\commands\runserver.py", line 95, in handle
self.run(**options)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\core\management\commands\runserver.py", line 102, in run
autoreload.run_with_reloader(self.inner_run, **options)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\utils\autoreload.py", line 579, in run_with_reloader
start_django(reloader, main_func, *args, **kwargs)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\utils\autoreload.py", line 564, in start_django
reloader.run(django_main_thread)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\utils\autoreload.py", line 272, in run
get_resolver().urlconf_module
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\utils\functional.py", line 80, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\urls\resolvers.py", line 564, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\code\myproject\src\linkcube\urls.py", line 20, in <module>
path('', include('common.urls')),
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\urls\conf.py", line 34, in include
urlconf_module = import_module(urlconf_module)
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\code\myproject\src\common\urls.py", line 5, in <module>
from . import views
File "C:\code\myproject\src\common\views.py", line 9, in <module>
from django.contrib.auth.mixins import LoginRequiredMixin
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\contrib\auth\mixins.py", line 3, in <module>
from django.contrib.auth.views import redirect_to_login
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\contrib\auth\views.py", line 10, in <module>
from django.contrib.auth.forms import (
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\contrib\auth\forms.py", line 20, in <module>
UserModel = get_user_model()
File "C:\Users\User\.virtualenvs\myproject-wc-xNQPL\lib\site-packages\django\contrib\auth\__init__.py", line 170, in get_user_model
"AUTH_USER_MODEL refers to model '%s' that has not been installed" % settings.AUTH_USER_MODEL
django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'auth.User' that has not been installed
C:\code\myproject\src>
```
How do I make it so that the auth error is not falsely raised? | 2019/05/13 | [
"https://Stackoverflow.com/questions/56109815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5506400/"
] | You need to do:
```
$typ = $reqarr['message']['entities'][0]['type'];
```
Output: <https://3v4l.org/KQc2s> | Try this:
```
if(!isset($reqarr['message']['entities'][0])){
$reqarr['message']['entities']=array($reqarr['message']['entities']);
}
foreach($reqarr['message']['entities'] as $entity){
var_dump($entity);
die();
}
``` | 15,818 |
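Back to the Django question above: the key point in that traceback is that the real failure is the `SyntaxError` raised while importing `sql.py`; the `auth` lookup error is only a downstream symptom. A minimal, self-contained sketch of how a syntax error in an imported module surfaces at import time (file and module names hypothetical):

```python
import importlib
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Write a module containing the same broken line as in the question
    with open(os.path.join(d, "badmod.py"), "w") as f:
        f.write("if (True:\n    pass\n")
    sys.path.insert(0, d)
    try:
        importlib.import_module("badmod")
        caught = None
    except SyntaxError as e:
        caught = e
    finally:
        sys.path.remove(d)

print(type(caught).__name__)  # SyntaxError
```

So when this error appears, scroll the traceback for the first `SyntaxError` rather than debugging the auth configuration.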
45,247,778 | I'm writing a script in Python to get data from a table. When I use XPath I get the data in its row- and column-wise format. However, when I use a CSS selector for the same thing, I get the error 'list' object has no attribute 'text'. How do I get around that? Thanks in advance.
Using XPath, which works without errors:
```
import requests
from lxml import html
url="https://fantasy.premierleague.com/player-list/"
response = requests.get(url).text
tree = html.fromstring(response)
for titles in tree.xpath("//table[@class='ism-table']")[0]:
for row in titles.xpath('.//tr'):
tab_d = row.xpath('.//td/text()')
print(tab_d)
```
Using a CSS selector for the same, I get the aforesaid error:
```
import requests
from lxml import html
url="https://fantasy.premierleague.com/player-list/"
response = requests.get(url).text
tree = html.fromstring(response)
for titles in tree.cssselect("table.ism-table")[0]:
for row in titles.cssselect('tr'):
tab_d = row.cssselect('td')
print(tab_d.text)
```
[](https://i.stack.imgur.com/wFLkk.jpg) | 2017/07/21 | [
"https://Stackoverflow.com/questions/45247778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9189799/"
] | You can do it with CSS only if you use a checkbox.
Use the `:checked` selector to display the content.
```
/* Your CSS */
input[type=checkbox] + label {
color: #ccc;
font-style: italic;
}
/* Set the content to be displayed when the radio/checkbox is checked, */
/* using the CSS3 selector :checked */
input[type=checkbox]:checked + label {
color: #f00;
font-style: normal;
}
```
```
<input type="checkbox" id="ossm" name="ossm">
<label for="ossm">CSS is Awesome</label>
``` | You could give the same class name to every one of your `<fieldset>` elements and then loop over all elements having this class name. This loop would be executed once the page has loaded and on every checkbox event. | 15,819
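Back to the lxml question at the top of this entry: the error comes from calling `.text` on a list. `cssselect('td')` returns a list of elements, so you have to iterate and read `.text` on each element. The same pattern shown with the stdlib ElementTree (so it runs without lxml installed):

```python
import xml.etree.ElementTree as ET

row = ET.fromstring("<tr><td>Alonso</td><td>Chelsea</td></tr>")
cells = row.findall("td")          # a list of elements; a list has no .text
texts = [td.text for td in cells]  # read .text per element instead
print(texts)  # ['Alonso', 'Chelsea']
```

With lxml, the fix is identical: `[td.text for td in row.cssselect('td')]`.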
6,949,915 | I have several scripts written in Perl, Python, and Java (wrapped under a Java GUI with system calls to Perl & Python). And I have many not-tech-savvy users that need to use this on their Windows machines (XP & 7).
To avoid users having to install Perl, Python, and Java, and to avoid potential incompatibility between various versions of these interpreters, I'd like to make a local copy of these interpreters in a folder and then call them. I'd zip the whole folder (which would also contain my code) and send it away.
I'd have to worry about environment variables and about making calls to the correct interpreter (especially when other versions of Python, Java, or Perl may exist on their systems), but I'm not sure what other problems I may face. Any better ideas?
I have never used Jython and do not know the overhead of moving to it. I also suspect a complex Python system, with many files and 3rd-party modules, will have problems. The same goes for the Perl scripts, and I don't know of a robust Perl interpreter callable from Java.
Thank you, in advance. | 2011/08/04 | [
"https://Stackoverflow.com/questions/6949915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/737088/"
] | Try [Portable Python](http://www.portablepython.com/) and [Portable Perl](http://portableapps.com/node/12595). You can unzip them into your application tree and they should work. | Why don't you try migrating your Perl/Python code to Java and then packaging everything into a nice Web Start application? What do Perl/Python offer that Java doesn't support?
For Perl you can use something like perl2exe, and for Python py2exe, so you can have 2 exes (which would include all the necessary interpreter bits) and invoke them as resources from within Java. Or unzip them inside the user's home directory and call them as normal external programs (ProcessBuilder?). | 15,822
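A sketch in Python of the launcher logic for calling a bundled interpreter (the bundle path and layout are hypothetical), falling back to the system interpreter when the bundle isn't present:

```python
import os
import subprocess
import sys

bundle = os.path.join(os.getcwd(), "python", "python.exe")  # hypothetical bundled layout

# Prefer the shipped interpreter; fall back to whatever is running this script
interpreter = bundle if os.path.exists(bundle) else sys.executable
result = subprocess.run([interpreter, "-c", "print('hello from bundled runtime')"],
                        capture_output=True, text=True)
print(result.stdout.strip())  # hello from bundled runtime
```

Resolving the bundle path relative to the application directory (rather than relying on PATH) is what keeps the launcher independent of whatever other interpreter versions the user has installed.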
6,937,505 | I have a Python application that should be launched as a Windows executable. I'm using py2exe and pymssql 1.9.908.
I used the following build script to generate the application:
```
from distutils.core import setup
import MySQLdb
import fnmatch
import os
import pymssql
import shutil
import py2exe
import glob
##############
name = 'BGAgent'
old_version = '0.1'
ver = '0.1'
distDir = 'Dist' + name + ver
shutil.rmtree(distDir, True)
shutil.rmtree('Dist' + name + old_version, True)
os.mkdir(distDir)
##############
class Target(object):
""" A simple class that holds information on our executable file. """
def __init__(self, **kw):
""" Default class constructor. Update as you need. """
self.__dict__.update(kw)
# MySQLdb
#dst = os.path.join(distDir, "MySQLdb")
#copy_tree(MySQLdb.__path__[0], dst )
# pymssql
site_packages_dir = os.path.dirname(pymssql.__file__)
pymssql_files = []#'pymssql.py', 'pymssql.pyc', 'pymssql.pyo', '_mssql.pyd']
for eggInfo in glob.glob(os.path.join(site_packages_dir, '*mssql*')) :
pymssql_files.append(os.path.basename(eggInfo))
for fname in pymssql_files :
src = os.path.join(site_packages_dir, fname)
dst = os.path.join(distDir, fname)
if(os.path.isfile(src)) :
shutil.copy(src, dst)
else :
shutil.copytree(src, dst)
includes = ['MySQLdb', 'pymssql', 'OpenSSL']
excludes = ['run_w.exe'] #['_gtkagg', '_tkagg', 'bsddb', 'curses', 'email', 'pywin.debugger', 'pywin.debugger.dbgcon', 'pywin.dialogs', 'tcl', 'Tkconstants', 'Tkinter']
packages = ['MySQLdb', 'pymssql', 'OpenSSL']
dll_excludes = []#['libgdk-win32-2.0-0.dll', 'libgobject-2.0-0.dll', 'tcl84.dll', 'tk84.dll']
data_files = ['server.pem',
'config.ini',
'run.bat',
#os.path.join(os.path.split(pymssql.__file__)[0], 'ntwdblib.dll'),
]
icon_resources = []
bitmap_resources = []
other_resources = []
MyApp_Target = Target(
# what to build
script = "run.py",
icon_resources = icon_resources,
bitmap_resources = bitmap_resources,
other_resources = other_resources,
dest_base = name,
version = ver,
company_name = "",
copyright = "",
name = name,
)
setup(
data_files = data_files,
options = {"py2exe": {"compressed": 0,
"optimize": 1,
"includes": includes,
"excludes": excludes,
"packages": packages,
"dll_excludes": dll_excludes,
"bundle_files": 3,
"dist_dir": distDir,
"xref": False,
"skip_archive": False,
"ascii": False,
"custom_boot_script": '',
}
},
zipfile = r'library.zip',
console = [],
windows = [MyApp_Target],
service = [],
com_server = [],
ctypes_com_server = []
)
```
The build works, but I get an error when I try to launch the application:
```
File "pymssql.pyo", line 12, in <module>
File "pymssql.pyo", line 10, in __load
File "_mssql.pxd", line 10, in init pymssql (pymssql.c:7370)
ImportError: No module named _mssql
```
The \_mssql.pyd and pymssql.pyd files are in the executable directory.
OS version: Windows 7 Enterprise SP1. | 2011/08/04 | [
"https://Stackoverflow.com/questions/6937505",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/412793/"
In the program you are trying to build (e.g. in the A.py for A.exe), add an import statement for \_mssql as well. You might also need to import a couple of other modules (decimal & uuid) to get the exe working. | ```
from distutils.core import setup
import py2exe, os, pymssql
import decimal
data_files = []
data_files.append(os.path.join(os.path.split(pymssql.__file__)[0], 'ntwdblib.dll'))
py2exe_options = {"py2exe":{"includes": ['decimal'],
"dll_excludes":["mswsock.dll",
"powrprof.dll",
"user32.dll",
"shell32.dll",
"wsock32.dll",
"advapi32.dll",
"kernel32.dll",
"ntwdblib.dll",
"ws2_32.dll",
"oleaut32.dll",
"ole32.dll",
],
}}
setup(console=["jobs_pcc_main.py"], options= py2exe_options, data_files=data_files)
``` | 15,823 |
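A sketch of the top of the frozen script implementing the hidden-import advice above (whether `_mssql` is importable depends on your environment, so it is guarded here; in the real frozen script you would import it unconditionally so py2exe bundles it):

```python
# py2exe's module finder only follows imports written literally in the source,
# so list the hidden dependencies explicitly at the top of the frozen script.
try:
    import _mssql  # C extension shipped with pymssql; only importable where it is installed
except ImportError:
    _mssql = None
import decimal
import uuid

print(decimal.Decimal("0.1") + decimal.Decimal("0.2"))  # 0.3
```

The `decimal` and `uuid` imports matter because pymssql uses them at runtime without py2exe noticing.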
10,068,576 | Is there a way to remove all references to an object at once? I know that's unpythonic, so I'll explain what I'm trying to do and maybe someone knows a better way.
I'm writing an object-oriented wrapper around a SWIG wrapper for a C library. When a proxy for one of the C objects is deleted, it also deletes child objects (directly in C). I'd like that to also trigger deletion of their proxy objects in Python. Otherwise I run into a situation where there are Python objects carrying around invalid pointers which will segfault if they're accessed.
It looks sort of like this:
```
class Parent(object):
def __init__(self):
self.ptr = swig.createParent()
def __del__(self):
swig.deleteParent(self.ptr) # also deletes children
class Child(object):
def __init__(self, parent):
self.ptr = swig.createChild(parent)
def __del__(self):
swig.deleteChild(self.ptr)
```
And this is the situation I'm worried about:
```
p = Parent()
c = Child(parent)
del p
# accessing c.ptr now would be bad right?
``` | 2012/04/09 | [
"https://Stackoverflow.com/questions/10068576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/429898/"
] | If I understand you correctly, you are wrapping some C code, and the C code has a destructor that can be called. After that, any attempt to use the pointer to the C code object causes a fatal crash.
I am not sure of your exact situation, so I am going to give you two alternate answers.
0) If the C object can be freed for some reason out of your control, and you need to make sure your Python wrapper code doesn't crash, you need to make the Python wrapper know whether the C object is available or not. Make your Python object handle the pointer no longer being valid. You could raise a Python exception, return an error code, or just have the method functions become no-op functions, depending on what you are doing. The C object going away doesn't free the Python object, so you can handle this cleanly.
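A minimal sketch of that first option (all names here are made up for illustration; real code would delegate to the swig wrapper):

```python
class ChildProxy:
    """Wrapper that refuses to touch its C pointer after invalidation."""

    def __init__(self, ptr):
        self._ptr = ptr

    def invalidate(self):
        # Called when the underlying C object has been freed.
        self._ptr = None

    def value(self):
        if self._ptr is None:
            raise RuntimeError("underlying C object has been freed")
        return self._ptr  # stand-in for a real swig call

proxy = ChildProxy(ptr=0xBEEF)
proxy.invalidate()
try:
    proxy.value()
    error = None
except RuntimeError as exc:
    error = str(exc)
print(error)  # underlying C object has been freed
```

Raising an exception, returning an error code, or turning the methods into no-ops are all variations on this same guard.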
1) If the C object is only freed when the Python object is freed, you don't have a problem. Python references, when they go out of scope or you call `del()` on them, do not free the Python object; they just decrement the reference count on that object. When the reference count goes to zero, *then* the object is freed and your `__del__()` method function can call into the C code to free the C object.
You can watch how it works by running this code:
```
class DelTest(object):
def __init__(self):
print "__init__() called: object %08x created" % id(self)
def __del__(self):
print "__del__() called: object %08x destroyed" % id(self)
print "begin"
print "creating object, binding to name d"
d = DelTest()
print "adding reference bound to name x"
x = d
print "adding reference bound to lst[0]"
lst = []
lst.append(d)
print "deleting lst"
del(lst)
print "deleting x"
del(x)
print "deleting d"
del(d)
print "end"
```
Output from the above:
```
begin
creating object, binding to name d
__init__() called: object 01e4db50 created
adding reference bound to name x
adding reference bound to lst[0]
deleting lst
deleting x
deleting d
__del__() called: object 01e4db50 destroyed
end
``` | A note about the [behavior of `__del__()` method](http://docs.python.org/reference/datamodel.html#object.__del__).
>
> del x doesn’t directly call `x.__del__()` — the former decrements the reference count for x by one, and the latter is only called when x‘s reference count reaches zero.
>
>
>
Therefore even if you delete `parent`, it does not necessarily mean that `__del__` is executed immediately until there are any references to it. Here is an example.
```
>>> class C(object):
... def __del__(self):
... print "deleting object of type: %s" %self.__class__
...
>>> class D(object):
... def __init__(self, parent):
... self.parent = parent
... def __del__(self):
... print "deleting object of type: %s" % self.__class__
...
>>> c = C()
>>> d = D(c)
>>> del c
>>> del d
deleting object of type: <class '__main__.D'>
deleting object of type: <class '__main__.C'>
```
Note that the `__del__` method of `C` is called *after* the `del d` call. | 15,828 |
3,479,887 | I'm using python 2.6 and matplotlib. If I run the sample histogram\_demo.py provided in the matplotlib gallery page, it works fine. I've simplified this script greatly:
```
import numpy as np
import matplotlib.pyplot as plt
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
fig = plt.figure()
ax = fig.add_subplot(111)
n, bins, patches = ax.hist(x, 50, normed=1, facecolor='green', alpha=0.75)
ax.set_yscale('log') # <---- add this line to generate the error
plt.show()
```
I get this error (at the `plt.show()` line):
```
TypeError: coercing to Unicode: need string or buffer, dict found
```
I've tried changing the backend to many different values - nothing helps. I am using `Qt4Agg`. Is this a font issue? It seems that it must be something with my configuration. Note: Because of other problems, I just installed a fresh copy of python26, matplotlib, numpy, scipy. I have another XP-box running python26 and it executes both versions of the script with no errors. I hope someone can help. Many thanks in advance. | 2010/08/13 | [
"https://Stackoverflow.com/questions/3479887",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/419902/"
] | This is a bug in the font management of matplotlib, on my machine this is the file /usr/lib/pymodules/python2.6/matplotlib/font\_manager.py:1220. I've highlighted the change in the code snippet below; this is fixed in the newest version of matplotlib.
```
if best_font is None or best_score >= 10.0:
verbose.report('findfont: Could not match %s. Returning %s' %
(prop, self.defaultFont))
[+]result = self.defaultFont[fontext]
[-]result = self.defaultFont
print "defaultFont", result
else:
verbose.report('findfont: Matching %s to %s (%s) with score of %f' %
(prop, best_font.name, best_font.fname, best_score))
result = best_font.fname
print "best_font", result
```
This error occurs only if no "good" font was found and the font manager falls back to a default font. Therefore the error occurred without apparent reason, probably because of changes in the installed fonts.
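To see why returning the whole `defaultFont` dict crashes, here is a minimal standard-library reproduction (the dict contents are made up; Python 2 reported this as the Unicode-coercion error from the question, while Python 3 raises a plain `TypeError`):

```python
import os

# A font lookup table like matplotlib's defaultFont: extension -> path.
default_font = {"afm": "phvr8a.afm", "ttf": "Vera.ttf"}

try:
    os.path.exists(default_font)   # buggy: the whole dict where one path is expected
    error = None
except TypeError as exc:
    error = type(exc).__name__

print(error)  # TypeError
print(os.path.exists(default_font["ttf"]))  # correct: a single path string
```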
Hope that helps! | I experienced a similar error today, concerning code that I know for a fact was working a week ago. I also have recently uninstalled/reinstalled both Matplotlib and Numpy, while checking something else (I'm using Python 2.5).
The code went something like this:
```
self.ax.cla()
if self.logy: self.ax.set_yscale('log')
self.canvas.draw()
```
Whenever it was run with self.logy as True, it failed as above. Otherwise, it worked perfectly fine.
I ended up sidestepping the issue by uninstalling Matplotlib and Numpy and installing the latest versions of them. However, the version throwing the error had previously been used with no problems. Only after swapping the old version for the newer one and back again did this start happening.
Perhaps the uninstall/reinstall process messes up certain aspects of the configuration files.
For completeness, here is the complete traceback given:
```
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\path\to\file\being\called\by\Tkinter.py", line 1081, in refresh
self.canvas.draw()
File "C:\Python25\Lib\site-packages\matplotlib\backends\backend_tkagg.py", line 215, in draw
FigureCanvasAgg.draw(self)
File "C:\Python25\Lib\site-packages\matplotlib\backends\backend_agg.py", line 314, in draw
self.figure.draw(self.renderer)
File "C:\Python25\Lib\site-packages\matplotlib\artist.py", line 46, in draw_wrapper
draw(artist, renderer, *kl)
File "C:\Python25\Lib\site-packages\matplotlib\figure.py", line 773, in draw
for a in self.axes: a.draw(renderer)
File "C:\Python25\Lib\site-packages\matplotlib\artist.py", line 46, in draw_wrapper
draw(artist, renderer, *kl)
File "C:\Python25\Lib\site-packages\matplotlib\axes.py", line 1735, in draw
a.draw(renderer)
File "C:\Python25\Lib\site-packages\matplotlib\artist.py", line 46, in draw_wrapper
draw(artist, renderer, *kl)
File "C:\Python25\Lib\site-packages\matplotlib\axis.py", line 742, in draw
tick.draw(renderer)
File "C:\Python25\Lib\site-packages\matplotlib\artist.py", line 46, in draw_wrapper
draw(artist, renderer, *kl)
File "C:\Python25\Lib\site-packages\matplotlib\axis.py", line 196, in draw
self.label1.draw(renderer)
File "C:\Python25\Lib\site-packages\matplotlib\text.py", line 515, in draw
bbox, info = self._get_layout(renderer)
File "C:\Python25\Lib\site-packages\matplotlib\text.py", line 279, in _get_layout
clean_line, self._fontproperties, ismath=ismath)
File "C:\Python25\Lib\site-packages\matplotlib\backends\backend_agg.py", line 156, in get_text_width_height_descent
self.mathtext_parser.parse(s, self.dpi, prop)
File "C:\Python25\Lib\site-packages\matplotlib\mathtext.py", line 2797, in parse
font_output = fontset_class(prop, backend)
File "C:\Python25\Lib\site-packages\matplotlib\mathtext.py", line 658, in __init__
self._stix_fallback = StixFonts(*args, **kwargs)
File "C:\Python25\Lib\site-packages\matplotlib\mathtext.py", line 900, in __init__
fullpath = findfont(name)
File "C:\Python25\Lib\site-packages\matplotlib\font_manager.py", line 1306, in findfont
if not os.path.exists(font):
File "C:\Python25\lib\ntpath.py", line 255, in exists
st = os.stat(path)
TypeError: coercing to Unicode: need string or buffer, dict found
``` | 15,829 |
63,206,368 | Python does not work in PowerShell anymore.
I've never had any problems until recently. CMD still recognizes the `py` command, but PowerShell doesn't recognize any of the basic Python commands: `py`, `py3`, `python`, `python3`.
My problem occurred after I installed MinGW and added its path to the Path variable.
I have restarted my computer many times, while trying new things.
Things I've tried:
* I have tried removing MinGW from the Path variable.
* I have tried uninstalling MinGW.
* I have tried reinstalling python.
* I have tried manually adding the Python path.
* I have tried letting Python automatically add my Python path.
* I have tried adding the Python path to both the user environment variables and the system variables.
* I have tried running PowerShell as administrator.
Finally, this is what my Path variable looks like now:
**User Path Variable:**
```
C:\Users\lcdew\AppData\Local\Microsoft\WindowsApps
C:\Users\lcdew\AppData\Local\GitHubDesktop\bin
```
**System Path Variable:**
```
C:\Program Files\Python38\Scripts
C:\Program Files\Python38
C:\Program Files\Scripts
C:\Program Files
C:\Program Files (x86)\Scripts
C:\Program Files (x86)
C:\Program Files (x86)\Razer Chroma SDK\bin
C:\Program Files\Razer Chroma SDK\bin
C:\Windows\system32
C:\Windows
C:\Windows\System32\Wbem
C:\Windows\System32\WindowsPowerShell\v1.0
C:\Windows\System32\OpenSSH
C:\Windows\system32\config\systemprofile\AppData\Local\Microsoft\WindowsApps
C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR
C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common
``` | 2020/08/01 | [
"https://Stackoverflow.com/questions/63206368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8583502/"
] | You can do this using [Comparator](https://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html) as shown below:
```
List<String> sorted = List.of("1.1.2", "1.0.0", "1.3.3", "1.0.12", "1.0.2")
.stream()
.sorted((s1, s2) -> {
String[] s1Parts = s1.split("\\.");
String[] s2Parts = s2.split("\\.");
Integer[] s1IntParts = Arrays.stream(s1Parts).map(Integer::parseInt).toArray(Integer[]::new);
Integer[] s2IntParts = Arrays.stream(s2Parts).map(Integer::parseInt).toArray(Integer[]::new);
int comparisonResult = -1;
for (int i=0; i<3; i++) {
comparisonResult = Integer.compare(s1IntParts[i], s2IntParts[i]);
if (comparisonResult != 0) {
break;
}
}
return comparisonResult;
})
.collect(Collectors.toList());
``` | Assuming:
```
List<String> versions = Arrays.asList("1.1.2", "1.0.0", "1.3.3", "1.0.12", "1.0.2");
```
You should use a custom `Comparator`, since the default comparator cannot be applied to this type of String; otherwise the numbers will not be sorted numerically (e.g., `12` would be considered lower than `2`).
```
versions.sort(new Comparator<String>() {
@Override
public int compare(final String l, final String r) {
String[] left = l.split("\\.");
String[] right = r.split("\\.");
int iterableLength = Math.min(left.length, right.length);
for (int i=0; i<iterableLength; i++) {
if (!left[i].equals(right[i])) {
return Integer.parseInt(left[i]) - Integer.parseInt(right[i]);
}
}
return 0;
}
});
```
... using a lambda expression ...
```
versions.sort((l, r) -> {
String[] left = l.split("\\.");
String[] right = r.split("\\.");
int iterableLength = Math.min(left.length, right.length);
for (int i=0; i<iterableLength; i++) {
if (!left[i].equals(right[i])) {
return Integer.parseInt(left[i]) - Integer.parseInt(right[i]);
}
}
return 0;
});
```
This is a very straightforward solution using a regex to split the version notation into 3 separate parts. Each part that differs from its counterpart is subject to numeric comparison, hence `Integer.parseInt(String)` is needed. | 15,837
62,326,253 | ```
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" --header "Content-Type: application/json" \
--data '{"path": "<subgroup_path>", "name": "<subgroup_name>", "parent_id": <parent_group_id> } \
"https://gitlab.example.com/api/v4/groups/"
```
I was following the documentation from [gitlab](https://docs.gitlab.com/ee/api/groups.html#new-subgroup). I just wanted to know how to represent the part after `--data` as a python request. Will it be a part of `params`, `json` or any other parameter in the requests module?
Any help is appreciated. Thank you. | 2020/06/11 | [
"https://Stackoverflow.com/questions/62326253",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11430726/"
] | Here's the equivalent using `requests`:
```
import requests
import json
headers = {
"PRIVATE-TOKEN": "<your_access_token>",
"Content-Type": "application/json",
}
data = {
"path": "<subgroup_path>",
"name": "<subgroup_name>",
"parent_id": "<parent_group_id>",
}
requests.post("https://gitlab.example.com/api/v4/groups/",
headers=headers, data=json.dumps(data))
``` | It can be done by python's [requests](https://2.python-requests.org/en/master/) package.
```
import requests
import json
url = "https://gitlab.example.com/api/v4/groups/"
headers = {'PRIVATE-TOKEN': '<your_access_token>', 'Content-Type':'application/json'}
data = {"path": "<subgroup_path>", "name": "<subgroup_name>", "parent_id": <parent_group_id>}
requests.post(url, data=json.dumps(data), headers=headers)
```
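If installing third-party packages is not an option, the same request can be built with the standard library's `urllib` (a sketch; the placeholder values come from the question, and calling `urllib.request.urlopen(req)` would actually send it):

```python
import json
import urllib.request

data = {"path": "<subgroup_path>", "name": "<subgroup_name>", "parent_id": 1}
req = urllib.request.Request(
    "https://gitlab.example.com/api/v4/groups/",
    data=json.dumps(data).encode("utf-8"),
    headers={"PRIVATE-TOKEN": "<your_access_token>",
             "Content-Type": "application/json"},
    method="POST",
)
# Inspect the request without sending it.
print(req.get_method(), req.get_full_url())
```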
reference : [Python Request Post with param data](https://stackoverflow.com/questions/15900338/python-request-post-with-param-data) | 15,845 |
49,643,205 | I installed ansible on macOS High Sierra 10.13.3, and when I try to run
`ansible --version` I receive the following error:
-bash: /usr/local/bin/ansible: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
Please let me know if you have run into the same issue or have a solution. | 2018/04/04 | [
"https://Stackoverflow.com/questions/49643205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9594666/"
] | `/usr/local/bin/ansible` has the interpreter path `/usr/local/opt/python/bin/python2.7` in the shebang on its first line, but in the `/usr/local/opt/python/bin/` directory I had python3.6 instead of python2.7.
So I changed the shebang in the file (`vi /usr/local/bin/ansible`)
from `#!/usr/local/opt/python/bin/python2.7`
to `#!/usr/local/opt/python/bin/python3.6`
and that fixed the issue
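Before editing, you can confirm which interpreter a wrapper script expects by reading its shebang line with a few lines of Python (a generic sketch, not specific to ansible):

```python
import tempfile

def read_shebang(path):
    # Return the interpreter line of a script, or None if there is no shebang.
    with open(path) as f:
        first = f.readline().rstrip("\n")
    return first if first.startswith("#!") else None

# demo with a throwaway file standing in for /usr/local/bin/ansible
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("#!/usr/local/opt/python/bin/python3.6\nprint('hi')\n")

print(read_shebang(f.name))  # #!/usr/local/opt/python/bin/python3.6
```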
Verification:
```sh
$ ansible --version
ansible 2.5.0
config file = None
configured module search path = ['/Users/<username>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/<username>/Library/Python/3.6/lib/python/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.4 (default, Mar 1 2018, 18:36:50) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]
$ ansible localhost -m ping
localhost | SUCCESS => {
"changed": false,
"ping": "pong"
}
$ ansible localhost -m setup -a 'filter=ansible_distribution'
localhost | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "MacOSX"
},
"changed": false
}
``` | Changing the Python version might push you into some compatibility issues.
This happens when we have multiple Python versions installed in our OS.
Simple steps for troubleshooting:
1. Check the Python version:
`which python` (it should print something like `/usr/bin/python`)
2. Create a soft link at the path the shebang expects:
`ln -s /usr/bin/python /usr/local/opt/python/bin/python2.7`
I hope it will fix the error. | 15,846 |
1,045,151 | I really suck at math. I mean, I REALLY suck at math.
I'm trying to make a simple fibonacci sequence class for an algorithm I'll be using. I have seen the python example which looks something like this:
```
a = 0
b = 1
while b < 10:
print b
a, b = b, b+a
```
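No temporary variable is needed above because Python evaluates the whole right-hand side `b, b+a` from the old values before assigning. The same loop in Python 3 syntax, collecting the values instead of printing:

```python
def fib_below(limit):
    a, b = 0, 1
    result = []
    while b < limit:
        result.append(b)
        a, b = b, b + a   # right-hand tuple is built from the old a and b
    return result

print(fib_below(10))  # [1, 1, 2, 3, 5, 8]
```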
The problem is that I can't really make this work in any other language. I'd like to make it work in Java, since I can pretty much translate it into the other languages I use from there. This is the general thought:
```
public class FibonacciAlgorithm {
private Integer a = 0;
private Integer b = 1;
public FibonacciAlgorithm() {
}
public Integer increment() {
a = b;
b = a + b;
return value;
}
public Integer getValue() {
return b;
}
}
```
All that I end up with is doubling, which I could do with multiplication :(
Can anyone help me out? Math pwns me. | 2009/06/25 | [
"https://Stackoverflow.com/questions/1045151",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/128967/"
] | You need to store the value of either a or b in a temporary variable first:
```
public Integer increment()
{
int temp = a;
a = b;
b = temp + b;
    return b;
}
``` | I'll do this:
```
fib = 100;
for(int a = 1, b = 0;a <= fib;a += b, b = (a-b)) {
System.out.print(a + ",");
}
``` | 15,849 |
65,347,497 | What improvements can I make to my python pandas code to make it more efficient? For my case, I have this dataframe
```
In [1]: df = pd.DataFrame({'PersonID': [1, 1, 1, 2, 2, 2, 3, 3, 3],
'Name': ["Jan", "Jan", "Jan", "Don", "Don", "Don", "Joe", "Joe", "Joe"],
'Label': ["REL", "REL", "REL", "REL", "REL", "REL", "REL", "REL", "REL"],
'RuleID': [55, 55, 55, 3, 3, 3, 10, 10, 10],
'RuleNumber': [3, 4, 5, 1, 2, 3, 234, 567, 999]})
```
Which gives this result:
```
In [2]: df
Out[2]:
PersonID Name Label RuleID RuleNumber
0 1 Jan REL 55 3
1 1 Jan REL 55 4
2 1 Jan REL 55 5
3 2 Don REL 3 1
4 2 Don REL 3 2
5 2 Don REL 3 3
6 3 Joe REL 10 234
7 3 Joe REL 10 567
8 3 Joe REL 10 999
```
What I need to accomplish here is to update the fields under the Label column to MAIN for the lowest rule value associated with each Rule ID that is applied to a Person ID and Name. Therefore, the results need to look like this:
```
In [3]: df
Out[3]:
PersonID Name Label RuleID RuleNumber
0 1 Jan MAIN 55 3
1 1 Jan REL 55 4
2 1 Jan REL 55 5
3 2 Don MAIN 3 1
4 2 Don REL 3 2
5 2 Don REL 3 3
6 3 Joe MAIN 10 234
7 3 Joe REL 10 567
8 3 Joe REL 10 999
```
This is the code that I wrote to accomplish this:
```
In [4]:
df['Label'] = np.where(
df['RuleNumber'] ==
df.groupby(['PersonID', 'Name', 'RuleID'])['RuleNumber'].transform('min'),
"MAIN", df.Label)
```
Is there a better way to update the values under the Label column? I feel like I'm brute forcing my way through and this may not be the most efficient way to do this.
I used the following SO threads to arrive at my result:
[Replace column values within a groupby and condition](https://stackoverflow.com/questions/48142757/replace-column-values-within-a-groupby-and-condition)
[Replace values within a groupby based on multiple conditions](https://stackoverflow.com/questions/50418372/replace-values-within-a-groupby-based-on-multiple-conditions?noredirect=1&lq=1)
<https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.idxmin.html>
<https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transform.html>
[Using Pandas to Find Minimum Values of Grouped Rows](https://stackoverflow.com/questions/41450963/using-pandas-to-find-minimum-values-of-grouped-rows)
Any advice would be appreciated.
Thank you. | 2020/12/17 | [
"https://Stackoverflow.com/questions/65347497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14557333/"
] | It seems like you can filter by the grouped `idxmin` regardless of sorted order and update `Label` based on that. You can use `loc`, `np.where`, `mask`, or `where` as follows:
```
df.loc[df.groupby(['PersonID', 'Name', 'RuleID'])['RuleNumber'].idxmin(), 'Label'] = 'MAIN'
```
OR with `np.where` as you were trying:
```
df['Label'] = (np.where((df.index == df.groupby(['PersonID', 'Name', 'RuleID'])
['RuleNumber'].transform('idxmin')), 'MAIN', 'REL'))
df
Out[1]:
PersonID Name Label RuleID RuleNumber
0 1 Jan MAIN 55 3
1 1 Jan REL 55 4
2 1 Jan REL 55 5
3 2 Don MAIN 3 1
4 2 Don REL 3 2
5 2 Don REL 3 3
6 3 Joe MAIN 10 234
7 3 Joe REL 10 567
8 3 Joe REL 10 999
```
Using `mask` or its inverse `where` would also work:
```
df['Label'] = (df['Label'].mask((df.index == df.groupby(['PersonID', 'Name', 'RuleID'])
['RuleNumber'].transform('idxmin')), 'MAIN'))
```
OR
```
df['Label'] = (df['Label'].where((df.index != df.groupby(['PersonID', 'Name', 'RuleID'])
['RuleNumber'].transform('idxmin')), 'MAIN'))
``` | Use `duplicated` on PersonID (this marks the first row per person, which matches the minimum here only because each group's rows are already sorted by RuleNumber):
```
df.loc[~df['PersonID'].duplicated(),'Label'] = 'MAIN'
print(df)
```
Output:
```
PersonID Name Label RuleID RuleNumber
0 1 Jan MAIN 55 3
1 1 Jan REL 55 4
2 1 Jan REL 55 5
3 2 Don MAIN 3 1
4 2 Don REL 3 2
5 2 Don REL 3 3
6 3 Joe MAIN 10 234
7 3 Joe REL 10 567
8 3 Joe REL 10 999
``` | 15,859 |
73,504,727 | Hi, I want to make a class in Python and then import the class in another Python file.
We have a file called `squaretypes` that has a class called `Square`; it is then imported in `class2`, but when I import the Python file and then use `Square`, it gives an error.
Note: I am using Jupyter Notebook.
error:
>
>
> ```
> ImportError Traceback (most recent call last)
> Input In [19], in <cell line: 1>()
> ----> 1 from squaretypes import Square
> 3 newClass = Square(5)
> 4 val = newClass.getVal()
>
> ImportError: cannot import name 'Square' from 'squaretypes' (C:\Users\DAVID\anaconda3\lib\types.py)
>
> ```
>
>
squaretypes:
```
class Square:
def __init__(self,val):
self.val=val
def getVal(self):
return self.val * self.val
```
class2:
```
from squaretypes import Square
newClass = Square(5)
val = newClass.getVal()
print(val)
```
Note: `class2` is giving an error and I am using Jupyter Notebook. | 2022/08/26 | [
"https://Stackoverflow.com/questions/73504727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19542186/"
] | "types" is the name of a standard library in python: <https://docs.python.org/3/library/types.html>
Rename your file to something different, e.g. "squaretype.py". | You should try the following things:
**- You can rename the classes.**
* **If the above technique doesn't work, just create an object in the global scope (without main) and import it in the second Python file. It will be imported with the values and functions, but you have to make some changes in the functions as well.** | 15,861
38,736,721 | We have a scenario where we have to authenticate the user with LDAP server
Flow 1:
```
client --> application server --> LDAP server
```
In the above flow the client enters LDAP credentials, which come to the application server; then, using python-ldap, we can authenticate the user. This is straightforward. Since the user's LDAP credentials come to the application server, an organisation may not be willing to accept such a flow, for obvious reasons.
Flow 2:
```
client --> oauth2 --> LDAP server
```
The OAuth scenario suits best here, since authentication of the user is the responsibility of OAuth and the application server does not need to know the user credentials.
Has anyone encountered such a case? If yes, how did you tackle it?
Are there any OAuth clients for LDAP, free or paid? | 2016/08/03 | [
"https://Stackoverflow.com/questions/38736721",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1207003/"
] | If you don't want user credentials to reach the application server, then what you need is perimeter authentication. You need an external authentication provider, say Oracle Access Manager (OAM), that will perform the authentication and set a certain token in the request. The application server can assert this token and let the user access resources. This model enables SSO as well.
1. The resources that require authorized access are configured as protected URLs in OAM.
2. When a user tries to access a protected resource he is challenged for credentials.
3. OAM authenticates the user against an LDAP directory(that is configured in OAM).
4. A token corresponding to the authenticated user is set in the request. Also an SSO cookie is set.
5. Application server (Weblogic) can assert (verify) this token and let the user access the resource.
Note: Oracle Access Manager supports oAuth as well. | Ory Hydra <https://ory.sh/hydra> might be what the original poster was asking for. This question is several years old now but in the interest of helping anyone else who sees this...check out Ory Hydra. It provides the OAuth2/OpenID parts and can be linked to an LDAP server behind the scenes. | 15,862 |
10,296,483 | ```
class Item(models.Model):
name = models.CharField(max_length = 200)
image = models.ImageField(upload_to = 'read', blank=True)
creative_url = models.CharField(max_length = 200)
description = RichTextField()
def save(self, *args, **kwargs):
content = urllib2.urlopen(self.creative_url).read()
self.image.save("test.jpg", File(content))
super(Item, self).save(*args, **kwargs)
```
Gives exception:
'str' object has no attribute 'name'
I was trying to follow this answer (http://stackoverflow.com/questions/1393202/django-add-image-in-an-imagefield-from-image-url), but it did not help to get rid of the exception.
---
```
AttributeError at /admin/collection/item/1/ 'str' object has no attribute 'name' Request Method: POST Request
URL: http://127.0.0.1:8000/admin/collection/item/1/ Django
Version: 1.2.5 Exception Type: AttributeError Exception Value: 'str'
object has no attribute 'name' Exception
Location: D:\FF\django\core\files\base.py in _get_size, line 39 Python
Executable: C:\Python27\python.exe Python Version: 2.7.2 Python
Path: ['D:\\FF',
'C:\\Python27\\lib\\site-packages\\django_social_auth-0.6.7-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\python_openid-2.2.5-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\oauth2-1.5.211-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\httplib2-0.7.4-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\selenium-2.20.0-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\ipython-0.12-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\django_localeurl-1.5-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\pil-1.1.7-py2.7-win32.egg',
'C:\\Python27\\lib\\site-packages\\pip-1.1-py2.7.egg',
'C:\\Windows\\system32\\python27.zip', 'C:\\Python27\\DLLs',
'C:\\Python27\\lib', 'C:\\Python27\\lib\\plat-win',
'C:\\Python27\\lib\\lib-tk', 'C:\\Python27',
'C:\\Python27\\lib\\site-packages',
'c:\\python27\\lib\\site-packages'] Server time: Tue, 24 Apr 2012
14:19:00 +0300
``` | 2012/04/24 | [
"https://Stackoverflow.com/questions/10296483",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126545/"
] | Instead of `File`, you need to use [`django.core.files.base.ContentFile`](https://docs.djangoproject.com/en/1.4/ref/files/file/#the-contentfile-class)
```
self.image.save("test.jpg", ContentFile(content), save=False)
```
`File` accepts a file object, or a `StringIO` object that has a `size` property; otherwise you need to manually set the `size` property of a `File` or `ImageFile` to make it work with `StringIO`:
```
s = StringIO()
s.write(urllib2.urlopen(self.creative_url).read())
s.size = s.tell()
self.image.save('test.jpg', File(s), save=False)
```
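The `size` bookkeeping above can be reproduced with the standard library alone (`io.BytesIO` is the Python 3 counterpart of `StringIO` for binary data):

```python
import io

buf = io.BytesIO()
buf.write(b"fake jpeg bytes")
size = buf.tell()   # number of bytes written so far
buf.seek(0)         # rewind before wrapping the buffer in File(...)

print(size)  # 15
```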
Also, please note the `save=False` inside `self.image.save`: by default, `save=True`, this will cause the instance, which contains the image field, to be saved. Thus the `save` logic in your code might encounter an infinite loop and reach maximum recursion depth. | Try something like:
-------------------
(As suggested at: [Programmatically saving image to Django ImageField](https://stackoverflow.com/questions/1308386/programmatically-saving-image-to-django-imagefield))
```
from django.db import models
from django.core.files.base import ContentFile
import urllib2
from PIL import Image
from StringIO import StringIO
class Item(models.Model):
name = models.CharField(max_length=200)
image = models.ImageField(upload_to='read', blank=True)
creative_url = models.URLField(max_length=200)
class Meta:
verbose_name = "Item"
verbose_name_plural = "Items"
def download_image(self, url):
input_file = StringIO(urllib2.urlopen(url).read())
output_file = StringIO()
img = Image.open(input_file)
if img.mode != "RGB":
img = img.convert("RGB")
img.save(output_file, "JPEG")
self.image.save(self.name+".jpg", ContentFile(output_file.getvalue()), save=False)
def save(self, *args, **kwargs):
self.download_image(self.creative_url)
super(Item, self).save(*args, **kwargs)
``` | 15,864 |
65,009,888 | I wrote a python script (with pandas library) to create txt files. I also use a txt file as an input. It works well but I want to make it more automated.
My code starts like;
```
girdi = input("Lütfen gir: ")
input2 = girdi+".txt"
veriCNR = pd.read_table(
input2, decimal=",",
usecols=[
"Chromosome",
"Name",
.
.
.
```
I am entering the name of the files one by one and getting outputs like this:
```
.
.
.
outputCNR = girdi+".cnr"
sonTabloCNR.to_csv(outputCNR, sep="\t", index=False)
outputCNS = girdi+".cns"
sonTabloCNS.to_csv(outputCNS, sep="\t", index=False)
outputCNG = girdi+".genemetrics.cns"
sonTabloCNG.to_csv(outputCNG, sep="\t", index=False)
```
As you can see, I am using the input name for the outputs as well. They are tab-separated txt files with different file extensions.
I want to use all txt files in a folder as an input and run this script for every one of them.
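The looping part can be sketched with the standard library alone (assuming the pipeline above is wrapped in a function; `pathlib` finds every `.txt` file and `.stem` drops the extension, matching the `girdi` naming scheme):

```python
import tempfile
from pathlib import Path

def process(girdi):
    # stand-in for the pandas pipeline above, which reads girdi + ".txt"
    # and writes girdi + ".cnr", ".cns" and ".genemetrics.cns"
    return "processed " + girdi

# demo on a throwaway folder containing two fake input files
folder = Path(tempfile.mkdtemp())
(folder / "sample1.txt").write_text("data")
(folder / "sample2.txt").write_text("data")

results = [process(txt.stem) for txt in sorted(folder.glob("*.txt"))]
print(results)  # ['processed sample1', 'processed sample2']
```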
I hope I explained it clearly.
ps. I am not a programmer. Please be explanatory with codes :) | 2020/11/25 | [
"https://Stackoverflow.com/questions/65009888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14570497/"
] | Your question was not clear; I assume that you will have multiple rows and multiple elements. Here is my solution according to what I understand.
```
payload.rows.forEach(x=> x.elements.forEach(y => console.log(y.distance.value)))
``` | ```
var payload = JSON.parse(body);
console.log(payload.rows[0]["elements"][0].distance.value);
``` | 15,867 |
14,659,118 | <http://pypi.python.org/pypi/pylinkgrammar>
I am encountering an error when attempting to install pylinkgrammar:
```
Running setup.py egg_info for package pylinkgrammar
Installing collected packages: pylinkgrammar
Running setup.py install for pylinkgrammar
...
running build_ext
building 'pylinkgrammar/_clinkgrammar' extension
swigging pylinkgrammar/link_grammar.i to pylinkgrammar/link_grammar_wrap.c
swig -python -o pylinkgrammar/link_grammar_wrap.c pylinkgrammar/link_grammar.i
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c pylinkgrammar/link_grammar_wrap.c -o build/temp.linux-x86_64-2.7/pylinkgrammar/link_grammar_wrap.o
pylinkgrammar/link_grammar_wrap.c:2973:40: fatal error: link-grammar/link-includes.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
``` | 2013/02/02 | [
"https://Stackoverflow.com/questions/14659118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2011567/"
] | Besides installing the liblink-grammar4 package, also install the liblink-grammar4-dev package, which is available in Synaptic.
I had been grappling with the same for over an hour, and it worked for me. | You first need to install the liblink-grammar4 library:
If you're on an Ubuntu system, you can run:
```
sudo apt-add-repository ppa:python-pylinkgrammar/getsome
sudo apt-get install liblink-grammar4
```
If you're on a different flavor of Linux, just make sure `liblink-grammar4` is installed. | 15,874
58,614,691 | I'm trying to log into my Google account using Python Selenium with chromedriver.
The code works, but not in headless mode. In headless mode, the identifierId element never appears :(
EDIT: added missing --disable-gpu
```py
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--window-size=1920,1080')
def do_login(email, password):
    driver = webdriver.Chrome(chrome_options=chrome_options)
    driver.get('https://accounts.google.com/signin/v2/identifier?flowName=GlifWebSignIn&flowEntry=ServiceLogin')
    email_phone = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//input[@id='identifierId']")))
    email_phone.send_keys(email)
    driver.find_element_by_id("identifierNext").click()
    pw = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.XPATH, "//input[@name='password']"))
    )
    pw.send_keys(password)
    driver.find_element_by_id("passwordNext").click()
    time.sleep(2)
    print(driver.title)
    driver.close()
```
It should log in just like in non-headless mode. | 2019/10/29 | [
"https://Stackoverflow.com/questions/58614691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7419986/"
] | You also have to add `--disable-gpu` to your chrome options.
```
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--window-size=1920,1080')
chrome_options.add_argument('--disable-gpu')
```
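For what it's worth, keeping the three switches together in one small helper (the helper name below is my own invention, nothing Selenium-specific) makes it harder to forget `--disable-gpu` again:

```python
def headless_chrome_flags(window_size="1920,1080"):
    """Return the Chrome command-line switches a headless run needs."""
    return [
        "--headless",
        "--disable-gpu",  # without this, some platforms never render the page
        "--window-size={}".format(window_size),
    ]

print(headless_chrome_flags())
```

Each entry can then be passed to `chrome_options.add_argument(...)` in a loop.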
That's what I had to add to get my headless code fully working. | This code works in headless mode but not with the GUI enabled:
```py
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--window-size=1920,1080')
def do_login(email, password):
    driver = webdriver.Chrome(chrome_options=chrome_options)
    driver.get('https://accounts.google.com/signin/v2/identifier?flowName=GlifWebSignIn&flowEntry=ServiceLogin')
    email_phone = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "Email")))
    email_phone.send_keys(email)
    driver.find_element_by_id("next").click()
    pw = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.ID, "Passwd"))
    )
    pw.send_keys(password)
    driver.find_element_by_id("signIn").click()
    driver.close()
``` | 15,879 |
21,513,899 | I am trying to store the following info in a Python list, but the strip function isn't working:
```
u'Studio', u'5', u'550.00 SqFt', u'No', u'Agent', u'Quarterly', u'Mediterranean Buildings (38-107)', u'Central A/C & Heating\n , \n \n Balcony\n , \n \n Shared Pool\n , \n \n Built in Wardrobes\n , \n \n Security\n , \n \n View of Landmark', u'']
``` | 2014/02/02 | [
"https://Stackoverflow.com/questions/21513899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1928163/"
] | You can collapse runs of whitespace inside a string with a regular expression:
```
import re
text_result = re.sub('\s+',' ', text_input)
```
*EDIT:*
You can even apply this function to every item in your list:
```
list_result = [re.sub("\s+", " ",x) for x in list_input]
``` | You have a list of strings (which you have left the opening brace off of).
You have one *really* ungainly string in index 7 of that list.
You just need to clean that one up. So:
```
li = [u'Studio', u'5', u'550.00 SqFt', u'No', u'Agent', u'Quarterly', u'Mediterranean Buildings (38-107)', u'Central A/C & Heating\n , \n \n Balcony\n , \n \n Shared Pool\n , \n \n Built in Wardrobes\n , \n \n Security\n , \n \n View of Landmark', u'']
', '.join([line.strip(', ') for line in li[7].splitlines() if line.strip(' ,')])
Out[35]: 'Central A/C & Heating, Balcony, Shared Pool, Built in Wardrobes, Security, View of Landmark'
```
`strip` accepts an iterable of characters to strip off both ends of a given string; I've told it to remove spaces and commas. Then we rejoin that into a string.
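A tiny standalone demo of `strip` with a character set:

```python
messy = '\n   Shared Pool\n , '
# str.strip(chars) removes ANY of the listed characters, but only from the ends
cleaned = messy.strip(' ,\n')
print(cleaned)  # -> 'Shared Pool'
```

Interior spaces are left alone, which is exactly what we want here.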
If you need to do this for *every* line in `li`, just do the above in a list comp:
```
[', '.join([line.strip(', ') for line in x.splitlines() if line.strip(' ,')]) for x in li]
```
At this point I would generally go back and re-write whatever code generated this list of clunky strings, since it's cleaner to simply fix the problem before it gets this far. | 15,881 |
62,228,457 | I'm trying to increase the efficiency of a non-conformity management program. Basically, I have a database containing about a few hundred rows, each row describes a non-conformity using a text field.
Text is provided in Italian and I have no control over what the user writes.
I'm trying to write a python program using NTLK to detect how many of these rows report the same problem, written differently but with similar content.
For example, the following sentences need to be related, with a high rate of confidence
* I received 10 pieces less than what was ordered
* 10 pieces have not been shipped
I already found the following article describing how to preprocess text for analysis:
[How to Develop a Paraphrasing Tool Using NLP (Natural Language Processing) Model in Python](https://paraphrase.projecttopics.org/how-to-develop-a-paraphrasing-tool-using-nlp-natural-language-processing-model-in-python.html)
I also found other questions on SO but they all refer to word similarity, two sentences comparison, or comparison using a reference meaning.
* [This one uses a reference meaning](https://stackoverflow.com/questions/24407333/how-to-perform-semantic-similarity-in-document)
* [This one refers to two sentences comparison](https://stackoverflow.com/questions/44234909/wordnet-based-semantic-similarity-measurements)
In my case, I have no reference and I have multiple sentences that need to be grouped if they refer to similar problems, so I wonder whether this job is even possible to do with a script.
[This answer](https://stackoverflow.com/a/2057825/10511191) says that it cannot be done but it's quite old and maybe someone knows something new.
Thanks to everyone who can help me. | 2020/06/06 | [
"https://Stackoverflow.com/questions/62228457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10511191/"
] | Thanks to Anurag Wagh's advice, I figured it out.
I used [this tutorial](https://www.machinelearningplus.com/nlp/gensim-tutorial/) about gensim and how to use it in many ways.
[Chapter 18](https://www.machinelearningplus.com/nlp/gensim-tutorial/#18howtocomputesimilaritymetricslikecosinesimilarityandsoftcosinesimilarity) does what I was asking for, but during my test, I found out a better way to achieve my goal.
[Chapter 11](https://www.machinelearningplus.com/nlp/gensim-tutorial/#11howtocreatetopicmodelswithlda) shows how to build an LDA model and how to extract a list of main topics among a set of documents.
Here is my code used to build the LDA model
```
# Step 0: Import packages and stopwords
from gensim.models import LdaModel, LdaMulticore
import gensim.downloader as api
from gensim.utils import simple_preprocess, lemmatize
from nltk.corpus import stopwords
from gensim import corpora
import re
import nltk
import string
import pattern
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')
logging.root.setLevel(level=logging.INFO)
docs = [doc for doc in open('file.txt', encoding='utf-8')]
# dictionary of Italian stop-words
it_stop_words = nltk.corpus.stopwords.words('italian')
it_stop_words = it_stop_words + [<custom stop words>]
# Snowball stemmer with rules for the Italian language
ita_stemmer = nltk.stem.snowball.ItalianStemmer()
# the following function is just to get the lemma
# out of the original input word
def lemmatize_word(input_word):
    in_word = input_word
    word_it = pattern.it.parse(
        in_word,
        tokenize=False,
        tag=False,
        chunk=False,
        lemmata=True
    )
    the_lemmatized_word = word_it.split()[0][0][4]
    return the_lemmatized_word

# Step 2: Prepare Data (Remove stopwords and lemmatize)
data_processed = []
for doc in docs:
    word_tokenized_list = nltk.tokenize.word_tokenize(doc)
    word_tokenized_no_punct = [x.lower() for x in word_tokenized_list if x not in string.punctuation]
    word_tokenized_no_punct_no_sw = [x for x in word_tokenized_no_punct if x not in it_stop_words]
    word_tokenized_no_punct_no_sw_no_apostrophe = [x.split("'") for x in word_tokenized_no_punct_no_sw]
    word_tokenized_no_punct_no_sw_no_apostrophe = [y for x in word_tokenized_no_punct_no_sw_no_apostrophe for y in x]
    data_processed.append(word_tokenized_no_punct_no_sw_no_apostrophe)

dct = corpora.Dictionary(data_processed)
corpus = [dct.doc2bow(line) for line in data_processed]

lda_model = LdaMulticore(corpus=corpus,
                         id2word=dct,
                         random_state=100,
                         num_topics=7,
                         passes=10,
                         chunksize=1000,
                         batch=False,
                         alpha='asymmetric',
                         decay=0.5,
                         offset=64,
                         eta=None,
                         eval_every=0,
                         iterations=100,
                         gamma_threshold=0.001,
                         per_word_topics=True)
# save the model
lda_model.save('lda_model.model')
# See the topics
lda_model.print_topics(-1)
```
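Independently of gensim, the grouping idea itself can be sketched with plain bag-of-words cosine similarity; this is only a toy illustration (no stemming, no stop words), not part of the pipeline above:

```python
from collections import Counter
import math

def cosine(tokens_a, tokens_b):
    """Bag-of-words cosine similarity between two token lists."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

s1 = "received 10 pieces less than ordered".split()
s2 = "10 pieces have not been shipped".split()
print(round(cosine(s1, s2), 3))  # -> 0.333
```

Sentences about the same problem tend to share enough tokens to score above unrelated pairs; a threshold (or clustering on the similarity matrix) then groups them.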
With the model trained, I can get a list of topics for each new non-conformity and detect whether it's related to something already reported by other non-conformities. | Perhaps converting each document to a vector and then computing the distance between two vectors would be helpful.
[doc2vec](https://radimrehurek.com/gensim/auto_examples/tutorials/run_doc2vec_lee.html#sphx-glr-auto-examples-tutorials-run-doc2vec-lee-py) can be helpful over here | 15,883 |
60,621,433 | pip install has suddenly stopped working - unsure if it's related to a recent update. I've tried it with both pip 19.0.3 and pip 20.0.2.
When using:
```
python -m pip install matplotlib --user
```
I get an error like this
```
PermissionError: [Errno 13] Permission denied: 'C:\\Program Files\\Python37\\Lib\\site-packages\\accesstest_deleteme_fishfingers_custard_d1677h'
```
Any help appreciated | 2020/03/10 | [
"https://Stackoverflow.com/questions/60621433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9988108/"
] | Using:
```
python -m pip install matplotlib
```
worked | I suspect you need to run your terminal as an administrator-elevated account to access the restricted resource. | 15,884 |
31,714,060 | For one of my assignments, rather than reading directly from a text file, we are taking the input directly from `sys.stdin`. I was wondering what the best way of obtaining this input and storing it would be?
So far, I've tried using:
`sys.stdin.readlines()` -- But this will not terminate unless it receives an EOF, while a zero in my input signals the end of the file.
`sys.stdin.readline()` -- It will only read the final line of the input code.
`input()` -- I can only read in the first line of the input and when I try to loop it, I end up with a syntax error.
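For reference, a sentinel-terminated read can be written against any iterable of lines, so the same function works for `sys.stdin` and for a plain list in a test (a sketch, not assignment-specific):

```python
import sys

def read_until_sentinel(lines, sentinel='0'):
    """Collect stripped lines until the sentinel value is seen."""
    collected = []
    for raw in lines:
        line = raw.strip()
        if line == sentinel:
            break
        collected.append(line)
    return collected

# read_until_sentinel(sys.stdin) consumes standard input the same way
print(read_until_sentinel(['3\n', '2 1 3\n', '0\n', 'ignored\n']))  # -> ['3', '2 1 3']
```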
An example of the input is below:
```
3
2 1 3
3 2 1
1 3 2
2 1 3
3 2 1
1 3 2
2
1 2
1 2
2 1
2 1
0
```
My attempt at obtaining the input:
```
input_list = []
while True:
    input_str = input("")
    if input_str == '0':
        break
    else:
        input_list.append(input_str)
print(input_list)
```
Returns this error while parsing the second line through:
[](https://i.stack.imgur.com/tsuNd.png)
Any help with this would be greatly appreciated. I've probably spent longer trying to get the input to work now than the actual assignment now.
EDIT: The version of Python we are using is 3.4
FINAL EDIT: Like both the answers said, it turns out that the university labs have a older version of python running when run in cmd prompt in the labs, where the input() method started behaving differently. Once I tried the code at home, it worked as intended. | 2015/07/30 | [
"https://Stackoverflow.com/questions/31714060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5171552/"
] | Looks like a typo. You use thread1 in both calls to pthread\_create.
```
iret1 = pthread_create( &thread1, 0, print_message_function1, (void*) message1);
iret2 = pthread_create( &thread1, 0, print_message_function2, (void*) message2);
```
So `pthread_join(thread2, 0);` is pretty much doomed. | This is really just **relevant information**, not an answer as such, but unfortunately SO does not support code in comments.
The problem that you *noticed* with your code was a simple typo, but I didn't see that until I read the now [accepted answer](https://stackoverflow.com/a/31714197/464581). For, I sat down and rewrote the code to standard C++, and in that process the typo disappeared, completely unnoticed! :) The problem that I did note was that you're using output statements without synchronization in your code, and that might cause output lines to be mixed, and is, I think, formally Undefined Behavior.
To fix that you can use a **mutex**. In this rewrite of your code I do not abstract up anything. But I think you can readily see the natural abstractions that hide between the code lines here:
```
#include <chrono> // operator""ms
#include <iostream>
#include <mutex>
#include <stdlib.h>
#include <thread>
using namespace std;
mutex output_ownership;
void print_message_function1( char const* const message )
{
    for( int i = 0; i < 10; ++i )
    {
        {
            lock_guard<mutex> mux( output_ownership );
            cout << "Hello #" << i << message << endl;
        }
        this_thread::sleep_for( 1ms );
    }
    lock_guard<mutex> mux( output_ownership );
    cout << "Thread 0 terminates" << endl;
}

void print_message_function2( char const* const message )
{
    for( int i = 0; i < 10; ++i )
    {
        {
            lock_guard<mutex> mux( output_ownership );
            cout << "Hello #" << i << message << endl;
        }
        this_thread::sleep_for( 1ms );
    }
    lock_guard<mutex> mux( output_ownership );
    cout << "Thread 1 terminates" << endl;
}

auto main() -> int
{
    cout << "Main: Creating threads" << endl;
    cout << "Main: Waiting for threads to finish" << endl << endl;

    thread thread1( print_message_function1, " from Thread 1" );
    thread thread2( print_message_function2, " from Thread 2" );
    thread1.join();
    thread2.join();

    cout << "Main: Exiting" << endl;
}
```
This compiles and runs fine with Visual C++ 2015. It does not compile with MinGW-64 g++ 5.1.0, because its `std::thread` support is very much lacking. I do not know whether it will compile with latest version of g++ in Unix-land. | 15,885 |
26,752,856 | I am using Python 2.7 with docx and I would like to change the background and text color of cells in my table based on a condition.
I could not find any useful resources about single-cell formatting.
Any suggestions?
Edit 1
my code
```
style_footer = "DarkList"
style_red = "ColorfulList"
style_yellow = "LightShading"
style_green = "MediumShading2-Accent6"
style_transperent = "TableNormal"
for a,rec in enumerate(data):
    # the heading is written from the first field of the table head
    document.add_heading(rec['tableHead'][0][0], level=1)
    image_path = imageFolder + "\\" + slike[a]
    document.add_picture(image_path, height=Inches(3.5))
    #y += 28
    #worksheet.insert_image( y, 1,imageFolder + "/" + slike[a])
    for i, head in enumerate(rec['tableHead']):
        table = document.add_table(rows=1, cols = len(head))
        hdr_cells = table.rows[0].cells
        for a in range(0,len(head)):
            hdr_cells[a].text = head[a]
    for a,body in enumerate(rec['tableData']):
        row_cells = table.add_row().cells
        for a in range(0,len(body)):
            if body[a]['style'] == 'footer':
                stil = style_footer
            elif body[a]['style'] == 'red':
                stil = style_red
            elif body[a]['style'] == 'yellow':
                stil = style_yellow
            elif body[a]['style'] == 'green':
                stil = style_green
            else:
                stil = style_transperent
            row_cells[a].add_paragraph(body[a]['value'], stil)
document.save(wordDoc)
```
All cells are still the same. | 2014/11/05 | [
"https://Stackoverflow.com/questions/26752856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] | If you want to color fill a specific cell in a table you can use the code below.
For example let's say you need to fill the first cell in the first row of your table with the RGB color 1F5C8B:
```
from docx.oxml.ns import nsdecls
from docx.oxml import parse_xml
shading_elm_1 = parse_xml(r'<w:shd {} w:fill="1F5C8B"/>'.format(nsdecls('w')))
table.rows[0].cells[0]._tc.get_or_add_tcPr().append(shading_elm_1)
```
Now if you want to also fill the second cell in the first row with the same color, you should create a new element;
otherwise, if you use the same element as above, the fill will move on and disappear from the first cell...
```
shading_elm_2 = parse_xml(r'<w:shd {} w:fill="1F5C8B"/>'.format(nsdecls('w')))
table.rows[0].cells[1]._tc.get_or_add_tcPr().append(shading_elm_2)
```
...and so on for other cells.
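As a side note, the string handed to `parse_xml` is ordinary namespaced XML; for the `w` prefix, `nsdecls('w')` expands to the standard WordprocessingML namespace declaration, which can be checked with the standard library alone:

```python
import xml.etree.ElementTree as ET

W_NS = 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'
xml = '<w:shd xmlns:w="{}" w:fill="1F5C8B"/>'.format(W_NS)

elm = ET.fromstring(xml)
print(elm.tag)     # the w: prefix is expanded to the full namespace
print(elm.attrib)  # {'{...}fill': '1F5C8B'}
```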
Source: <https://groups.google.com/forum/#!topic/python-docx/-c3OrRHA3qo> | With Nikos Tavoularis' solution, we have to create a new element for every cell.
I have created a function that achieves this. Works in Python revision 3.5.6 and python-docx revision 0.8.10
```
from docx.oxml import OxmlElement
from docx.oxml.ns import qn
def set_table_header_bg_color(cell):
    """
    set background shading for Header Rows
    """
    tblCell = cell._tc
    tblCellProperties = tblCell.get_or_add_tcPr()
    clShading = OxmlElement('w:shd')
    clShading.set(qn('w:fill'), "00519E") # Hex of Dark Blue Shade {R:0x00, G:0x51, B:0x9E}
    tblCellProperties.append(clShading)
    return cell
"""
End of set_table_header_bg_color Function
"""
# main function
"""
..
..
..
1. Load Document
..
2. Access the required section
..
3. Load the required Table
..
4. Traverse to the cell by accessing the rows object
..
"""
for each_row in table.rows:
    for each_cell in each_row.cells:
        if each_cell.value satisfies a condition:
            set_table_header_bg_color(each_cell)
"""
5. Continue execution
"""
``` | 15,886 |
66,310,704 | ```
Python 3.8.3
asgiref 3.3.1
Django 2.2
django-crispy-forms 1.10.0
djangorestframework 3.11.1
Pillow 7.2.0
pip 19.2.3
psycopg2 2.8.6
pytz 2020.1
setuptools 41.2.0
sqlparse 0.3.1
```
Sorry, I'm not sure where the abnormality in this situation is; can you point it out for me?
Then I posted the error information on Stack Overflow together with my code; the link is [AttributeError: type object 'Product' has no attribute 'objects'](https://stackoverflow.com/questions/66310704/attributeerror-type-object-product-has-no-attribute-objects).
You can click the link to take a look. Thank you.
Everyone, please help me look at this screen. After I clicked on the product I need to add to the cart, an error appeared and the product could not be added to my shopping cart. May I ask why this message appears?
File "C:\Users\georgiawang\PycharmProjects\libshopapp\store\views.py", line 195, in updateItem
product = Product.objects.get(id=productId)
AttributeError: type object 'Product' has no attribute 'objects'
Is it because views.py is damaged?
The **"POST /store/update_item/ HTTP/1.1" 500 59** message came out a bit unexpectedly; after all, I didn't mess with the program, and I don't know which part of the program is affected.
I hope someone can give me a hint; thank you very much.
The problem encountered is:
```
product = Product.objects.get(id=productId)
AttributeError: type object 'Product' has no attribute 'objects'
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
February 22, 2021 - 13:09:19
Django version 2.2, using settings 'libshopapp.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
[22/Feb/2021 13:09:22] "GET /store/ HTTP/1.1" 200 32153
Action: add
Product: 135
1.
Traceback (most recent call last):
File "D:\python\Python38\lib\site-packages\django\core\handlers\exception.py", line 34, in inner
response = get_response(request)
File "D:\python\Python38\lib\site-packages\django\core\handlers\base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "D:\python\Python38\lib\site-packages\django\core\handlers\base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\georgiawang\PycharmProjects\libshopapp\store\views.py", line 195, in updateItem
product = Product.objects.get(id=productId)
AttributeError: type object 'Product' has no attribute 'objects'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\python\Python38\lib\site-packages\django\core\handlers\exception.py", line 34, in inner
response = get_response(request)
File "D:\python\Python38\lib\site-packages\django\utils\deprecation.py", line 94, in __call__
response = response or self.get_response(request)
File "D:\python\Python38\lib\site-packages\django\core\handlers\exception.py", line 36, in inner
response = response_for_exception(request, exc)
File "D:\python\Python38\lib\site-packages\django\core\handlers\exception.py", line 90, in response_for_exception
response = handle_uncaught_exception(request, get_resolver(get_urlconf()), sys.exc_info())
2.
Traceback (most recent call last):
File "D:\python\Python38\lib\wsgiref\handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
File "D:\python\Python38\lib\site-packages\django\contrib\staticfiles\handlers.py", line 65, in __call__
return self.application(environ, start_response)
File "D:\python\Python38\lib\site-packages\django\core\handlers\wsgi.py", line 141, in __call__
response = self.get_response(request)
File "D:\python\Python38\lib\site-packages\django\core\handlers\base.py", line 75, in get_response
response = self._middleware_chain(request)
File "D:\python\Python38\lib\site-packages\django\core\handlers\exception.py", line 36, in inner
response = response_for_exception(request, exc)
File "D:\python\Python38\lib\site-packages\django\core\handlers\exception.py", line 90, in response_for_exception
response = handle_uncaught_exception(request, get_resolver(get_urlconf()), sys.exc_info())
File "D:\python\Python38\lib\site-packages\django\core\handlers\exception.py", line 125, in handle_uncaught_exception
return debug.technical_500_response(request, *exc_info)
File "D:\python\Python38\lib\site-packages\django\views\debug.py", line 94, in technical_500_response
html = reporter.get_traceback_html()
File "D:\python\Python38\lib\site-packages\django\views\debug.py", line 332, in get_traceback_html
t = DEBUG_ENGINE.from_string(fh.read())
UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 9735: illegal multibyte sequence
**[22/Feb/2021 13:09:27] "POST /store/update_item/ HTTP/1.1" 500 59**
```
Please help take a look, thank you.
//////////////////////////////////////////////////////code//////////////////////////////////////////////
***/// store / models.py ///***
```
from django.contrib.auth.models import User
from django.db import models
from django.urls import reverse
from django.utils import timezone
class Product(models.Model):
    name = models.CharField(max_length=200)
    content = models.TextField(default='')
    price = models.DecimalField(max_digits=7, decimal_places=2)
    digital = models.BooleanField(default=False, null=True, blank=True)
    draft = models.BooleanField(default=False)
    image = models.ImageField(
        upload_to=imgs,
        null=True,
        blank=True,
        width_field="width_field",
        height_field="height_field",
    )
    height_field = models.IntegerField(default=0)
    width_field = models.IntegerField(default=0)
    updated = models.DateTimeField(auto_now=True, auto_now_add=False)
    timestamp = models.DateTimeField(auto_now=True, auto_now_add=False)
    products = ProductManage()

    def __unicode__(self):
        return self.name

    def __str__(self):
        return self.name

    @property
    def imageURL(self):
        try:
            url = self.image.url
        except:
            url = ''
        return url

    def get_absolute_url(self):
        return reverse('store:detail', kwargs={'id': self.id})

    class Meta:
        ordering = ["-timestamp", "-updated"]
```
***/// home / urls.py ///***
```
from django.urls import path
from . import views
app_name = 'home'
urlpatterns = [
# path('order/', views.order, name='order'),
path('social/', views.socialview, name='social'),
path('about/', views.aboutview, name='about'),
path('contact/', views.contact, name='contact'),
path('', views.index, name='index'),
]
```
***/// store / urls.py ///***
```
from django.urls import path
from . import views
app_name = 'store'
urlpatterns = [
# Leave as empty string for base url
path('', views.store, name='store'),
path('cart/', views.cart, name='cart'),
path('create/', views.productCreate, name='create'),
path('<int:id>/', views.productDetail, name='detail'),
path('<int:id>/update/', views.productUpdate, name='update'),
path('<int:id>/delete/', views.productDelete, name='delete'),
path('checkout/', views.checkout, name='checkout'),
path('update_item/', views.updateItem, name='update_item'),
# path('update_item/', views.updateItem, name='update_item'),
path('process_order/', views.processOrder, name='process_order'),
path('searchbar/', views.searchbar, name='searchbar'),
]
```
***/// store / views.py ///***
```
import datetime
import json
from django.contrib import messages
from django.core.paginator import Paginator, PageNotAnInteger, EmptyPage
from django.http import JsonResponse, HttpResponseRedirect, Http404
from django.shortcuts import render, get_object_or_404, redirect
from .forms import ProductForms
from .models import Product
from .utils import cartData, guestOrder
from django.utils import timezone
def updateItem(request):
    data = json.loads(request.body)
    productId = data['productId']
    action = data['action']
    print('Action:', action)
    print('Product:', productId)

    customer = request.user.customer
    product = Product.objects.get(id=productId)
    order, created = Order.objects.get_or_create(customer=customer, complete=False)
    orderItem, created = OrderItem.objects.get_or_create(order=order, product=product)

    if action == 'add':
        orderItem.quantity = (orderItem.quantity + 1)
    elif action == 'remove':
        orderItem.quantity = (orderItem.quantity - 1)

    orderItem.save()

    if orderItem.quantity <= 0:
        orderItem.delete()

    return JsonResponse('Item was added', safe=False)
```
***/// cart.js ///***
question: "POST /store/update\_item/ HTTP/1.1" 500 59
```
var updateBtns = document.getElementsByClassName('update-cart')
for (i = 0; i < updateBtns.length; i++) {
updateBtns[i].addEventListener('click', function(){
var productId = this.dataset.product
var action = this.dataset.action
console.log('productId:', productId, 'Action:', action)
console.log('USER:', user)
if (user == 'AnonymousUser'){
addCookieItem(productId, action)
}else{
updateUserOrder(productId, action)
}
})
}
function updateUserOrder(productId, action){
console.log('User is authenticated, sending data...')
var url = '/store/update_item/'
fetch(url, {
method:'POST',
headers:{
'Content-Type':'application/json',
'X-CSRFToken':csrftoken,
},
body:JSON.stringify({'productId':productId, 'action':action})
})
.then((response) => {
return response.json();
})
.then((data) => {
location.reload()
});
}
function addCookieItem(productId, action){
console.log('User is not authenticated')
if (action == 'add'){
if (cart[productId] == undefined){
cart[productId] = {'quantity':1}
}else{
cart[productId]['quantity'] += 1
}
}
if (action == 'remove'){
cart[productId]['quantity'] -= 1
if (cart[productId]['quantity'] <= 0){
console.log('Item should be deleted')
delete cart[productId];
}
}
console.log('CART:', cart)
document.cookie ='cart=' + JSON.stringify(cart) + ";domain=;path=/"
location.reload()
}
```
***/// forms.py ///***
```
from django import forms
from store.models import Product, RegistrationData
class ProductForms(forms.ModelForm):
    class Meta:
        model = Product
        fields = [
            'name',
            'price',
            'digital',
            'image',
            'width_field',
            'height_field',
            'content',
        ]
``` | 2021/02/22 | [
"https://Stackoverflow.com/questions/66310704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15011706/"
] | Where do you import your models.py in your views.py?
Just import your models file in your views, and then you will be able to access your model classes in your views file.
something just like this:
```
from <models_directory> import models
def updateItem(request):
    data = json.loads(request.body)
    productId = data['productId']
    product = models.Product.objects.get(id=productId)
    # and other...
``` | Make sure you did not define a class, function, or variable with the same name. | 15,896
25,709,478 | If I shuffle a small list using [python's `random.shuffle`](https://docs.python.org/3/library/random.html#random.shuffle), I'm also getting unshuffled results.
```
import random
for i in range(10):
    ori = [1, 2, 3]
    per = ori[:]
    random.shuffle(per)
    print i, per, (per == ori) or ""
```
Here is a sample output:
```
0 [1, 3, 2]
1 [1, 2, 3] True
2 [3, 1, 2]
3 [2, 3, 1]
4 [1, 2, 3] True
5 [2, 3, 1]
6 [3, 2, 1]
7 [2, 1, 3]
8 [2, 1, 3]
9 [2, 1, 3]
```
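With 3 items there are 3! = 6 orderings, so an unbiased shuffle hands back the original about 1/6 of the time, consistent with the repeats above; a quick empirical check:

```python
import random
from math import factorial

random.seed(0)  # fixed seed only for reproducibility
trials = 60000
base = [1, 2, 3]
# random.sample(base, len(base)) returns a uniformly shuffled copy
same = sum(random.sample(base, 3) == base for _ in range(trials))

print(1 / factorial(3))  # theoretical rate of "unshuffled" results
print(same / trials)     # observed rate, close to 1/6
```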
I understand that this must necessarily be the case, after looking into the algorithm details [[1]](https://stackoverflow.com/a/3062966/2932052), [[2]](https://stackoverflow.com/a/18628010/2932052). But I really want to get a small unsorted list (say 3 to 6 items) programmatically.
What do you think is the best approach to do this? | 2014/09/07 | [
"https://Stackoverflow.com/questions/25709478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2932052/"
] | If you must exclude the original ordering, you could simply retry the shuffle until `per != ori`. | Since it's a short array do a quick shuffle manually:
```
import random
for i in range(len(ori) - 1):
    j = random.randint(i + 1, len(ori) - 1)
    ori[i], ori[j] = ori[j], ori[i]
```
This way you will ensure that you won't get the original array. This is an `O(n)` solution; you should only use it on small arrays. | 15,897
42,136,431 | I'm using Active Directory with Windows Server 2008 R2. I have an application running with Django and Python 2.7. Now I need to use Active Directory authentication to access my application.
To do that, I'm using these packages:
```
sudo apt-get update
sudo apt-get install python-dev libldap2-dev libsasl2-dev libssl-dev
sudo pip install django-auth-ldap
sudo pip install python-ldap
```
I used the full name to have a successful bind.
```
import ldap
from django.conf import settings
username='my full name in AD'
password= 'my password'
l = ldap.initialize(settings.AUTH_LDAP_SERVER_URI)
l.simple_bind_s(username,password)
```
[](https://i.stack.imgur.com/V5vYg.png)
My problem is that when I'm trying to use my cn attribute for the bind, I get this error:
```
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 207, in
simple_bind_s
return self.result(msgid,all=1,timeout=self.timeout)
File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 422, in
result res_type,res_data,res_msgid = self.result2(msgid,all,timeout)
File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 426, in
result2 res_type, res_data, res_msgid, srv_ctrls =
self.result3(msgid,all,timeout)
File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 432, in
result3 ldap_result = self._ldap_call(self._l.result3,msgid,all,timeout)
File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 96, in
_ldap_call result = func(*args,**kwargs)
INVALID_CREDENTIALS: {'info': '80090308: LdapErr: DSID-0C0903A9, comment:
AcceptSecurityContext error, data 52e, v1db1', 'desc': 'Invalid
credentials'}
```
[](https://i.stack.imgur.com/V5vYg.png)
Can I use my cn (common name) attribute for authentication to my Active Directory?
Thanks. | 2017/02/09 | [
"https://Stackoverflow.com/questions/42136431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4787419/"
] | You can't use the cn value in your simple\_bind(). Use the alternate user principal name `sAMAccountName@FQDN` instead, or one of the other [supported bind names](https://msdn.microsoft.com/en-us/library/cc223499.aspx). | @marabu, thanks for the reply. You're right, I can't use any such attribute (like cn, ...) in the simple bind.
We have access to this attribute only via the search method, after a successful bind.
In my case I have two choices:
1) simple\_bind\_s(full\_name, password)
2) simple\_bind\_s(sAMAccountName@FQDN, password), if we are using the email address for our session authentication
like this:
[](https://i.stack.imgur.com/YCmuk.png) | 15,899 |
51,268,745 | How can I create a permutation mapping of two lists in python?
For example I have two lists `[1,2,3]` and `['A','B','C']`
Then my code should generate a list of 6 dictionaries
```
[ {1:'A',2:'B',3:'C'},
{1:'A',2:'C',3:'B'},
{1:'B',2:'A',3:'C'},
{1:'B',2:'C',3:'A'},
{1:'C',2:'A',3:'B'},
{1:'C',2:'B',3:'A'} ]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51268745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8272518/"
] | Using [`zip`](https://docs.python.org/3/library/functions.html#zip) and [`itertools.permutations`](https://docs.python.org/3/library/itertools.html#itertools.permutations) in a list comprehension:
```
>>> from itertools import permutations
>>> L1 = [1,2,3]
>>> L2 = ['A','B','C']
>>> [dict(zip(L1, p)) for p in permutations(L2)]
[{1: 'A', 2: 'B', 3: 'C'},
{1: 'A', 2: 'C', 3: 'B'},
{1: 'B', 2: 'A', 3: 'C'},
{1: 'B', 2: 'C', 3: 'A'},
{1: 'C', 2: 'A', 3: 'B'},
{1: 'C', 2: 'B', 3: 'A'}]
``` | You seem to permute only the values of the dicts, so you could do something like:
```
from itertools import permutations
dicts = []
keys = [1, 2, 3]
for values in permutations(['A', 'B', 'C']):
new_dict = dict(zip(keys, values))
dicts.append(new_dict)
``` | 15,900 |
1,150,765 | I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:
```
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
```
prints:
```
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
```
I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "`+`" in C for example implies a function call to `__add__`. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome.
So, what is the special treatment that A-D undergo, making them come out as the same instance? | 2009/07/19 | [
"https://Stackoverflow.com/questions/1150765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125185/"
] | Python is allowed to inline string constants; A,B,C,D are actually the same literals (if Python sees a constant expression, it treats it as a constant).
`str` is actually a class, so `str(whatever)` is calling this class' constructor, which should yield a fresh object. This explains E,F,G (note that each of these has separate identity).
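A minimal sketch of both cases (CPython-specific: object identity here is an implementation detail, not a language guarantee):

```
a = "10000"        # literal, stored as a constant in the code object
c = "100" + "00"   # constant expression, folded to "10000" at compile time
e = str(10000)     # computed at runtime: a brand-new string object

print(a is c)      # True in CPython: both names refer to the same constant
print(a is e)      # False: str() built a fresh object
```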
As for H, I am not sure, but I'd go for the explanation that this expression is too complicated for Python to figure out it's actually a constant, so it computes a new string. | I believe short strings that can be evaluated at compile time will be interned automatically. In the last examples, the result can't be evaluated at compile time because `str` or `join` might be redefined. | 15,901
20,448,734 | ```
#!/bin/sh
echo "Hello from sh"
```
---
When I run this program (hello.sh) I get a command not found error. I'm using Cygwin on Windows and I looked at Cygwin's FAQ. It said to change the permissions to 755. I already did that and still no luck. I get the same error with these other two programs.
```
#!/usr/bin/env python
print "Hello from python"
```
---
```
#!/usr/local/bin/perl
print "Hello from perl\n";
```
---
Any answers are appreciated. Thanks. | 2013/12/08 | [
"https://Stackoverflow.com/questions/20448734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | As has already been said, you need to add the Cygwin binaries to your path. To do so, right click on "My Computer", click "Properties", then "Advanced", then "Environment Variables".
Create a new environment variable with name `CYGWIN_HOME` and value `C:\cygwin` (or wherever you installed cygwin. The default location is `C:\cygwin\` so this should probably work for you).
Then edit the environment variable named "PATH", and tack on the following to the end:
```
;%CYGWIN_HOME%\bin;%CYGWIN_HOME%\sbin;%CYGWIN_HOME%\usr\bin;%CYGWIN_HOME%\usr\sbin;%CYGWIN_HOME%\usr\local\bin;%CYGWIN_HOME%\usr\local\sbin
```
Close your command prompt, then reopen it. The Cygwin binaries should now be available. You can double-check this by typing "which bash"; it should report the location of your bash executable. | I was getting the "command not found" error on a Perl script. That script has the shebang line "#!/usr/bin/env perl" as the first line, and my user is the owner with execute permissions. I was trying to run it with the command line "ppminstall.pl ?" (the script displays its usage documentation when passed the '?' argument), while cd'd to the directory containing the script.

The solution turned out to be to run it as "./ppminstall.pl ?", i.e. to explicitly specify the current directory. You might be able to add "." to your path and get the original command line to work, but it seems safer to use "./", since it is explicit about where the script you want to run is located; otherwise someone could manipulate your Path variable and cause you to run a version of the script you didn't intend. The '.' and '/' characters are easy to type without taking your eyes off the screen, so it seems like a useful habit to get into.

Of course, I don't know if that is your problem, but it was mine. | 15,906
67,698,235 | I wonder if there is any way to speed up python3/numpy's `np.where` operation? I have a minimal working example as follows:
```
from time import time
import numpy as np
a = np.random.randint(0, 4, (768, 512, 512))
b = a.copy().astype(np.uint8)
c = b.copy()
print(a.shape)
ts = time()
a[a > 0] = 1
print(f'normalize in {time() - ts}s')
ts = time()
b[b > 0] = 1
print(f'normalize in {time() - ts}s')
ts = time()
c = np.where(c > 0, 1, c)
print(f'normalize in {time() - ts}s')
```
and the output is:
```
normalize in 0.9307191371917725s
normalize in 0.8891170024871826s
normalize in 0.7120938301086426s
```
As we can see, `np.where` gives the fastest result, but it is still pretty slow; in my project I need to do such a normalization about 50 times...
Is there any faster way to do this? All I need is to convert any non-zero elements to 1. Thx! | 2021/05/26 | [
"https://Stackoverflow.com/questions/67698235",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15424596/"
] | The best way to accomplish your operation is to convert the matrix to the boolean datatype and then back to integer, especially if 8-bit integers are used:
```
import numpy as np
import timeit
a = np.random.randint(0, 4, (768, 512, 512))
a_short = a.astype(np.uint8)
# Time to initialize
%timeit b=a.copy()
201 ms ± 561 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit b=a.copy(); b=b.astype(bool).astype(int)
373 ms ± 347 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit b=a.copy(); b=np.where(b>0,1,b)
985 ms ± 150 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit b=a.copy(); b[b>0]=1
1.09 s ± 1.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# Time to initialize
%timeit b=a_short.copy()
26.7 ms ± 202 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
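As a related sketch (not benchmarked above): if allocating a fresh array is acceptable, the comparison itself already yields the mask, and a single cast turns it into 0/1 `uint8` without mutating the input (a tiny array is used here so it runs quickly):

```
import numpy as np

a = np.random.randint(0, 4, (8, 16, 16)).astype(np.uint8)

# (a > 0) is already the boolean mask; one cast gives the 0/1 uint8 output.
b = (a > 0).astype(np.uint8)
```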
The best of the best:
```
%timeit b=a_short.copy(); b=b.astype(bool).astype(np.uint8)
77.5 ms ± 47.8 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit b=a_short.copy(); b=np.where(b>0,1,b)
570 ms ± 476 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit b=a_short.copy();b[b>0]=1
844 ms ± 4.09 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
``` | You could try with `numexpr`:
```
import numpy as np
import numexpr as ne
# for timing reference
a = np.random.randint(0, 4, (768, 512, 512))
%timeit a[a > 0] = 1
```
782 ms ± 9.42 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
a = np.random.randint(0, 4, (768, 512, 512))
%timeit a = ne.evaluate("where(a > 0, 1, a)")
```
254 ms ± 2.65 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
---
Or with `Numba`:
```
import numba as nb
@nb.jit(nopython=True, fastmath=True, parallel=True)
def n(x):
for a in nb.prange(x.shape[0]):
for b in range(x.shape[1]):
for c in range(x.shape[2]):
if x[a,b,c]>0:
x[a,b,c]=1
return x
# You should do this twice as it gets compiled first time through
%timeit c = n(a)
# Also, try re-assigning back on top of original, i.e. a=n(a)
```
113 ms ± 551 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) | 15,908 |
66,730 | I've created a python object, but I want to send signals on it. I made it inherit from gobject.GObject, but there doesn't seem to be any way to create a new signal on my object. | 2008/09/15 | [
"https://Stackoverflow.com/questions/66730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8453/"
] | You can also define signals inside the class definition:
```
class MyGObjectClass(gobject.GObject):
__gsignals__ = {
"some-signal": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, (object, )),
}
```
The contents of the tuple are the same as the three last arguments to `gobject.signal_new`. | Here is how:
```
import gobject
class MyGObjectClass(gobject.GObject):
...
gobject.signal_new("signal-name", MyGObjectClass, gobject.SIGNAL_RUN_FIRST,
None, (str, int))
```
Where the second to last argument is the return type and the last argument is a tuple of argument types. | 15,909 |
73,069,374 | I have this table
| Stars |
| --- |
| 3 stars |
| Stars 20 |
| 901stars |
| 8 |
I'm using python to filter the table but I'm not sure of the regex to reject 8. `[^0-9]` will flag `3 stars, 901stars` as errors too but I just want to flag that 8 is incorrect based on the regex.
The regex I need would only flag out the numbers (not number+string or string+number). | 2022/07/21 | [
"https://Stackoverflow.com/questions/73069374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16125533/"
] | I'd suggest `^\d+$` to match an entry that consists of one or more digits and nothing else; a bare `\d+$` would also match values like `Stars 20` that merely end in digits.
disclaimer: I don't have a clue wrt Python, but if it has a standard regex library, this should work. | Here is a way without using regex.
```
pd.to_numeric(df['Stars'],errors = 'coerce').isna()
``` | 15,911 |
2,587,709 | I was wondering if there is a way to automatically run commands on entering the python shell, as you would with the .bash\_profile or .profile scripts with bash. I would like to automatically import some modules so I don't have to type the whole shebang every time I hop into the shell.
Thanks, | 2010/04/06 | [
"https://Stackoverflow.com/questions/2587709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/278205/"
] | Yup you can use the `PYTHONSTARTUP` environment variable to do this as outlined [here](http://docs.python.org/tutorial/interpreter.html#the-interactive-startup-file) | Also consider using [ipython](http://ipython.scipy.org/) if you're doing a lot of interactive work. Your options for this kind of automation expand significantly. | 15,912 |
56,902,458 | I am trying to use this example code from the PyTorch [website](https://pytorch.org/tutorials/advanced/cpp_export.html) to convert a python model for use in the PyTorch c++ api (LibTorch).
```
Converting to Torch Script via Tracing
To convert a PyTorch model to Torch Script via tracing, you must pass an instance of your model along with an example input to the torch.jit.trace function. This will produce a torch.jit.ScriptModule object with the trace of your model evaluation embedded in the module’s forward method:
import torch
import torchvision
# An instance of your model.
model = torchvision.models.resnet18()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")
```
This example works fine, and saves out the file as expected.
When i switch to this model:
`model = models.segmentation.deeplabv3_resnet101(pretrained=True)`
It gives me the following error:
```
File "convert.py", line 14, in <module>
traced_script_module = torch.jit.trace(model, example)
File "C:\Python37\lib\site-packages\torch\jit\__init__.py", line 636, in trace
raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1])
```
I assume this is because the `example` format is wrong, but how can I get the correct one?
Based on the comments below, my new code is:
```
import torch
import torchvision
from torchvision import models
model = models.segmentation.deeplabv3_resnet101(pretrained=True)
model.eval()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")
```
And i now get the error:
```
File "convert.py", line 15, in <module>
traced_script_module = torch.jit.trace(model, example)
File "C:\Python37\lib\site-packages\torch\jit\__init__.py", line 636, in trace
var_lookup_fn, _force_outplace)
RuntimeError: Only tensors and (possibly nested) tuples of tensors are supported as inputs or outputs of traced functions (toIValue at C:\a\w\1\s\windows\pytorch\torch/csrc/jit/pybind_utils.h:91)
(no backtrace available)
``` | 2019/07/05 | [
"https://Stackoverflow.com/questions/56902458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4402282/"
] | (from pytorch forums)
trace only supports modules that have tensor or tuple of tensor as output.
According to deeplabv3 implementation, its output is OrderedDict. That is a problem.
To solve this, make a wrapper module
```
class wrapper(torch.nn.Module):
def __init__(self, model):
super(wrapper, self).__init__()
self.model = model
    def forward(self, input):
        # Unpack the OrderedDict returned by the segmentation model into
        # a plain tuple of tensors, which torch.jit.trace supports.
        results = []
output = self.model(input)
for k, v in output.items():
results.append(v)
return tuple(results)
model = wrapper(deeplap_model)
#trace...
```
This gets my model saving out. | Your problem originates in the BatchNorm layer. If it requires *more than one value per channel*, then your model is in training mode. Could you invoke <https://pytorch.org/cppdocs/api/classtorch_1_1nn_1_1_module.html#_CPPv4N5torch2nn6Module4evalEv> on the model and see if there's an improvement?
Otherwise you could also try to generate random data with more than one instance in a batch, i.e. `example = torch.rand(5, 3, 224, 224)`.
Furthermore, you should take care to properly normalise your data; however, that isn't what is causing the error here. | 15,913
14,425,833 | What I'm trying to do seems rather simple, but I can't find a way to do it.
Imagine somebody sends you a link for a dropbox folder. You can go to that URL and see all the files in the folder.
I'm trying to write a script in either python, php, or javascript to get all the download links in that folder from that URL.
How would I do that? Without logging in/authenticating into the account of the user who sent the link in the first place. | 2013/01/20 | [
"https://Stackoverflow.com/questions/14425833",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/706798/"
] | In absence of suffixes, sufficiently small numbers have `int` or `double` types
```
a = 42; /* 42 has type int */
b = 42.0; /* 42.0 has type double */
```
You can use suffixes to specify the type of the literal
```
c = 42U; /* unsigned int */
d = 42.0f; /* float */
e = 42.0L; /* long double */
f = 42ULL; /* unsigned long long (in C99; C89 didn't have long long) */
``` | >
> Will I need to cast one of the operands to (float) to make this
> condition true?
>
>
>
Yes, because integral literals are of type `int` and a division between two `int` types returns also an `int`, meaning that the fraction is omitted.
>
> Has the situation now changed, because the compiler notices one of
> the operands has to be float?
>
>
>
Yes, because one of the operands of the division is now a `double` and therefore the division returns a `double` to maintain precision. (`float` literals are followed by an `f`, as in `1.0f`, but this doesn't matter much regarding your question.) | 15,914 |
58,464,713 | `H4sIAAAAAAAAAO1aT3PbSHaHrPHYkj1j73gn2doku3B2N7PJLjz4T0BVqQpFQiQ4BCCBoCjiomoADRIk/mhBUBT5AXJL5ZbkkqocUqVrDvkE+ijzFXJNJXkNSjLHpjyyx95JuWQfRHSjG6/f+/V7v/e6tylqi9qItimKYu5R96Jg4x82qPu1bJoWG9vUZoEGW9QnOPWHFPm3SW01owDvxWgwgcf/2aa2O+NpHFuzFOcPqXt6QP06CESZD8KAYVUkMUJFDhkk4goTYB7xvuf7LO/BuP08O8F5EeHJFvWwwGfFNMcTIsbGQ+r+IYqnmPpPPG+x7tGQDY5asT/XZXh2Omxs6aOTip4ezr2aLusJ9Dd3F1aknAaNQzFoHs7dI2Pq9eKp2zPn/Z7LthMpDmrq9VzthRm7I501Fi34a8xMx+ANxxDNxgG0aaLb6HP9xB6avdbY6O2NjORA6ieHQ6PuwjP08W7c7x3wbuNANPnuwuTtsTvqRuERp8IKtqkHQTQ5idEcdNfOcvwQGh9Rzy7OFRv/YRrBQukYn+KYlqkH0LibzYg26YvzSmeYzeh5Ns1pHOMBSn1Mz6JiSM+GUYGpn8Ibk5MoR3GUDmiU5zDwOYz8S2ivnpzEc7oYRhM6n6aYLjLag246y+lwOsHUn8M7xSyD9gEuhjinUQFvY9qeppFPPYbefRzgSYFiMuPnF+fo4jzumjXLMCzzIfWJiRJM/dnFefjtP/893SPi0J1SFDIDpnUdlv1EOytyVC2KPPKmBTHnfSLLZJN63Gvqjnbc2dftahvmv0c9JGCjPrG7pkYeP62jBA0wIKxEIcVdofDxEoWvI2+Nju9RXxJ15nhG9KOngxwHEU4LsrwQlvPdxTyCRsubREGE0jXCLwV8aO129LpeNVeFpJZCbvzTlZCbt9kqawTepH55cY7taDAsGD+O/DEx22mEZ3SO/egEl9bdvjiXQHhtX69diQ5Qkmr5nFiL3svRICGLvGkJT2t2v+NU28d7drVhaKazbiknV0t5+hb63livWABPqKUBYGwfozy+UbBHmlnX7ON9rWq318n0Qz1R6Ps+UkSfkYUKeKJACBlP9XxG9LGIKkooCzz60T3RwaLv7A3dup30eYM1EjcyR+T/4MxY6KLrHMzden/RXwwWfV47sxwzMhbGot/rz/vQZ478udXQ5mZPE2E8b9SrvOXob++JxFVPRPxJLccI9jiiBzihcRhiv6A9PIzSoPRPBJWVD+mB1iLri4tznvifBsh0G7ez2dCMW3ibjfwKaX/3ITfyDU71M2juZXFIO1lWDG/cLds9q7137FiW01y3WT6CJfz8agl/9Y5LeLBEbpZOpgmJACUWDTSIfPCTxnQyzLMsoX5B4mw2PaGzdBlqT/LolGA9msQoBb9BkB3HBLoEnuQbAFZQRgjwDgHNsFiefAnBR8ji4P0XdBuj0+U38dWE1HPoWU4KUTyOQZFJdorLV5Y76g0q/Rk0vyo6TcS+UbvPjGpDrx0b3U7TtizjuGN199ep+W9ecasb1KNu6sF2HyMvxhu30vRnZCtW6jjE6QTvEI38jmepP4U2PS1wHEcDsNxlhyTBgL+4OJf3wD/THVzQu1k6nezQNg5z5BdRlpaackAtsPsL8Bjh0nQoT0Dd/hClIP5X8EqYgxJgThZ+E4sQS7Dsb8hTgE9wGoAFSvuCaYmW/WmeQ2ikY4Lqpad7AcL8CUHJZVe77GqTLuonMB8HsVahfwvTvmB/89eXtAyaHCIQsJ6E9lEKnhCMCVgY4OA5GXYVoOmm1jY0Z5O672dxllP//V//e2XQn6zE7CaOE7w+Yl8OfMRL/A4vCju8JD2ktrI8GkSpgwbUlzW7uufoZuO4Yev145rVbms1Z4mAz68i/VIK+PJ0Cs2/VjxfDRFSGBZxEP44RWUUUcJMKAU8z8tCIAoyfKSIEuJ+kxPqifo1r37NqbS6I6j0vvHG6PzsJsbwekRGPK9glV
OYCs8HjIAEjvF8TmBAPJWVEBsIiP/xIzLw/y7E2HjojiDyOvrc6hmSMTJEt16dA99nTb7PGfxh3B91IW84OINIfGZBXmD09IXZg/jcg1zBOZDMBskxDsdu3XhjRH5XGggRVQJdFeBJYJ/UgQgC7r+XD36xb1sOYMay1zLCu5h4t4S7JXx0S3j+yhI+QEJNGhvT9CSbQe53o5wPOt32frNrf6R6/t2HYVeCRBjhGnbFcexHTq++XKFXtabWcfbbVUdbR7G+XKFYtSFwGdAr0PD3TbO+uKJZL6W5plqCxFUQV1EZT8UqI8oKZhQFcQwbyh5f8X0RS69TLYlQLXlHVOjqWqr1EWyLuyXcLeEHLuF6Ff/4Sm34rVdxn/p56Ut9cMsTvEzz02niQZqfhdSvoA/yjWya+9Dnl3WwgPYuS1uQ8xOPSrwgNMGSzgTiH0ldQKCHMIiUA24sYJGacw2BfwIndXPNuVYF7wIuZp0Jf/+B4stN2bv6sYeXZyvhpa01GuDzO+uCy2rNvY0HECdAw+87tFzX6q8kuQ4snqAqiljxGFlRIYcXQ4HxOCFkFM4LIbCwouwH6wNLZYcT1ufw1G/fhRTeUPV/Ao02bJTrOttN8H5sa/XrKtWdVD9Mqq/eo1SwgcLdPJul3y/X57u21TPvJHu/kr124nWbmposewKreiIji4hlBFypgE9Q4FcQSArrS7zgK7eqqW0ckZoa6nGxL9hD76gaWSONM0fG3BzpgumMWfg90dPduce7J17jsOWSGtuoKxgNfWb2WkNj5CbGwh5ajcPITezIqu/Gfb41cutBDHPM9GgW+c3WqZvEE7crDb1eN7KiluEJrXGXV6deOo7atdYC9YJp/8jm/OSw4x7tceioFZNvBc0W53Zm0UotL7LSSdRli5YTlTVAGDMjc2ZB0569Vd2P12bmyB5bji+ZyeHY7LnD/uiANXt21F+MZ/D7zBy5I7PRnZuLg7kxGojGQhcMJ45MxzizGtrMqGtzM9HmllOFOTTWHWlvrPs9JIEVwlNllVBB8EPpnJ6hOCbn7RUIp+UphT/MIr88VZuc5GhOIuEvlqPbOFxhY8gH2lJymsuMcUwbKAWU5TQ5iOiQweVpWwfHZeHwxU1khTSWr9O1G07IH0FUgzCMShbwiVbtkMr38q4CRf0tAKsUldoiXqxqA/w3qU/nmJytQP+T5bbY6uzb1f5x7btH7FdVyMH6M+ASuPeozfg0pjaXu5qm3oUKPSI7WNXCMPJhJf6c1nW9PEV6yQ7JlYw/TEG78XxpiSLLSrZRKb81ob0488eTF0Rfqo2i1IMBesk8amhKZpgMMT4htglgE9KIzsEQ4CjIgegM5qIJYYB5Z0OckndRjoMX1FOYzQETjaMUT4C9LMXaJd8qDZlkXjnfhB7AxBHw7yKelyQuSuFTpaRRegrmyfISLc9WbUx3mlrV7lyZeouYmnx5PaUh+v10OWKbekwMgNKCFJknm9SDfLlkYpFNahtfq5Kw8U3qUfFyDaUbXef3mA9DajnpBlIrfuxHUk9XSO0upFJrGe3TFUa7CznZ+6ezn13R2VKGl0USTgwVXvQYkVcqjIh4CFkyChhflnhPlCsIAtqrXJYruay4Iwo3FEl+8L01XlGCioIYgav4jOBJEqMgFTEVzgtkFYUKV2F/7LMpXhchzkIUChKr3hpDrB33RwOp74y5fo/cQRvGEIvmpmPG5M6a6XTZ/sJN3HorMZLuwupB7KobPPTBX3NoNEhE87/ntshPX78tAq9vQ2sPo5Msvbq7VhuW99VIgAL0jPES4impHMN2Ihl3efCd4gSUR/wTbJAQkuvnl+PX3B+ZLT9AA/7IdiTXSOjbXyEBli3DJoBUSjPrVbu/crSuklsk7Wv5bnOXZKutN5qOCUCnbnGj5F2YXIhDVfUUgeEVjyAQdoSq4gpTkcEFqEhBSJVvh8D/uD0Cq3J7rq68KxWoJ8V9oTV004OplxyybcGOcZOwse6pyR
8srHo8Mp1DYHZ9wW3onLlojc3RmDd6gDr+gDP5si02F/0zYzEW+qO+0HfgHYLIhiEYiTuynL2x27DHwKTeiL6tMuh9A+FziRgUR6d4GSjLxxzTl646ACQNv5rQQZblv6cHEGKL8pIkBNQsL0omBVqLsoC4coiLUepHXhRHBcTGZ9e3RiAnAz8NMCM++XnpZn3TcrQd2jLbfbpn2d904CftNDUaQEVqWr7eaVfNOrkEQq09pSVVaRsnKUi0DCOY1ubrq9IrbnV71+p0jjv71Z55WXW2NcOsms6xtXcMnz/W+i+rzlwgsYIcYqaCPJ4RWcgBVFn1GKwIKqtwqsCz/ncdKsd+zV86VO6V4sAdhO8g/MeHMM+rIe+TG+OeyDGiDwktYkWf8VSVq4ghL7BicAfhOwj/P4ZwKCOskvvGCgoURgyQyCicX2FYFPisKEqBLEvfgfBTgPCS1yo7vHKH4TsM/+gYZoNAxaIgAIZ5lRErQcioISsyHhIUlvXEkGPDt8HwVXb2s7fKztbWyhig8zrJdqtLbe/QnQTF5DIl5K8X5xiUZROKTtfaeu2bsrR22eXz7Lf/+u/w/PWkPKirTF5cVt6UHimlDOE1Yu/0q4Kk+/64NKRCymZ0LZsUpGggyCwpMCi1LIuDbJaWhQSOyPtL+MC3//JvdGmcyzTJzRIvwnT7sE1zZK7V81C6B0ZeudGOShnAnkRaUi34Hnu+KeF+QuYmdm1q1Ta8cm1XUVBCWUYKw7EepNyiLzFqgDnGD8Mg9Cuij9Aau3LErtIOJ792fkRR/weTySE2OjYAAA\u003d\u003d
H4sIAAAAAAAAAO1aT3PbSHaHrPHYkj1j73gn2doku3B2N7PJLjz4T0BVqQpFQiQ4BCCBoCjiomoADRIk/mhBUBT5AXJL5ZbkkqocUqVrDvkE+ijzFXJNJXkNSjLHpjyyx95JuWQfRHSjG6/f+/V7v/e6tylqi9qItimKYu5R96Jg4x82qPu1bJoWG9vUZoEGW9QnOPWHFPm3SW01owDvxWgwgcf/2aa2O+NpHFuzFOcPqXt6QP06CESZD8KAYVUkMUJFDhkk4goTYB7xvuf7LO/BuP08O8F5EeHJFvWwwGfFNMcTIsbGQ+r+IYqnmPpPPG+x7tGQDY5asT/XZXh2Omxs6aOTip4ezr2aLusJ9Dd3F1aknAaNQzFoHs7dI2Pq9eKp2zPn/Z7LthMpDmrq9VzthRm7I501Fi34a8xMx+ANxxDNxgG0aaLb6HP9xB6avdbY6O2NjORA6ieHQ6PuwjP08W7c7x3wbuNANPnuwuTtsTvqRuERp8IKtqkHQTQ5idEcdNfOcvwQGh9Rzy7OFRv/YRrBQukYn+KYlqkH0LibzYg26YvzSmeYzeh5Ns1pHOMBSn1Mz6JiSM+GUYGpn8Ibk5MoR3GUDmiU5zDwOYz8S2ivnpzEc7oYRhM6n6aYLjLag246y+lwOsHUn8M7xSyD9gEuhjinUQFvY9qeppFPPYbefRzgSYFiMuPnF+fo4jzumjXLMCzzIfWJiRJM/dnFefjtP/893SPi0J1SFDIDpnUdlv1EOytyVC2KPPKmBTHnfSLLZJN63Gvqjnbc2dftahvmv0c9JGCjPrG7pkYeP62jBA0wIKxEIcVdofDxEoWvI2+Nju9RXxJ15nhG9KOngxwHEU4LsrwQlvPdxTyCRsubREGE0jXCLwV8aO129LpeNVeFpJZCbvzTlZCbt9kqawTepH55cY7taDAsGD+O/DEx22mEZ3SO/egEl9bdvjiXQHhtX69diQ5Qkmr5nFiL3svRICGLvGkJT2t2v+NU28d7drVhaKazbiknV0t5+hb63livWABPqKUBYGwfozy+UbBHmlnX7ON9rWq318n0Qz1R6Ps+UkSfkYUKeKJACBlP9XxG9LGIKkooCzz60T3RwaLv7A3dup30eYM1EjcyR+T/4MxY6KLrHMzden/RXwwWfV47sxwzMhbGot/rz/vQZ478udXQ5mZPE2E8b9SrvOXob++JxFVPRPxJLccI9jiiBzihcRhiv6A9PIzSoPRPBJWVD+mB1iLri4tznvifBsh0G7ez2dCMW3ibjfwKaX/3ITfyDU71M2juZXFIO1lWDG/cLds9q7137FiW01y3WT6CJfz8agl/9Y5LeLBEbpZOpgmJACUWDTSIfPCTxnQyzLMsoX5B4mw2PaGzdBlqT/LolGA9msQoBb9BkB3HBLoEnuQbAFZQRgjwDgHNsFiefAnBR8ji4P0XdBuj0+U38dWE1HPoWU4KUTyOQZFJdorLV5Y76g0q/Rk0vyo6TcS+UbvPjGpDrx0b3U7TtizjuGN199ep+W9ecasb1KNu6sF2HyMvxhu30vRnZCtW6jjE6QTvEI38jmepP4U2PS1wHEcDsNxlhyTBgL+4OJf3wD/THVzQu1k6nezQNg5z5BdRlpaackAtsPsL8Bjh0nQoT0Dd/hClIP5X8EqYgxJgThZ+E4sQS7Dsb8hTgE9wGoAFSvuCaYmW/WmeQ2ikY4Lqpad7AcL8CUHJZVe77GqTLuonMB8HsVahfwvTvmB/89eXtAyaHCIQsJ6E9lEKnhCMCVgY4OA5GXYVoOmm1jY0Z5O672dxllP//V//e2XQn6zE7CaOE7w+Yl8OfMRL/A4vCju8JD2ktrI8GkSpgwbUlzW7uufoZuO4Yev145rVbms1Z4mAz68i/VIK+PJ0Cs2/VjxfDRFSGBZxEP44RWUUUcJMKAU8z8tCIAoyfKSIEuJ+kxPqifo1r37NqbS6I6j0vvHG6PzsJsbwekRGPK9glVOYCs8HjIAEjvF8
TmBAPJWVEBsIiP/xIzLw/y7E2HjojiDyOvrc6hmSMTJEt16dA99nTb7PGfxh3B91IW84OINIfGZBXmD09IXZg/jcg1zBOZDMBskxDsdu3XhjRH5XGggRVQJdFeBJYJ/UgQgC7r+XD36xb1sOYMay1zLCu5h4t4S7JXx0S3j+yhI+QEJNGhvT9CSbQe53o5wPOt32frNrf6R6/t2HYVeCRBjhGnbFcexHTq++XKFXtabWcfbbVUdbR7G+XKFYtSFwGdAr0PD3TbO+uKJZL6W5plqCxFUQV1EZT8UqI8oKZhQFcQwbyh5f8X0RS69TLYlQLXlHVOjqWqr1EWyLuyXcLeEHLuF6Ff/4Sm34rVdxn/p56Ut9cMsTvEzz02niQZqfhdSvoA/yjWya+9Dnl3WwgPYuS1uQ8xOPSrwgNMGSzgTiH0ldQKCHMIiUA24sYJGacw2BfwIndXPNuVYF7wIuZp0Jf/+B4stN2bv6sYeXZyvhpa01GuDzO+uCy2rNvY0HECdAw+87tFzX6q8kuQ4snqAqiljxGFlRIYcXQ4HxOCFkFM4LIbCwouwH6wNLZYcT1ufw1G/fhRTeUPV/Ao02bJTrOttN8H5sa/XrKtWdVD9Mqq/eo1SwgcLdPJul3y/X57u21TPvJHu/kr124nWbmposewKreiIji4hlBFypgE9Q4FcQSArrS7zgK7eqqW0ckZoa6nGxL9hD76gaWSONM0fG3BzpgumMWfg90dPduce7J17jsOWSGtuoKxgNfWb2WkNj5CbGwh5ajcPITezIqu/Gfb41cutBDHPM9GgW+c3WqZvEE7crDb1eN7KiluEJrXGXV6deOo7atdYC9YJp/8jm/OSw4x7tceioFZNvBc0W53Zm0UotL7LSSdRli5YTlTVAGDMjc2ZB0569Vd2P12bmyB5bji+ZyeHY7LnD/uiANXt21F+MZ/D7zBy5I7PRnZuLg7kxGojGQhcMJ45MxzizGtrMqGtzM9HmllOFOTTWHWlvrPs9JIEVwlNllVBB8EPpnJ6hOCbn7RUIp+UphT/MIr88VZuc5GhOIuEvlqPbOFxhY8gH2lJymsuMcUwbKAWU5TQ5iOiQweVpWwfHZeHwxU1khTSWr9O1G07IH0FUgzCMShbwiVbtkMr38q4CRf0tAKsUldoiXqxqA/w3qU/nmJytQP+T5bbY6uzb1f5x7btH7FdVyMH6M+ASuPeozfg0pjaXu5qm3oUKPSI7WNXCMPJhJf6c1nW9PEV6yQ7JlYw/TEG78XxpiSLLSrZRKb81ob0488eTF0Rfqo2i1IMBesk8amhKZpgMMT4htglgE9KIzsEQ4CjIgegM5qIJYYB5Z0OckndRjoMX1FOYzQETjaMUT4C9LMXaJd8qDZlkXjnfhB7AxBHw7yKelyQuSuFTpaRRegrmyfISLc9WbUx3mlrV7lyZeouYmnx5PaUh+v10OWKbekwMgNKCFJknm9SDfLlkYpFNahtfq5Kw8U3qUfFyDaUbXef3mA9DajnpBlIrfuxHUk9XSO0upFJrGe3TFUa7CznZ+6ezn13R2VKGl0USTgwVXvQYkVcqjIh4CFkyChhflnhPlCsIAtqrXJYruay4Iwo3FEl+8L01XlGCioIYgav4jOBJEqMgFTEVzgtkFYUKV2F/7LMpXhchzkIUChKr3hpDrB33RwOp74y5fo/cQRvGEIvmpmPG5M6a6XTZ/sJN3HorMZLuwupB7KobPPTBX3NoNEhE87/ntshPX78tAq9vQ2sPo5Msvbq7VhuW99VIgAL0jPES4impHMN2Ihl3efCd4gSUR/wTbJAQkuvnl+PX3B+ZLT9AA/7IdiTXSOjbXyEBli3DJoBUSjPrVbu/crSuklsk7Wv5bnOXZKutN5qOCUCnbnGj5F2YXIhDVfUUgeEVjyAQdoSq4gpTkcEFqEhBSJVvh8D/uD0Cq3J7rq68KxWoJ8V9oTV004OplxyybcGOcZOwse6pyR8srHo8Mp1DYHZ9
wW3onLlojc3RmDd6gDr+gDP5si02F/0zYzEW+qO+0HfgHYLIhiEYiTuynL2x27DHwKTeiL6tMuh9A+FziRgUR6d4GSjLxxzTl646ACQNv5rQQZblv6cHEGKL8pIkBNQsL0omBVqLsoC4coiLUepHXhRHBcTGZ9e3RiAnAz8NMCM++XnpZn3TcrQd2jLbfbpn2d904CftNDUaQEVqWr7eaVfNOrkEQq09pSVVaRsnKUi0DCOY1ubrq9IrbnV71+p0jjv71Z55WXW2NcOsms6xtXcMnz/W+i+rzlwgsYIcYqaCPJ4RWcgBVFn1GKwIKqtwqsCz/ncdKsd+zV86VO6V4sAdhO8g/MeHMM+rIe+TG+OeyDGiDwktYkWf8VSVq4ghL7BicAfhOwj/P4ZwKCOskvvGCgoURgyQyCicX2FYFPisKEqBLEvfgfBTgPCS1yo7vHKH4TsM/+gYZoNAxaIgAIZ5lRErQcioISsyHhIUlvXEkGPDt8HwVXb2s7fKztbWyhig8zrJdqtLbe/QnQTF5DIl5K8X5xiUZROKTtfaeu2bsrR22eXz7Lf/+u/w/PWkPKirTF5cVt6UHimlDOE1Yu/0q4Kk+/64NKRCymZ0LZsUpGggyCwpMCi1LIuDbJaWhQSOyPtL+MC3//JvdGmcyzTJzRIvwnT7sE1zZK7V81C6B0ZeudGOShnAnkRaUi34Hnu+KeF+QuYmdm1q1Ta8cm1XUVBCWUYKw7EepNyiLzFqgDnGD8Mg9Cuij9Aau3LErtIOJ792fkRR/weTySE2OjYAAA==`
When I try to store it in a variable in python, it doesn't work because it contains single and double quotes, so I get an EOL error. How can I fix this? | 2019/10/19 | [
"https://Stackoverflow.com/questions/58464713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11989704/"
] | There are no quotes in your string; it's simply made up of two identical base64 encoded strings, each of which can be decoded fine after a small fix: it appears that what has happened is that the trailing `==` in the first string have become `\u003d\u003d`. Replace `\u003d\u003d` with `==` and use the first string, or just use the second string as is, and you're good.
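A minimal sketch of that fix on a short stand-in value (the real blob is long, and its `H4sI` prefix suggests it also needs gunzipping after the base64 decode):

```
import base64

# Stand-in for the escaped text: a literal "\u003d" is just '='
# written as a JSON-style escape.
raw = "SGVsbG8sIHdvcmxkIQ\\u003d\\u003d"
fixed = raw.replace("\\u003d", "=")

print(fixed)                    # SGVsbG8sIHdvcmxkIQ==
print(base64.b64decode(fixed))  # b'Hello, world!'
```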
I.e.,
```py
s1 = 'H4sIAAAAAAAAAO1aT3PbSHaHrPHYkj1j73gn2doku3B2N7PJLjz4T0BVqQpFQiQ4BCCBoCjiomoADRIk/mhBUBT5AXJL5ZbkkqocUqVrDvkE+ijzFXJNJXkNSjLHpjyyx95JuWQfRHSjG6/f+/V7v/e6tylqi9qItimKYu5R96Jg4x82qPu1bJoWG9vUZoEGW9QnOPWHFPm3SW01owDvxWgwgcf/2aa2O+NpHFuzFOcPqXt6QP06CESZD8KAYVUkMUJFDhkk4goTYB7xvuf7LO/BuP08O8F5EeHJFvWwwGfFNMcTIsbGQ+r+IYqnmPpPPG+x7tGQDY5asT/XZXh2Omxs6aOTip4ezr2aLusJ9Dd3F1aknAaNQzFoHs7dI2Pq9eKp2zPn/Z7LthMpDmrq9VzthRm7I501Fi34a8xMx+ANxxDNxgG0aaLb6HP9xB6avdbY6O2NjORA6ieHQ6PuwjP08W7c7x3wbuNANPnuwuTtsTvqRuERp8IKtqkHQTQ5idEcdNfOcvwQGh9Rzy7OFRv/YRrBQukYn+KYlqkH0LibzYg26YvzSmeYzeh5Ns1pHOMBSn1Mz6JiSM+GUYGpn8Ibk5MoR3GUDmiU5zDwOYz8S2ivnpzEc7oYRhM6n6aYLjLag246y+lwOsHUn8M7xSyD9gEuhjinUQFvY9qeppFPPYbefRzgSYFiMuPnF+fo4jzumjXLMCzzIfWJiRJM/dnFefjtP/893SPi0J1SFDIDpnUdlv1EOytyVC2KPPKmBTHnfSLLZJN63Gvqjnbc2dftahvmv0c9JGCjPrG7pkYeP62jBA0wIKxEIcVdofDxEoWvI2+Nju9RXxJ15nhG9KOngxwHEU4LsrwQlvPdxTyCRsubREGE0jXCLwV8aO129LpeNVeFpJZCbvzTlZCbt9kqawTepH55cY7taDAsGD+O/DEx22mEZ3SO/egEl9bdvjiXQHhtX69diQ5Qkmr5nFiL3svRICGLvGkJT2t2v+NU28d7drVhaKazbiknV0t5+hb63livWABPqKUBYGwfozy+UbBHmlnX7ON9rWq318n0Qz1R6Ps+UkSfkYUKeKJACBlP9XxG9LGIKkooCzz60T3RwaLv7A3dup30eYM1EjcyR+T/4MxY6KLrHMzden/RXwwWfV47sxwzMhbGot/rz/vQZ478udXQ5mZPE2E8b9SrvOXob++JxFVPRPxJLccI9jiiBzihcRhiv6A9PIzSoPRPBJWVD+mB1iLri4tznvifBsh0G7ez2dCMW3ibjfwKaX/3ITfyDU71M2juZXFIO1lWDG/cLds9q7137FiW01y3WT6CJfz8agl/9Y5LeLBEbpZOpgmJACUWDTSIfPCTxnQyzLMsoX5B4mw2PaGzdBlqT/LolGA9msQoBb9BkB3HBLoEnuQbAFZQRgjwDgHNsFiefAnBR8ji4P0XdBuj0+U38dWE1HPoWU4KUTyOQZFJdorLV5Y76g0q/Rk0vyo6TcS+UbvPjGpDrx0b3U7TtizjuGN199ep+W9ecasb1KNu6sF2HyMvxhu30vRnZCtW6jjE6QTvEI38jmepP4U2PS1wHEcDsNxlhyTBgL+4OJf3wD/THVzQu1k6nezQNg5z5BdRlpaackAtsPsL8Bjh0nQoT0Dd/hClIP5X8EqYgxJgThZ+E4sQS7Dsb8hTgE9wGoAFSvuCaYmW/WmeQ2ikY4Lqpad7AcL8CUHJZVe77GqTLuonMB8HsVahfwvTvmB/89eXtAyaHCIQsJ6E9lEKnhCMCVgY4OA5GXYVoOmm1jY0Z5O672dxllP//V//e2XQn6zE7CaOE7w+Yl8OfMRL/A4vCju8JD2ktrI8GkSpgwbUlzW7uufoZuO4Yev145rVbms1Z4mAz68i/VIK+PJ0Cs2/VjxfDRFSGBZxEP44RWUUUcJMKAU8z8tCIAoyfKSIEuJ+kxPqifo1r37NqbS6I6j0vvHG6PzsJsbwekRGPK9glVOYCs8HjI
AEjvF8TmBAPJWVEBsIiP/xIzLw/y7E2HjojiDyOvrc6hmSMTJEt16dA99nTb7PGfxh3B91IW84OINIfGZBXmD09IXZg/jcg1zBOZDMBskxDsdu3XhjRH5XGggRVQJdFeBJYJ/UgQgC7r+XD36xb1sOYMay1zLCu5h4t4S7JXx0S3j+yhI+QEJNGhvT9CSbQe53o5wPOt32frNrf6R6/t2HYVeCRBjhGnbFcexHTq++XKFXtabWcfbbVUdbR7G+XKFYtSFwGdAr0PD3TbO+uKJZL6W5plqCxFUQV1EZT8UqI8oKZhQFcQwbyh5f8X0RS69TLYlQLXlHVOjqWqr1EWyLuyXcLeEHLuF6Ff/4Sm34rVdxn/p56Ut9cMsTvEzz02niQZqfhdSvoA/yjWya+9Dnl3WwgPYuS1uQ8xOPSrwgNMGSzgTiH0ldQKCHMIiUA24sYJGacw2BfwIndXPNuVYF7wIuZp0Jf/+B4stN2bv6sYeXZyvhpa01GuDzO+uCy2rNvY0HECdAw+87tFzX6q8kuQ4snqAqiljxGFlRIYcXQ4HxOCFkFM4LIbCwouwH6wNLZYcT1ufw1G/fhRTeUPV/Ao02bJTrOttN8H5sa/XrKtWdVD9Mqq/eo1SwgcLdPJul3y/X57u21TPvJHu/kr124nWbmposewKreiIji4hlBFypgE9Q4FcQSArrS7zgK7eqqW0ckZoa6nGxL9hD76gaWSONM0fG3BzpgumMWfg90dPduce7J17jsOWSGtuoKxgNfWb2WkNj5CbGwh5ajcPITezIqu/Gfb41cutBDHPM9GgW+c3WqZvEE7crDb1eN7KiluEJrXGXV6deOo7atdYC9YJp/8jm/OSw4x7tceioFZNvBc0W53Zm0UotL7LSSdRli5YTlTVAGDMjc2ZB0569Vd2P12bmyB5bji+ZyeHY7LnD/uiANXt21F+MZ/D7zBy5I7PRnZuLg7kxGojGQhcMJ45MxzizGtrMqGtzM9HmllOFOTTWHWlvrPs9JIEVwlNllVBB8EPpnJ6hOCbn7RUIp+UphT/MIr88VZuc5GhOIuEvlqPbOFxhY8gH2lJymsuMcUwbKAWU5TQ5iOiQweVpWwfHZeHwxU1khTSWr9O1G07IH0FUgzCMShbwiVbtkMr38q4CRf0tAKsUldoiXqxqA/w3qU/nmJytQP+T5bbY6uzb1f5x7btH7FdVyMH6M+ASuPeozfg0pjaXu5qm3oUKPSI7WNXCMPJhJf6c1nW9PEV6yQ7JlYw/TEG78XxpiSLLSrZRKb81ob0488eTF0Rfqo2i1IMBesk8amhKZpgMMT4htglgE9KIzsEQ4CjIgegM5qIJYYB5Z0OckndRjoMX1FOYzQETjaMUT4C9LMXaJd8qDZlkXjnfhB7AxBHw7yKelyQuSuFTpaRRegrmyfISLc9WbUx3mlrV7lyZeouYmnx5PaUh+v10OWKbekwMgNKCFJknm9SDfLlkYpFNahtfq5Kw8U3qUfFyDaUbXef3mA9DajnpBlIrfuxHUk9XSO0upFJrGe3TFUa7CznZ+6ezn13R2VKGl0USTgwVXvQYkVcqjIh4CFkyChhflnhPlCsIAtqrXJYruay4Iwo3FEl+8L01XlGCioIYgav4jOBJEqMgFTEVzgtkFYUKV2F/7LMpXhchzkIUChKr3hpDrB33RwOp74y5fo/cQRvGEIvmpmPG5M6a6XTZ/sJN3HorMZLuwupB7KobPPTBX3NoNEhE87/ntshPX78tAq9vQ2sPo5Msvbq7VhuW99VIgAL0jPES4impHMN2Ihl3efCd4gSUR/wTbJAQkuvnl+PX3B+ZLT9AA/7IdiTXSOjbXyEBli3DJoBUSjPrVbu/crSuklsk7Wv5bnOXZKutN5qOCUCnbnGj5F2YXIhDVfUUgeEVjyAQdoSq4gpTkcEFqEhBSJVvh8D/uD0Cq3J7rq68KxWoJ8V9oTV004OplxyybcGOcZOwse6pyR8srHo8Mp
1DYHZ9wW3onLlojc3RmDd6gDr+gDP5si02F/0zYzEW+qO+0HfgHYLIhiEYiTuynL2x27DHwKTeiL6tMuh9A+FziRgUR6d4GSjLxxzTl646ACQNv5rQQZblv6cHEGKL8pIkBNQsL0omBVqLsoC4coiLUepHXhRHBcTGZ9e3RiAnAz8NMCM++XnpZn3TcrQd2jLbfbpn2d904CftNDUaQEVqWr7eaVfNOrkEQq09pSVVaRsnKUi0DCOY1ubrq9IrbnV71+p0jjv71Z55WXW2NcOsms6xtXcMnz/W+i+rzlwgsYIcYqaCPJ4RWcgBVFn1GKwIKqtwqsCz/ncdKsd+zV86VO6V4sAdhO8g/MeHMM+rIe+TG+OeyDGiDwktYkWf8VSVq4ghL7BicAfhOwj/P4ZwKCOskvvGCgoURgyQyCicX2FYFPisKEqBLEvfgfBTgPCS1yo7vHKH4TsM/+gYZoNAxaIgAIZ5lRErQcioISsyHhIUlvXEkGPDt8HwVXb2s7fKztbWyhig8zrJdqtLbe/QnQTF5DIl5K8X5xiUZROKTtfaeu2bsrR22eXz7Lf/+u/w/PWkPKirTF5cVt6UHimlDOE1Yu/0q4Kk+/64NKRCymZ0LZsUpGggyCwpMCi1LIuDbJaWhQSOyPtL+MC3//JvdGmcyzTJzRIvwnT7sE1zZK7V81C6B0ZeudGOShnAnkRaUi34Hnu+KeF+QuYmdm1q1Ta8cm1XUVBCWUYKw7EepNyiLzFqgDnGD8Mg9Cuij9Aau3LErtIOJ792fkRR/weTySE2OjYAAA=='
s2 = 'H4sIAAAAAAAAAO1aT3PbSHaHrPHYkj1j73gn2doku3B2N7PJLjz4T0BVqQpFQiQ4BCCBoCjiomoADRIk/mhBUBT5AXJL5ZbkkqocUqVrDvkE+ijzFXJNJXkNSjLHpjyyx95JuWQfRHSjG6/f+/V7v/e6tylqi9qItimKYu5R96Jg4x82qPu1bJoWG9vUZoEGW9QnOPWHFPm3SW01owDvxWgwgcf/2aa2O+NpHFuzFOcPqXt6QP06CESZD8KAYVUkMUJFDhkk4goTYB7xvuf7LO/BuP08O8F5EeHJFvWwwGfFNMcTIsbGQ+r+IYqnmPpPPG+x7tGQDY5asT/XZXh2Omxs6aOTip4ezr2aLusJ9Dd3F1aknAaNQzFoHs7dI2Pq9eKp2zPn/Z7LthMpDmrq9VzthRm7I501Fi34a8xMx+ANxxDNxgG0aaLb6HP9xB6avdbY6O2NjORA6ieHQ6PuwjP08W7c7x3wbuNANPnuwuTtsTvqRuERp8IKtqkHQTQ5idEcdNfOcvwQGh9Rzy7OFRv/YRrBQukYn+KYlqkH0LibzYg26YvzSmeYzeh5Ns1pHOMBSn1Mz6JiSM+GUYGpn8Ibk5MoR3GUDmiU5zDwOYz8S2ivnpzEc7oYRhM6n6aYLjLag246y+lwOsHUn8M7xSyD9gEuhjinUQFvY9qeppFPPYbefRzgSYFiMuPnF+fo4jzumjXLMCzzIfWJiRJM/dnFefjtP/893SPi0J1SFDIDpnUdlv1EOytyVC2KPPKmBTHnfSLLZJN63Gvqjnbc2dftahvmv0c9JGCjPrG7pkYeP62jBA0wIKxEIcVdofDxEoWvI2+Nju9RXxJ15nhG9KOngxwHEU4LsrwQlvPdxTyCRsubREGE0jXCLwV8aO129LpeNVeFpJZCbvzTlZCbt9kqawTepH55cY7taDAsGD+O/DEx22mEZ3SO/egEl9bdvjiXQHhtX69diQ5Qkmr5nFiL3svRICGLvGkJT2t2v+NU28d7drVhaKazbiknV0t5+hb63livWABPqKUBYGwfozy+UbBHmlnX7ON9rWq318n0Qz1R6Ps+UkSfkYUKeKJACBlP9XxG9LGIKkooCzz60T3RwaLv7A3dup30eYM1EjcyR+T/4MxY6KLrHMzden/RXwwWfV47sxwzMhbGot/rz/vQZ478udXQ5mZPE2E8b9SrvOXob++JxFVPRPxJLccI9jiiBzihcRhiv6A9PIzSoPRPBJWVD+mB1iLri4tznvifBsh0G7ez2dCMW3ibjfwKaX/3ITfyDU71M2juZXFIO1lWDG/cLds9q7137FiW01y3WT6CJfz8agl/9Y5LeLBEbpZOpgmJACUWDTSIfPCTxnQyzLMsoX5B4mw2PaGzdBlqT/LolGA9msQoBb9BkB3HBLoEnuQbAFZQRgjwDgHNsFiefAnBR8ji4P0XdBuj0+U38dWE1HPoWU4KUTyOQZFJdorLV5Y76g0q/Rk0vyo6TcS+UbvPjGpDrx0b3U7TtizjuGN199ep+W9ecasb1KNu6sF2HyMvxhu30vRnZCtW6jjE6QTvEI38jmepP4U2PS1wHEcDsNxlhyTBgL+4OJf3wD/THVzQu1k6nezQNg5z5BdRlpaackAtsPsL8Bjh0nQoT0Dd/hClIP5X8EqYgxJgThZ+E4sQS7Dsb8hTgE9wGoAFSvuCaYmW/WmeQ2ikY4Lqpad7AcL8CUHJZVe77GqTLuonMB8HsVahfwvTvmB/89eXtAyaHCIQsJ6E9lEKnhCMCVgY4OA5GXYVoOmm1jY0Z5O672dxllP//V//e2XQn6zE7CaOE7w+Yl8OfMRL/A4vCju8JD2ktrI8GkSpgwbUlzW7uufoZuO4Yev145rVbms1Z4mAz68i/VIK+PJ0Cs2/VjxfDRFSGBZxEP44RWUUUcJMKAU8z8tCIAoyfKSIEuJ+kxPqifo1r37NqbS6I6j0vvHG6PzsJsbwekRGPK9glVOYCs8HjI
AEjvF8TmBAPJWVEBsIiP/xIzLw/y7E2HjojiDyOvrc6hmSMTJEt16dA99nTb7PGfxh3B91IW84OINIfGZBXmD09IXZg/jcg1zBOZDMBskxDsdu3XhjRH5XGggRVQJdFeBJYJ/UgQgC7r+XD36xb1sOYMay1zLCu5h4t4S7JXx0S3j+yhI+QEJNGhvT9CSbQe53o5wPOt32frNrf6R6/t2HYVeCRBjhGnbFcexHTq++XKFXtabWcfbbVUdbR7G+XKFYtSFwGdAr0PD3TbO+uKJZL6W5plqCxFUQV1EZT8UqI8oKZhQFcQwbyh5f8X0RS69TLYlQLXlHVOjqWqr1EWyLuyXcLeEHLuF6Ff/4Sm34rVdxn/p56Ut9cMsTvEzz02niQZqfhdSvoA/yjWya+9Dnl3WwgPYuS1uQ8xOPSrwgNMGSzgTiH0ldQKCHMIiUA24sYJGacw2BfwIndXPNuVYF7wIuZp0Jf/+B4stN2bv6sYeXZyvhpa01GuDzO+uCy2rNvY0HECdAw+87tFzX6q8kuQ4snqAqiljxGFlRIYcXQ4HxOCFkFM4LIbCwouwH6wNLZYcT1ufw1G/fhRTeUPV/Ao02bJTrOttN8H5sa/XrKtWdVD9Mqq/eo1SwgcLdPJul3y/X57u21TPvJHu/kr124nWbmposewKreiIji4hlBFypgE9Q4FcQSArrS7zgK7eqqW0ckZoa6nGxL9hD76gaWSONM0fG3BzpgumMWfg90dPduce7J17jsOWSGtuoKxgNfWb2WkNj5CbGwh5ajcPITezIqu/Gfb41cutBDHPM9GgW+c3WqZvEE7crDb1eN7KiluEJrXGXV6deOo7atdYC9YJp/8jm/OSw4x7tceioFZNvBc0W53Zm0UotL7LSSdRli5YTlTVAGDMjc2ZB0569Vd2P12bmyB5bji+ZyeHY7LnD/uiANXt21F+MZ/D7zBy5I7PRnZuLg7kxGojGQhcMJ45MxzizGtrMqGtzM9HmllOFOTTWHWlvrPs9JIEVwlNllVBB8EPpnJ6hOCbn7RUIp+UphT/MIr88VZuc5GhOIuEvlqPbOFxhY8gH2lJymsuMcUwbKAWU5TQ5iOiQweVpWwfHZeHwxU1khTSWr9O1G07IH0FUgzCMShbwiVbtkMr38q4CRf0tAKsUldoiXqxqA/w3qU/nmJytQP+T5bbY6uzb1f5x7btH7FdVyMH6M+ASuPeozfg0pjaXu5qm3oUKPSI7WNXCMPJhJf6c1nW9PEV6yQ7JlYw/TEG78XxpiSLLSrZRKb81ob0488eTF0Rfqo2i1IMBesk8amhKZpgMMT4htglgE9KIzsEQ4CjIgegM5qIJYYB5Z0OckndRjoMX1FOYzQETjaMUT4C9LMXaJd8qDZlkXjnfhB7AxBHw7yKelyQuSuFTpaRRegrmyfISLc9WbUx3mlrV7lyZeouYmnx5PaUh+v10OWKbekwMgNKCFJknm9SDfLlkYpFNahtfq5Kw8U3qUfFyDaUbXef3mA9DajnpBlIrfuxHUk9XSO0upFJrGe3TFUa7CznZ+6ezn13R2VKGl0USTgwVXvQYkVcqjIh4CFkyChhflnhPlCsIAtqrXJYruay4Iwo3FEl+8L01XlGCioIYgav4jOBJEqMgFTEVzgtkFYUKV2F/7LMpXhchzkIUChKr3hpDrB33RwOp74y5fo/cQRvGEIvmpmPG5M6a6XTZ/sJN3HorMZLuwupB7KobPPTBX3NoNEhE87/ntshPX78tAq9vQ2sPo5Msvbq7VhuW99VIgAL0jPES4impHMN2Ihl3efCd4gSUR/wTbJAQkuvnl+PX3B+ZLT9AA/7IdiTXSOjbXyEBli3DJoBUSjPrVbu/crSuklsk7Wv5bnOXZKutN5qOCUCnbnGj5F2YXIhDVfUUgeEVjyAQdoSq4gpTkcEFqEhBSJVvh8D/uD0Cq3J7rq68KxWoJ8V9oTV004OplxyybcGOcZOwse6pyR8srHo8Mp
1DYHZ9wW3onLlojc3RmDd6gDr+gDP5si02F/0zYzEW+qO+0HfgHYLIhiEYiTuynL2x27DHwKTeiL6tMuh9A+FziRgUR6d4GSjLxxzTl646ACQNv5rQQZblv6cHEGKL8pIkBNQsL0omBVqLsoC4coiLUepHXhRHBcTGZ9e3RiAnAz8NMCM++XnpZn3TcrQd2jLbfbpn2d904CftNDUaQEVqWr7eaVfNOrkEQq09pSVVaRsnKUi0DCOY1ubrq9IrbnV71+p0jjv71Z55WXW2NcOsms6xtXcMnz/W+i+rzlwgsYIcYqaCPJ4RWcgBVFn1GKwIKqtwqsCz/ncdKsd+zV86VO6V4sAdhO8g/MeHMM+rIe+TG+OeyDGiDwktYkWf8VSVq4ghL7BicAfhOwj/P4ZwKCOskvvGCgoURgyQyCicX2FYFPisKEqBLEvfgfBTgPCS1yo7vHKH4TsM/+gYZoNAxaIgAIZ5lRErQcioISsyHhIUlvXEkGPDt8HwVXb2s7fKztbWyhig8zrJdqtLbe/QnQTF5DIl5K8X5xiUZROKTtfaeu2bsrR22eXz7Lf/+u/w/PWkPKirTF5cVt6UHimlDOE1Yu/0q4Kk+/64NKRCymZ0LZsUpGggyCwpMCi1LIuDbJaWhQSOyPtL+MC3//JvdGmcyzTJzRIvwnT7sE1zZK7V81C6B0ZeudGOShnAnkRaUi34Hnu+KeF+QuYmdm1q1Ta8cm1XUVBCWUYKw7EepNyiLzFqgDnGD8Mg9Cuij9Aau3LErtIOJ792fkRR/weTySE2OjYAAA=='
print(s1 == s2) # True
print(base64.b64decode(s1))
``` | You can use triple quotes like so :
```
my_var = """My text with quotes ' " is stored in a variable this way"""
```
You could also use ''' instead of """ if you prefer.
```
my_var = '''My text with quotes ' " is also stored in a variable this way'''
```
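If triple quotes aren't wanted, escaping the quote characters with a backslash also works; a quick illustrative sketch:

```python
# Backslash-escaping lets an ordinary quoted string contain both quote characters
my_var = "My text with quotes ' \" is stored via escaping"
print(my_var)  # My text with quotes ' " is stored via escaping
```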
See : <https://docs.python.org/3/tutorial/introduction.html#strings> | 15,917 |
28,654,590 | Our security team asked me not to submit `plain text` passwords from my login page, even though we use HTTPS. So I thought I needed to do client-side encryption before submitting; I searched for a solution and decided to implement [jCryption](http://www.jcryption.org/).
However the example presented there is PHP/python, after a few google found [this](https://github.com/JakeJP/jCryption.NET). I did whatever was explained in the link but I don't know how to retrieve form data that user submitted in form.

I only see a `key` returned in my login postback action, and the `LoginModel` that should contain the username and password is null.
**Login.cshtml**
```
@model Portal.Model.Membership.LoginModel
@using jCryption
@{
Layout = null;
jCryption.HandleRequest(Request);
}
<html>
<head>
<script src="~/Assets/global/plugins/jquery.min.js" type="text/javascript"></script>
<script src="~/Assets/global/plugins/jquery-migrate.min.js" type="text/javascript"></script>
@jCryption.RenderScriptFor("form", src: @Url.Content("~/Assets/admin/scripts/jquery.jcryption.3.1.0.js"))
</head>
<body>
@using (Html.BeginForm(null, null, FormMethod.Post, new { @autocomplete = "off" }))
{
<div class="form-body">
<div class="form-group">
@Html.LabelFor(x => x.Username, new { @class = "placeholder" })
@Html.TextBoxFor(x => x.Username, new { @class = "form-input", autocomplete = "off" })
<span></span>
</div>
<div class="form-group">
@Html.LabelFor(x => x.Password, new { @class = "placeholder" })
@Html.PasswordFor(x => x.Password, new { @class = "form-input", autocomplete = "off" })
<span></span>
</div>
</div>
<div class="form-group">
<button id="btnLogin">Login</button>
</div>
}
</body>
<!-- END BODY -->
</html>
```
**Update**
I put a breakpoint on the login POST action and it is hit twice, once with `key` and once with `jCryption`:
 | 2015/02/22 | [
"https://Stackoverflow.com/questions/28654590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/735839/"
] | Your understanding about the second function is correct.
You may want to store actual nodes in the `edges` slot instead of node numbers. Then, instead of binding local variables to the node list inside of the two nodes that you want to connect, though, you can bind them to the nodes themselves, which would also look better than the repeated invocations of `nth` inside of the `setf` forms. You could then also directly operate with the nodes when you access `edges` instead of having to perform an extra lookup.
To improve the style of the first function, I'd suggest two things:
Use `push` instead of `(setf ... (cons thing ...))`
===================================================
`slot-value` is an accessor, and as such, it can be used as a place. `setf` is one way to change the value of a place, but Common Lisp defines other operations on places. The pattern that you are using here is implemented in the macro [`push`](http://www.lispworks.com/documentation/lw51/CLHS/Body/m_push.htm). By using it, you can simplify your expressions significantly:
```
(push end-node (slot-value (nth begin-node node-list) 'edges))
```
Define an accessor for edges instead of using `slot-value`
==========================================================
`slot-value` should be used rarely, and as a low-level mechanism, because it is verbose and less flexible than using a named accessor. `slot-value` also puts the important part of the access, the name of the slot, to the end of the expression, which often makes the code harder to read. In your case, I would name the accessor `edges` in the class definition:
```
(edges :initform nil :accessor edges)
```
That would make your first version more readable:
```
(push end-node (edges (nth begin-node node-list)))
``` | Instead of:
```
(setf (slot-value (nth begin-node node-list) 'edges)
(cons end-node (slot-value (nth begin-node node-list) 'edges)))
```
You can write:
```
(push end-node (slot-value (nth begin-node node-list) 'edges))
```
Why is the following not working as expected?
```
(let ((begin-node-lst (slot-value (nth begin-node node-list) 'edges))
(end-node-lst (slot-value (nth end-node node-list) 'edges)))
(setf begin-node-lst (cons end-node begin-node-lst))
(setf end-node-lst (cons begin-node end-node-lst)))
```
You write: *attempts to solve the double lookup by binding the location*.
That does not work. You can't bind locations; you can only bind values. `LET` binds the values of forms to variables.
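The same distinction shows up in most languages with mutable containers. As a rough analogy in Python (illustrative only; the names are invented for the example), rebinding a local variable that holds the value does nothing to the original place, while mutating through the place does:

```python
node = {"edges": [1, 2]}

edges = node["edges"]       # binds the *value* (the list object) to a local name
edges = [0] + edges         # rebinds the local name; node["edges"] is untouched
print(node["edges"])        # [1, 2]

node["edges"].insert(0, 0)  # going through the place itself does mutate it
print(node["edges"])        # [0, 1, 2]
```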
In Common Lisp there is the idea of a *place*. Many side-effect macros work with places: `SETF` and `PUSH` are examples. A place is only the source text of an accessing form, not a real first-class object.
Examples for places:
* `foo` as a variable
* `(aref foo 10)` as an array access
* `(slot-value object 'foo)` as a slot access
* `(slot-value (find-object *somewhere* 'foo) 'bar)` as a slot access...
Macros like `SETF` find out at macroexpansion time, based on the source of the accessing form, what setting form to generate. They can't look through bindings to see where a bound value originally came from.
In this case one would usually retrieve the object (typically CLOS object or structure) from the data structure, keep a reference to the object and then change the slot value using `SLOT-VALUE` or `WITH-SLOTS`. Alternatively use an accessor.
```
(setf (slot-value person 'name) "Eva Lu Ator")
(setf (slot-value person 'group) :development)
```
would be
```
(with-slots (name group) person
(setf name "Eva Lu Ator"
group :development))
```
**General Advice**:
Also note the confusion in your function about what a `node` is: is it an object of type `node` or is it a number? If it is a number, I would name the variable `node-number`.
Avoid `NTH` and lists. If you need random access, use vectors.
Either use node objects directly (and not numbers for those) or use symbols for them: `node-123` and link the node symbol to the node object in some registry. You might want to use numbers only in some cases...
I would write code like this:
```
(defun connect-nodes (node-vector)
"Given a NODE-VECTOR, repeats for +MAX-EDGES+ amount of times to connect
nodes via randomly generated edges."
(loop repeat +max-edges+
for begin-node-number = (random +max-rooms+) and
end-node-number = (random +max-rooms+)
when (/= begin-node-number end-node-number) do
(let ((begin-node (aref node-vector begin-node-number))
(end-node (aref node-vector end-node-number)))
(push end-node (slot-value begin-node 'edges))
(push begin-node (slot-value end-node 'edges))))
node-vector)
``` | 15,918 |
40,712,568 | This python script returns a value of `90.0`:
```
import itertools
a=[12,345,1423,65,234]
b=[234,12,34,1,1,1]
c=[1,2,3,4]
def TestFunction(a, b, c):
    result = a + b/c
    return result
Params=itertools.product(a, b, c)
x = 2
print(TestFunction(*list(Params)[x]))
```
However, I would like to evaluate my function for a range of x as:
```
for x in range(5):
    print(TestFunction(*list(Params)[x]))
```
I expected this to return a sequence of values: `246.0`, `129.0`, `90.0`, `70.5`, `14.0`; however, I get:
>
> "IndexError: list index out of range."
>
>
>
Why does the function evaluate when `x` is defined explicitly but not when it is an iterator? | 2016/11/21 | [
"https://Stackoverflow.com/questions/40712568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7098896/"
] | ```
for x in range(5):
    print(TestFunction(*list(Params)[x]))
```
`Params` is an iterator. The first time through the loop, you consume it entirely by converting it to a list. Therefore on the second iteration there's nothing in it, converting it to a list yields `[]`, the empty list, and trying to get index 1 of that won't work.
Instead, convert the iterator to a list *outside* the loop.
```
params = list(Params)
for x in range(5):
    print(TestFunction(*params[x]))
``` | Because calling `list()` on the iterator exhausts the iterator. Thus it can be called once only:
```
>>> Params=itertools.product(a, b, c)
>>> Params
<itertools.product object at 0x7f5ed3da5870>
>>> list(Params)
[(12, 234, 1), (12, 234, 2)..., (234, 1, 4)]
>>> list(Params)
[]
```
You can see that the second call to `list(Params)` evaluates to an empty list, and attempting to subscript that raises an `IndexError`.
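The exhaustion is easy to reproduce in a small self-contained script (smaller inputs here, purely for illustration):

```python
import itertools

params = itertools.product([1, 2], [3, 4])
first = list(params)   # consumes the iterator completely
second = list(params)  # nothing is left on the second pass

print(first)   # [(1, 3), (1, 4), (2, 3), (2, 4)]
print(second)  # []
```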
You could store the values from the iterator in another variable:
```
l = list(Params)
```
and then access it as you want, accessing arbitrary items by subscript.
If you want to access the items sequentially, as per your for loop example, just call `next()` on the iterator:
```
for i in range(5):
    print(TestFunction(*next(Params)))
``` | 15,919 |
68,650,493 | I have some experience starting up Apache Airflow, but now I get an error when I run the `airflow db init` command. The error is below. I am running Airflow in a virtual env with Python 3.8. Any help would be appreciated. I am not sure I understand this error, as in the past I managed to init the DB without importing any `_cffi_backend` module.
The error:
```
airflow) airflow@26c8ed88c008:~/airflow$ airflow db init
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 47, in command
func = import_string(import_path)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 24, in <module>
from airflow.utils import cli as cli_utils, db
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 26, in <module>
from airflow.jobs.base_job import BaseJob # noqa: F401
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/__init__.py", line 19, in <module>
import airflow.jobs.backfill_job
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/backfill_job.py", line 29, in <module>
from airflow import models
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/__init__.py", line 20, in <module>
from airflow.models.baseoperator import BaseOperator, BaseOperatorLink
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 59, in <module>
from airflow.models.taskinstance import Context, TaskInstance, clear_task_instances
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 57, in <module>
from airflow.models.variable import Variable
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/variable.py", line 24, in <module>
from cryptography.fernet import InvalidToken as InvalidFernetToken
File "/usr/lib/python3/dist-packages/cryptography/fernet.py", line 17, in <module>
from cryptography.hazmat.primitives import hashes, padding
File "/usr/lib/python3/dist-packages/cryptography/hazmat/primitives/padding.py", line 13, in <module>
from cryptography.hazmat.bindings._padding import lib
ModuleNotFoundError: No module named '_cffi_backend'
``` | 2021/08/04 | [
"https://Stackoverflow.com/questions/68650493",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8867871/"
] | Use `CROSS JOIN` to build all combinations and top up with a `LEFT JOIN`:
```
SELECT p.product_id, s.status, COUNT(t.any_not_null_column)
FROM (SELECT DISTINCT product_id FROM t) AS p
CROSS JOIN (SELECT DISTINCT status FROM t) AS s
LEFT JOIN t ON p.product_id = t.product_id AND s.status = t.status
GROUP BY p.product_id, s.status
``` | The following is a Postgres solution (a database I strongly recommend over MS Access). The idea is to generate all the rows and then use `left join` and `group by` to get the counts you want:
```
select p.product_id, s.status, count(d.product_id)
from (select distinct product_id from details) p cross join
(values ('ok'), ('bad')) s left join
details d
on d.product_id = p.product_id and d.status = s.status
group by p.product_id, s.status;
```
Note: You might have other tables that have the list of products and/or statuses that you want.
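The cross join plus left join pattern is easy to try out with Python's built-in `sqlite3` module (an illustrative sketch; the table name, columns, and sample rows are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE details (product_id INTEGER, status TEXT);
    INSERT INTO details VALUES (1, 'ok'), (1, 'ok'), (2, 'bad');
""")

rows = con.execute("""
    SELECT p.product_id, s.status, COUNT(d.product_id)
    FROM (SELECT DISTINCT product_id FROM details) AS p
    CROSS JOIN (SELECT DISTINCT status FROM details) AS s
    LEFT JOIN details AS d
           ON d.product_id = p.product_id AND d.status = s.status
    GROUP BY p.product_id, s.status
    ORDER BY p.product_id, s.status
""").fetchall()

for row in rows:
    print(row)  # every (product, status) pair appears, with 0 for missing ones
```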
An equivalent version in MS Access (which would also work in Postgres) might look like:
```
select p.product_id, s.status, count(d.product_id)
from ((select distinct product_id from details) p,
(select distinct status from details) s
) left join
details d
on d.product_id = p.product_id and
d.status = s.status
group by p.product_id, s.status;
``` | 15,922 |
42,881,650 | I have a list e.g. `l1 = [1,2,3,4]` and another list: `l2 = [1,2,3,4,5,6,7,1,2,3,4]`.
I would like to check if `l1` occurs as a (contiguous) sublist in `l2`, and if it does, I want to delete these elements from `l2` so that `l2` would become `[5,6,7,1,2,3,4]`, where indexes 0-3 have been removed.
Is there a pythonic way of doing this?
I tried this:
```
l1 = [1,2,3,4]
l2 = [1,2,3,4,5,6,7,1,2,3,4]
l3 = []
for i in l2:
    if i in l1:
        l3.append(i)
-> prints [5,6,7]
```
However I would like the output to be `[5,6,7,1,2,3,4]`. | 2017/03/19 | [
"https://Stackoverflow.com/questions/42881650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Well, here is a brute-force way. There are probably more efficient ways. If you expect to encounter a matching sublist early, the performance shouldn't be terrible.
```
>>> l1 = [1,2,3,4]
>>> l2 = [1,2,3,4,5,6,7,1,2,3,4]
>>> for i in range(0, len(l2), len(l1)):
... if l2[i:i+len(l1)] == l1:
... del l2[i:i+len(l1)]
... break
...
>>> l1
[1, 2, 3, 4]
>>> l2
[5, 6, 7, 1, 2, 3, 4]
>>>
```
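Note that stepping by `len(l1)` only checks aligned offsets; a slightly more general sketch that checks every possible starting index:

```python
l1 = [1, 2, 3, 4]
l2 = [1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4]

# Scan every offset at which a full copy of l1 could still fit
for i in range(len(l2) - len(l1) + 1):
    if l2[i:i + len(l1)] == l1:
        del l2[i:i + len(l1)]  # remove the first match in place
        break

print(l2)  # [5, 6, 7, 1, 2, 3, 4]
```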
Or if you don't want to modify `l2`, you could do the following:
```
>>> l1 = [1,2,3,4]
>>> l2 = [1,2,3,4,5,6,7,1,2,3,4]
>>> for i in range(0, len(l2), len(l1)):
... if l2[i:i+len(l1)] == l1:
... break
...
>>> l2[:i] + l2[i+len(l1):]
[5, 6, 7, 1, 2, 3, 4]
>>>
``` | I'm not proud of this, and it's not pythonic, but I thought it might be a bit of fun to write. I've annotated the code to make it a little more obvious what's happening.
```
>>> import re
>>> from ast import literal_eval
>>> l1 = [1,2,3,4]
>>> l2 = [1,2,3,4,5,6,7,1,2,3,4]
>>> literal_eval( # convert the string into a python collection
... re.sub( # use a regex as a replacement
... str(l1)[1:-1], # string of the list, without surrounding brackets
... '', # replace with empty
... str(l2)[1:-1], # string for replacement, again without brackets
... count=1 # only replace the first match
... ).strip(',') # strip any preceding or trailing commas
... .strip() # strip any preceding or trailing whitespace
... )
(5, 6, 7, 1, 2, 3, 4)
```
The output here is a tuple, but you could wrap it in `list()` if that's what you really want. Again, I'm not proud of this, but it works at least for this simple case. Juanpa's solution is much more preferable to this. | 15,923 |
42,890,951 | I have Anaconda installed on my Mac. I am trying to install python-igraph.
I tried the following commands to install it:
```
$ brew install igraph
$ pip install python-igraph
```
My python setup:
```
Python 2.7.13 |Anaconda custom (x86_64)| (default, Dec 20 2016, 23:05:08)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin`
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
```
But when I try to import `igraph`, I get the following error:
```
>>> import igraph
```
>
> Traceback (most recent call last):
>
>
> File "", line 1, in
>
>
> File
"/Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_\_init\_\_.py",
> line 34, in
>
>
>
> ```
> from igraph._igraph import *
>
> ```
>
> ImportError:
> dlopen(/Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so,
> 2): Symbol not found: \_iconv
>
>
> Referenced from:
> /Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so
>
>
> Expected in: flat namespace
>
>
> in
> /Users/arefindk/anaconda2/lib/python2.7/site-packages/igraph/\_igraph.so
>
>
>
Now, I have tried to follow all the steps that are provided in this [answer](https://stackoverflow.com/questions/30505490/error-importing-igraph).
My hunch is, it might be a problem related to Anaconda because my friend successfully installed and imported python-igraph using the above commands and he doesn't have Anaconda installed.
How can I both solve this problem and keep Anaconda on my Mac? | 2017/03/19 | [
"https://Stackoverflow.com/questions/42890951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2529269/"
] | I found exactly what I was looking for, [SwipeCellKit](https://github.com/jerkoch/SwipeCellKit), by jerkoch. This library performs the same exact actions as the stock iOS Mail app does when swiping to the left. No need to deal with different `UIViews` and `UIButtons`.
To use, simply conform to the `SwipeTableViewCellDelegate`, and use it in `editActionsForRowAt` like so:
```
func tableView(_ tableView: UITableView, editActionsForRowAt indexPath: IndexPath, for orientation: SwipeActionsOrientation) -> [SwipeAction]? {
guard orientation == .right else { return nil }
let deleteAction = SwipeAction(style: .destructive, title: "Delete") { action, indexPath in
// handle action by updating model with deletion
}
// customize the action appearance
deleteAction.image = UIImage(named: "delete")
return [deleteAction]
}
```
Make sure to change the cell's class to `SwipeTableViewCell` and set its delegate like so: `cell.delegate = self`. | I would take a look at the [SWTableViewCell](https://github.com/CEWendel/SWTableViewCell) by CEWendel. It looks like it has exactly what you're looking for. | 15,924 |
49,132,008 | I have the following method:
```
public void callPython() throws IOException {
Runtime rt = Runtime.getRuntime();
Process pr = rt.exec("python -c \"from test import read_and_show; read_and_show()\" src/main/python");
BufferedReader bfr = new BufferedReader(new InputStreamReader(pr.getInputStream()));
BufferedReader bfre = new BufferedReader(new InputStreamReader(pr.getErrorStream()));
String outputStr = "";
while ((outputStr = bfr.readLine()) != null) {
System.out.println(outputStr);
}
while ((outputStr = bfre.readLine()) != null) {
System.out.println(outputStr);
}
}
```
and in the Python file, the following code:
```
import os
from stat import *
def read_and_show():
    print('worked!')
```
When I call this in the terminal, everything works correctly (I `cd` to this directory first):
```
MacBook-Pro-Nikita-2:python NG$ python -c "from test import read_and_show; read_and_show()"
worked!
```
When I run this from my Java code, it returns an error:
```
File "<string>", line 1
"from
^
SyntaxError: EOL while scanning string literal
```
What am I doing wrong?
P.S.: I need to run a Python method/class/file to read, parse, and show graphical data, and for this Java needs to run a single Python method (def). | 2018/03/06 | [
"https://Stackoverflow.com/questions/49132008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5417750/"
] | When executing other programs from Java, I've found it's easier to keep the Java side as simple as possible and instead execute a batch file. (The underlying problem with your call is that `Runtime.exec(String)` tokenizes the command string on whitespace without understanding quotes, so the quoted `-c "..."` argument gets split into separate tokens.)
```
Runtime.getRuntime().exec("chrome.exe www.google.com");
```
Would instead become
```
Runtime.getRuntime().exec("openChrome.bat");
```
and openChrome.bat:
```
chrome.exe www.google.com
```
This makes it easier to test the command without recompiling, but may get complicated if you need to pass variables as arguments
To use shell built-ins like `echo` and `cd`, the batch file works wonders *(i.e. `echo test | program`)*
---
The major downside is you will have a floating .bat file next to your code
If packaging to a .jar, you may need to first [copy the .bat file out of the .jar](https://stackoverflow.com/a/44077426/5475891) before executing | You're missing the shebang statement that states where the python interpreter is. It should be line #1
```
#!/usr/bin/python
``` | 15,927 |
33,560,877 | I would like to convert my list (`items`) from strings to ints so that I can do calculations on the numbers in it. However, Python raises an "invalid literal for int() with base 10" error, and I have no idea what's wrong. (The list is on one line only, comma-separated, with no spaces before or after the commas.)
list:
```
51,2,2,49,15,2,1,14
```
Code:
```
def parseCSV(file):
    file_open = open(file)
    print (file_open.read())
    with open(file) as rd:
        items = rd.readlines()
        print (items)
        items = list(map(int, items))
        print (items)

def main():
    file = str(input("Please enter the full name of the desired file (with extension) at the prompt below: \n"))
    parseCSV(file)

main()
``` | 2015/11/06 | [
"https://Stackoverflow.com/questions/33560877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5532342/"
] | I modified your code accordingly; please give it a try.
```
def main():
    file = str(input("Please enter the full name of the desired file (with extension) at the prompt below: \n"))
    parseCSV(file)

def parseCSV(file):
    file_open = open(file)
    print(file_open.read())
    with open(file) as rd:
        lines = rd.readlines()
        for line in lines:
            items = list(map(int, line.split(',')))
            print(items)

main()
``` | To answer your question, *'What's wrong with it?'*:
You are reading your whole CSV into a list containing one item that is the whole file as a long string. Even if your CSV only contains integers, the way you are parsing all of the lines will not work. | 15,929 |
22,023,184 | I tried to subclass NSThread in order to operate on a thread with some data. I want to simulate Python's join(); according to the doc:
>
> join(): Wait until the thread terminates. This blocks the calling thread until
> the thread whose join() method is called terminates
>
>
>
So I thought using performSelector: onThread: withObject: waitUntilDone:YES would be fine, but it does not work. It just does nothing and never exits, running forever.
This is my code:
```
@interface MyClass : NSThread
@property (strong, nonatomic) NSMutableArray *msgQueue;
@property (assign, nonatomic) BOOL stop;
@end
@implementation MyClass
-(id)init
{
self = [super init];
if (self) {
self.msgQueue = [NSMutableArray array];
self.stop = NO;
[self start];
return self;
}
return nil;
}
-(void)myRun
{
while (!self.stop) {
NSLock *arrayLock = [[NSLock alloc] init];
[arrayLock lock];
NSArray *message = [self.msgQueue firstObject];
[self.msgQueue removeObjectAtIndex:0];
[arrayLock unlock];
NSLog(@"%@", message);
if ([message[0] isEqualToString:@"terminate"]) {
self.stop = YES;
}
}
}
-(void)join
{
[self performSelector:@selector(myRun) onThread:self withObject:nil waitUntilDone:YES];
}
@end
int main(int argc, const char * argv[])
{
@autoreleasepool {
MyClass *a = [[MyClass alloc] init];
[a.msgQueue addObject:@[@"terminate",@"hello world"]];
//[a myRun]; // this line works so the myRun method should be good,
[a join]; // but I want this line work, and I have no idea what the problem is.
}
return 0;
}
``` | 2014/02/25 | [
"https://Stackoverflow.com/questions/22023184",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2014948/"
] | From Apple's documentation on `performSelector:onThread:withObject:waitUntilDone:`:
>
> This method queues the message on the run loop of the target thread using the default run loop modes—that is, the modes associated with the NSRunLoopCommonModes constant. As part of its normal run loop processing, the target thread dequeues the message (assuming it is running in one of the default run loop modes) and invokes the desired method.
>
>
>
You probably never started a run loop on the thread, so it will never execute your `myRun` method, since it has no run loop to execute on.
As for Merlevede's answer, `myRun` is not enqueued on the same thread as `join`. `join` was called on your main thread, whereas you're trying to enqueue `myRun` on your secondary thread. So his theory is incorrect. Also from Apple's documentation regarding the wait parameter:
>
> If the current thread and target thread are the same, and you specify YES for this parameter, the selector is performed immediately on the current thread. If you specify NO, this method queues the message on the thread’s run loop and returns, just like it does for other threads. The current thread must then dequeue and process the message when it has an opportunity to do so.
>
>
>
So even if it was on the same thread, it wouldn't be stuck waiting, it would just execute it right away as if you had directly called the method instead of using `performSelector:` in the first place. | You're basically in a deadlock condition.
```
-(void)join
{
[self performSelector:@selector(myRun) onThread:self withObject:nil waitUntilDone:YES];
}
```
`join` is waiting for `myRun` to finish (waitUntilDone flag), but `myRun` is enqueued on the same thread as `join`, so it's also waiting for `join` to finish.
For `performSelector:onThread:withObject:waitUntilDone:` you would never pass the current thread as the thread parameter. | 15,932 |
56,143,264 | I upgraded pip, but after the upgrade I get a syntax error.
I tried installing Python 3.x, but that did not fix it.
Traceback (most recent call last):
```
File "/usr/bin/pip", line 7, in <module>
from pip._internal import main
File "/usr/lib/python2.6/site-packages/pip/_internal/__init__.py", line 19, in <module>
from pip._vendor.urllib3.exceptions import DependencyWarning
File "/usr/lib/python2.6/site-packages/pip/_vendor/urllib3/__init__.py", line 8, in <module>
from .connectionpool import (
File "/usr/lib/python2.6/site-packages/pip/_vendor/urllib3/connectionpool.py", line 92
_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK}
```
I tried to install pyopenssl.
----------UPDATE----------------
I tried to install python36u and got some errors:
```
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5(XZ_5.0)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm_compat.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libcrypto.so.10(OPENSSL_1.0.2)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libffi.so.6()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libc.so.6(GLIBC_2.17)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
``` | 2019/05/15 | [
"https://Stackoverflow.com/questions/56143264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7571523/"
] | Python 2.6 is not supported anymore; try changing your `PATH` to point to the new Python and pip.
Check this: [Python ENV](https://www.tutorialspoint.com/python/python_environment.htm)
Alternatively, you can use the following:
```
/path/to/pip3 install ....
/path/to/python3 <NAME_OF_THE_SCRIPT>
``` | ----------UPDATE----------------
I tried to install python36u and got some errors:
```
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5(XZ_5.0)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm_compat.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libgdbm.so.4()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libcrypto.so.10(OPENSSL_1.0.2)(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: liblzma.so.5()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libffi.so.6()(64bit)
Error: Package: python36u-libs-3.6.8-1.el7.ius.x86_64 (ius)
Requires: libc.so.6(GLIBC_2.17)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
``` | 15,933 |
43,773,802 | Using python and pandas I can easily construct a sparse DataFrame from a list of dictionary objects. The following code snippet shows how this can be done in pandas:
```
In [1]: import pandas as pd; (pd.DataFrame([{'a':1, 'b':10},
{'d':99, 'c':1},
{'b':1, 'd': 4}])
.fillna(0))
Out[1]:
a b c d
0 1.0 10.0 0.0 0.0
1 0.0 0.0 1.0 99.0
2 0.0 1.0 0.0 4.0
```
What if I want to easily reproduce this behavior in R? Let's suppose that I have the following variable:
```
values <- list(list(a = 1, b = 10),
list(d = 99, c = 1),
list(b = 1, d = 4))
```
Then, how the same result achieved in python can be obtained using R? | 2017/05/04 | [
"https://Stackoverflow.com/questions/43773802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4862483/"
] | We can use `melt` with `xtabs` in `R`
```
library(reshape2)
xtabs(value~L1 + L2, melt(values))
# L2
#L1 a b c d
# 1 1 10 0 0
# 2 0 0 1 99
# 3 0 1 0 4
``` | Here's a solution with `plyr` package:
```
library(plyr)   # for ldply
library(dplyr)  # for %>% and mutate_each
ldply(values, data.frame)
a b d c
1 1 10 NA NA
2 NA NA 99 1
3 NA 1 4 NA
# mutate each to replace NA with 0
ldply(values, data.frame) %>%
mutate_each(funs(replace(., is.na(.), 0)))
a b d c
1 1 10 0 0
2 0 0 99 1
3 0 1 4 0
``` | 15,935 |
18,942,318 | I try to upload the data into datastore use remote\_api at my dev server, but I got the following error, the SDK version is 1.8.4. Is there anyone has the same error? It looks like the new datastore version 4 cause this?
```
Traceback (most recent call last):
File "D:\python-lib\google_appengine\appcfg.py", line 184, in <module>
run_file(__file__, globals())
File "D:\python-lib\google_appengine\appcfg.py", line 180, in run_file
execfile(script_path, globals_)
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 4
403, in <module>
main(sys.argv)
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 4
394, in main
result = AppCfgApp(argv).Run()
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 2
469, in Run
self.action(self)
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 4
103, in __call__
return method()
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 3
915, in PerformUpload
run_fn(args)
File "D:\python-lib\google_appengine\google\appengine\tools\appcfg.py", line 3
806, in RunBulkloader
sys.exit(bulkloader.Run(arg_dict))
File "D:\python-lib\google_appengine\google\appengine\tools\bulkloader.py", li
ne 4419, in Run
return _PerformBulkload(arg_dict)
File "D:\python-lib\google_appengine\google\appengine\tools\bulkloader.py", li
ne 4284, in _PerformBulkload
loader.finalize()
File "D:\python-lib\google_appengine\google\appengine\ext\bulkload\bulkloader_
config.py", line 382, in finalize
self.reserve_keys(self.keys_to_reserve)
File "D:\python-lib\google_appengine\google\appengine\tools\bulkloader.py", li
ne 1228, in ReserveKeys
datastore._GetConnection()._reserve_keys(ConvertKeys(keys))
File "D:\python-lib\google_appengine\google\appengine\datastore\datastore_rpc.
py", line 1880, in _reserve_keys
self._async_reserve_keys(None, keys).get_result()
File "D:\python-lib\google_appengine\google\appengine\datastore\datastore_rpc.
py", line 838, in get_result
results = self.__rpcs[0].get_result()
File "D:\python-lib\google_appengine\google\appengine\api\apiproxy_stub_map.py
", line 612, in get_result
return self.__get_result_hook(self)
File "D:\python-lib\google_appengine\google\appengine\datastore\datastore_rpc.
py", line 1921, in __reserve_keys_hook
self.check_rpc_success(rpc)
File "D:\python-lib\google_appengine\google\appengine\datastore\datastore_rpc.
py", line 1234, in check_rpc_success
rpc.check_success()
File "D:\python-lib\google_appengine\google\appengine\api\apiproxy_stub_map.py
", line 578, in check_success
self.__rpc.CheckSuccess()
File "D:\python-lib\google_appengine\google\appengine\api\apiproxy_rpc.py", li
ne 156, in _WaitImpl
self.request, self.response)
File "D:\python-lib\google_appengine\google\appengine\ext\remote_api\remote_ap
i_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "D:\python-lib\google_appengine\google\appengine\ext\remote_api\remote_ap
i_stub.py", line 234, in _MakeRealSyncCall
raise pickle.loads(response_pb.exception())
AssertionError: No api proxy found for service "datastore_v4"
``` | 2013/09/22 | [
"https://Stackoverflow.com/questions/18942318",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2345755/"
] | What about this?
```
MyBase * base = dynamic_cast<MyBase *>(clicked_shape);
base->SetText("too");
```
You might want to check for `base` being null, if the Shape you get isn't actually one of yours.
`MyBase` needs at least one virtual function for this - the destructor would do. | Shape class is a base class hence it provides an interface that can be overridden. E.g. there could be `draw()` method which is called to draw a shape. That one would be a good candidate to be overridden in your new class with text box. For example:
```
class SquareWithText: public Square {
void draw() {
Square::draw(); // call base method
... // add text box here
}
};
```
There is no multiple inheritance here, which gives you a simpler solution.
If single inheritance doesn't work for you, you may go for a more complex solution.
```
class ShapeWithTextBox : public Shape {
void draw() {
... // add text box here
}
}
```
Now you will write your new classes as:
```
class MySquare : public Square, public ShapeWithTextBox
{
void draw() {
Square::draw();
ShapeWithTextBox::draw();
}
};
```
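The same call-the-base-then-extend pattern translates to other languages as well; here is a hypothetical Python sketch of the single-inheritance version (the class names mirror the C++ above, but the code itself is illustrative, not from the original answer):

```python
class Square:
    def draw(self):
        return ["square"]          # base drawing behavior

class SquareWithText(Square):
    def draw(self):
        parts = super().draw()     # call the base method first
        parts.append("text box")   # then add the extra behavior
        return parts

print(SquareWithText().draw())     # ['square', 'text box']
```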
It is not necessary to derive `ShapeWithTextBox` from `Shape`. It depends on the interface of `Shape class`. | 15,940 |
28,962,266 | I had drawn up a UI using Qt Designer but found out that there are no parameters for me to set QLineEdit inputs to be uppercase.
After doing some online searching, I have only seen a very few handful of results that cater to my needs, however all are coded in Qt. Example, this [link](http://www.qtforum.org/article/16178/making-all-qlineedit-input-uppercase.html)
And so, are there ways for me to do this in the pythonic way? | 2015/03/10 | [
"https://Stackoverflow.com/questions/28962266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3212246/"
] | Try this,
I believe this serves your purpose. I won't call it much pythonic. More like PyQt override.
#minor code edit
```
from PyQt4 import QtGui
import sys
#===============================================================================
# MyEditableTextBox-
#===============================================================================
class MyEditableTextBox(QtGui.QLineEdit):
#|-----------------------------------------------------------------------------|
# Constructor
#|-----------------------------------------------------------------------------|
def __init__(self,*args):
#*args to set parent
QtGui.QLineEdit.__init__(self,*args)
#|-----------------------------------------------------------------------------|
# focusOutEvent :-
#|-----------------------------------------------------------------------------|
def focusOutEvent(self, *args, **kwargs):
text = self.text()
self.setText(text.__str__().upper())
return QtGui.QLineEdit.focusOutEvent(self, *args, **kwargs)
#|--------------------------End of focusOutEvent--------------------------------|
#|-----------------------------------------------------------------------------|
# keyPressEvent
#|-----------------------------------------------------------------------------|
def keyPressEvent(self, event):
if not self.hasSelectedText():
pretext = self.text()
self.setText(pretext.__str__().upper())
return QtGui.QLineEdit.keyPressEvent(self, event)
#|--------------------End of keyPressEvent-------------------------------------|
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
w = QtGui.QWidget()
lay = QtGui.QHBoxLayout()
w.setLayout(lay)
le1 = MyEditableTextBox()
lay.addWidget(le1)
le2 = MyEditableTextBox()
lay.addWidget(le2)
w.show()
sys.exit(app.exec_())
``` | Hey, I know I am kind of late, but I hope this might help someone else like me who spent some time searching for this.
**Mycase:**
I was trying to capitalize only the first letter, and this is what I ended up with; it worked (I'm just a beginner in Python, so if you can make this more pythonic please let me know).
In the defining function: `line_edit_object.textChanged.connect(lambda:auto_capital(line_edit_object))`
the function auto\_capital:
```
def auto_capital(line_edit_object):
    edit = line_edit_object
    text = edit.text()
    edit.setText(text.title())  # setText() is the setter; text() only reads the value
```
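One caveat with the snippet above: `str.title()` capitalizes the first letter of *every* word, while `str.capitalize()` uppercases only the very first character. A quick pure-Python sketch of the difference (no Qt needed):

```python
text = "hello world example"

# title() capitalizes the first letter of every word
assert text.title() == "Hello World Example"

# capitalize() touches only the first character of the string
assert text.capitalize() == "Hello world example"

print(text.title(), "|", text.capitalize())
```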
this should fix the issue. Feel free to make it more pythonic. | 15,941 |
42,683,602 | I am writing a new Python application that I intend to distribute to several colleagues. Instead of my normal carefree attitude of just having everything self contained and run inside a folder in my home directory, this time I would like to broaden my horizon and actually try to utilize the Linux directory structure as it was intended (at least somewhat). Can you please read my breakdown below and comment and or make recommendations if this is not correct.
Lets call the application "narf"
```
/usr/narf - Install location for the actual python file(s).
/usr/bin/narf - Either a softlink to the main python file above or use this location instead.
/etc/narf - Any configuration files for app narf.
/var/log/narf - Any log files for app narf.
/usr/lib - Any required libraries for app narf.
/run/narf - Any persistent (across reboot), but still temp files for app narf.
/tmp/narf - Very temp files for app narf that go away with reboot
```
I assume I should stick to using /usr/X (for example /usr/bin instead of just /bin) since my application is not system critical and a mere addon.
I currently use Ubuntu 16 LTS, however part of this is intended as a way to try to standardize my app for any popular Linux distro.
Thanks for the help.
**\* UPDATE \***
I think I see the answer to at least part of my question. Looking in /usr, I now see that it is a pretty barebones directory and almost akin to user level root directory (ie has bin, lib, local, sbin, etc. but thats pretty much all). This leads me to believe my application should absolutely NOT live in /usr, and ONLY in /usr/bin. | 2017/03/08 | [
"https://Stackoverflow.com/questions/42683602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2272450/"
] | You could use the `map()` feature of the stream to convert each `User` instance in your list to a `UserWithAge` instance.
```
List<User> userList = ... // your list
List<UserWithAge> usersWithAgeList = userList.stream()
.map(user -> {
// create UserWithAge instance and copy user name
UserWithAge userWithAge = new UserWithAge();
userWithAge.setName(user.getName());
userWithAge.setAge(27);
return userWithAge;
})
.collect(Collectors.toList()); // return the UserWithAge's as a list
``` | While you could also do this with `forEach` and an external list, it is not recommended:
```
List<UserWithAge> userWithAgeList = new ArrayList<UserWithAge>();
userList.stream().forEach(user -> {
UserWithAge userWithAge = new UserWithAge();
userWithAge.setName(user.getName());
userWithAge.setAge(27);
userWithAgeList.add(userWithAge);
});
``` | 15,946 |
70,600,154 | How can I implement a selection based on selecting the first 3n+1 elements from a tag in it's path? For example, let's say I have the following xpath:
```
//div[@class='ResultsSectionContainer-sc-gdhf14-0 kteggz']/div[@class='Wrapper-sc-11673k2-0 gIBPSk']//div/div/a
```
Taken from this url:
```
https://www.jobsite.co.uk/jobs?page=2&action=paging_next.html
```
Which selects the @href from each `a` tag. This website is dynamic, so I would need something that can generalise the path. However, out of every three tags there are two I do not need. Hence, I require all those hrefs belonging to 3n+1 selections, where n = 0, 1, 2, ...
Is there a way to implement this?
For example, the total number of hrefs I can select is 75, therefore I'm looking for 25 elements as 75/3 = 25, as we divide by the leading coefficient.
Can a selection like this be done in python? Otherwise, given a general selection:
```
a*n+c
```
How can I implement a selection like this when selecting for xpaths?
Although, if an easier alternative is available in selecting those href's from the url is also appreciated as I can only think of this way. Though, I presume this method can generalise more over different urls.
If there's a CSS-selector version, that would also be appreciated! | 2022/01/05 | [
"https://Stackoverflow.com/questions/70600154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15675231/"
] | The XPath filter predicate `[position() mod 3 = 1]` selects all elements whose 1-based position is 3n+1 for some integer n. | All you need here is to use a **correct** locator.
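Once the elements are collected in Python, the same 3n+1 selection that the `position() mod 3 = 1` predicate performs can be done with list slicing. A small self-contained sketch with placeholder URLs (the real list would come from the page):

```python
# 75 placeholder hrefs standing in for the elements found on the page
hrefs = [f"https://example.com/job/{i}" for i in range(75)]

# XPath positions are 1-based; slicing with step 3 keeps positions 1, 4, 7, ...
selected = hrefs[::3]

print(len(selected))  # 75 / 3 = 25 elements
```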
I guess you are trying to get all the job links?
If so, instead of this
`//div[@class='ResultsSectionContainer-sc-gdhf14-0 kteggz']/div[@class='Wrapper-sc-11673k2-0 gIBPSk']//div/div/a`
very long, complex and fragile XPath you can use this XPath:
```py
//a[@data-at='job-item-title']
```
In case you still need to select specific elements matching some locator from the list, you can collect all these elements into a list with `driver.find_elements_by_whatever` and then iterate over this list in a loop with the condition you need. | 15,949 |
34,791,797 | I would like to know how to determine the precise Linux distribution I am on (excluding version numbers), from within a Python script and define a variable as equal to it. Now, I should clarify and say that I have seen these two questions:
* [Python: What OS am I running on?](https://stackoverflow.com/questions/1854/python-what-os-am-i-running-on)
* [How can I find the current OS in Python?](https://stackoverflow.com/questions/110362/how-can-i-find-the-current-os-in-python)
and neither one was helpful to me, as the first of these questions had answers that were very generalized and merely returned `posix` for all Linux distributions. The second question's answers weren't helpful as I sometimes operate on some more obscure distributions like Manjaro Linux and Sabayon Linux. The most applicable answer to the second question was `platform.linux_distribution()`, which on Manjaro returns:
```
('', '', '')
```
which as you can see is not helpful. Now I know a way I can get half-way to an acceptable answer, as:
```
from subprocess import call
call(["lsb_release", "-si"])
```
returns the output (on Manjaro Linux, of course):
```
ManjaroLinux
0
```
but defining a variable:
```
a=call(["lsb_release", "-si"])
```
gives an a with the value:
```
>>> a
0
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34791797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876983/"
] | Replace call with check\_output.
```
from subprocess import check_output
a = check_output(["lsb_release", "-si"])
``` | You can also try subprocess.check\_output.
Based on docs: "Run command with arguments and return its output as a byte string." Docs: <https://docs.python.org/2/library/subprocess.html>
Code:
```
a = subprocess.check_output(["lsb_release", "-si"])
```
In my case, output was:
```
'Ubuntu\n'
``` | 15,950 |
49,105,070 | I'm a Python newbie. I created a calculator program that accepts 2 numbers and a type of operation from the user. I already have working code for this, but I want to further simplify the code by exploring and using functions.
Here's the portion of the code:
```
def addition(num1,num2):
sum = num1 + num2
print('The sum is ', sum)
def subtraction(num1,num2):
sub = num1 - num2
print('The difference is ', sub)
def inputNumber():
num1 = float(input('Enter the first number: '))
num2 = float(input('Enter the second number: '))
return num1,num2
print('Enter the corresponding number to perform the operation:\n')
print('1 - addition')
print('2 - subtraction')
print('q - quit')
while True:
try:
operation = input('Select operation > ').lower()
if operation == 'q':
break
elif operation == '1':
addition(inputNumber())
elif operation == '2':
subtraction(inputNumber())
else:
print('Not valid. Try again.')
except:
print('Invalid!')
```
My problem is that after entering the two numbers it doesn't perform the operation. I think the problem is that the two input values aren't returned properly.
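For reference, `inputNumber()` returns a tuple, and `addition(inputNumber())` passes that whole tuple as a single argument; unpacking with `*` fixes the call. A minimal sketch, with `inputNumber()` stubbed to return fixed values instead of reading interactive input:

```python
def addition(num1, num2):
    result = num1 + num2
    print('The sum is ', result)
    return result

def inputNumber():
    # stand-in for the interactive version: returns two numbers as a tuple
    return 2.0, 3.0

# addition(inputNumber()) would fail: one tuple for two parameters.
# Unpacking the tuple with * passes the two numbers separately:
total = addition(*inputNumber())  # prints: The sum is  5.0
```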
Thanks | 2018/03/05 | [
"https://Stackoverflow.com/questions/49105070",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9404668/"
] | You should write something like this. The `??` operator means that if the string can't be converted to an `Int`, 0 will be assigned to `myInt` instead:
```
let myInt:Int = Int("1234") ?? 0
``` | You can do it like this by creating an extension of String:
```
extension String {
var toInt: Int {
return Int(self) ?? 0
}
}
```
and use it like this
```
let preparationTimeInt = preparationTime.toInt
``` | 15,960 |
52,349,669 | In [windows server 2012 R2 x64, python 3.7 64x]
```
pip install opencv-contrib-python
```
installed without any error .
and when I try to import it
```
import cv2
```
show me this error :
```
Traceback (most recent call last):
File "test.py", line 1, in <module>
import cv2
File "C:\Program Files\Python37\lib\site-packages\cv2\__init__.py", line 3, in
<module>
from .cv2 import *
ImportError: DLL load failed: The specified module could not be found.
```
and I see another topic about this error but their solution does not work for me :
- [DLL load failed error when importing cv2](https://stackoverflow.com/questions/43184887/dll-load-failed-error-when-importing-cv2)
- [Failure to import cv2 in python : DLL load failed](http://answers.opencv.org/question/69587/failure-to-import-cv2-in-python-dll-load-failed/)
- and a lot of pages in [here](https://stackoverflow.com/search?q=DLL%20load%20failed.)
Some of them say the problem occurs because of Anaconda, but I don't even have Anaconda. Many say the problem is the 32- vs 64-bit version; I tried both, with no luck. Many are about system variables...
Which `dll` failed to load? cv2? Why?
What I have tried so far:
* change python version : 3.5, 3.7, 2.7 in 86x and 64x (uninstall complete last version and delete all remain files in program files ... then install new one)
* try with another packages like : opencv-python, opencv-contrib, try install specific versions with .whl (in some topic i read about this so install **numpy-1.14.5+mkl-cp37-cp37m-win\_amd64.whl** and **opencv\_python-3.4.3+contrib-cp37-cp37m-win\_amd64.whl** but anything not change)
* import numpy before cv2 for every test
* installed Microsoft Visual C++ 2015 Redistributable Update 3 , 2008 , 2010 , 2017 for 64x
**UPDATE**
**NEXT 2 DAY**
I tried to track down the missing `dll`s based on *Dependency Walker*, as mentioned in the post below; I downloaded all the `dll`s and put them in System32. It's a good clue, but it did not solve the problem.
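As a hedged aside on checking which libraries the loader can actually resolve from Python itself: `ctypes.util.find_library` reports what the system loader can find (on Windows you would pass names such as `'mfplat'`; the portable sketch below uses `'m'` as a stand-in):

```python
from ctypes.util import find_library

# find_library returns the resolved library name/path,
# or None if the loader cannot locate it.
for name in ("m", "definitely_not_a_real_library"):
    print(name, "->", find_library(name))
```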
**DAY 4,5**
My next step was trying to compile OpenCV on my machine: I installed Visual Studio 2012 + CMake and, based on the [documentation](https://docs.opencv.org/3.4/d5/de5/tutorial_py_setup_in_windows.html), compiled two versions, 64-bit and 32-bit. Compiling had a lot of problems itself (like missing C libraries and needing to install *Visual C++ Redistributable for Visual Studio 2012* ...) and I resolved all of them. Then I had two successfully compiled versions, but both had the same missing-`dll` problem. The compile process is very slow and took two days from me. So, having failed at this mission :/ I searched for a new way...
**DAY 6**
my next step is to try to run OpenCV in my machine with docker ( I try but docker windows just install on windows 10 and windows server 2016) so I search for an alternative. I found Vagrant that work like docker but with VM (visual box) today my time spend for this articles, it's amazing, the concept of docker and vagrant is so Attractive, and I play with this like that I have new puzzle :)
first error show on ... and limitless errors go on :
**Day 7**
Vagrant tests failed too, because my Windows Server 2012 is itself a virtual machine (running in VirtualBox), and running VirtualBox inside another VirtualBox is impossible!
so I need new clues :
**Day 8-9**
try install vagrant with VMware , I download VMware, unfortunately this plugin is not free (79$) so I try to work with [pythonanywhere.com](http://pythonanywhere.com) that *'gives you access to machines with a full Python environment already installed.'* in this free account, you can't openurl with urllib , and more limit...
so I wanna try to install ubuntu in VMware my self... I read here results after work. it's Impossible with same reason vb (visual box) | 2018/09/15 | [
"https://Stackoverflow.com/questions/52349669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7407809/"
] | I faced a similar issue on Windows Server 2012 R2. After a lot of investigation I found that mfplat.dll was missing, which is related to the Windows Media Service.
Hence you have to manually install the features so that you get the DLLs related to the Windows Media Service.
1. Turn windows features on or off
2. Skip the roles screen and directly go to Feature screen
3. Select "Desktop Experience" under "User Interfaces and Infrastructure"
After this all required dll of opencv would be available. | I had the same problem on Windows Server 2012 R2 x64. I was creating executable file using PyInstaller and got error in runtime:
```
ImportError: DLL load failed: The specified module could not be found.
```
After installing "Visual C++ redistributable" 2015 and enabling "Media Foundation" feature my problem was resolved.
There is more informations in documentation: [OpenCV Documentation](https://pypi.org/project/opencv-python/) | 15,961 |
24,944,627 | I'm using the Canopy distribution and when I try to install pymatbridge using 'pip install pymatbridge' I get an error saying that pymatbridge does not work on win32. I've got the 64-bit version of Canopy so I don't understand what that means.
<http://arokem.github.io/python-matlab-bridge/>
```
Downloading/unpacking pymatbridge
Running setup.py (path:C:\Users\user\AppData\Local\Enthought\Canopy\User\build\pymatbridge\setup.py) egg_info for package pymatbridge
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "C:\Users\user\AppData\Local\Enthought\Canopy\User\build\pymatbridge\setup.py", line 25, in <module>
raise ValueError("pymatbridge does not work on win32")
ValueError: pymatbridge does not work on win32
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "C:\Users\user\AppData\Local\Enthought\Canopy\User\build\pymatbridge\setup.py", line 25, in <module>
raise ValueError("pymatbridge does not work on win32")
ValueError: pymatbridge does not work on win32
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in C:\Users\user\AppData\Local\Enthought\Canopy\User\build\pymatbridge
Storing debug log for failure in C:\Users\user\pip\pip.log
``` | 2014/07/24 | [
"https://Stackoverflow.com/questions/24944627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334059/"
] | I am the developer of this software. This should work now (since version 0.4), but I don't have a Windows machine to test this one. I have had help from Windows users in developing the patches to make this Windows-functional. Though, I am not always able to solve issues, I am happy to receive suggestions/complaints/praise through our github page:
<https://github.com/arokem/python-matlab-bridge/issues> | "Win32" in this context means Windows 32- or 64-bit, as distinct from Cygwin.
The developer of pymatbridge introduced this explicit restriction in May 2014:
<https://github.com/arokem/python-matlab-bridge/commit/a6fd3cc3adf5ef2b5e3d9b83a8050d783c76d48f>
I don't know why. Perhaps, like many small developers, he found the idiosyncrasies of Windows frustrating, and did not have the time to handle them, at least for now. ***[EDIT: See package developer comment below; this restriction was not intentional and has been removed.]***
Is it an option for you to run Matlab in a Linux VM on your Windows machine, or directly on Linux or Mac, in order to use this package? | 15,971 |
60,119,580 | I am building an HR app using Python with the Django framework. I am having an issue calculating the retirement year of an employee: for example, if an employee enters his/her date of birth, the system should calculate his/her retirement year or how many years remain until retirement. Staff retire at 60 years.
I am getting this error:
```
TypeError at /staffprofile/profile_entry/
unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
Request Method: POST
Request URL: http://0.0.0.0:8080/staffprofile/profile_entry/
Django Version: 1.8
Exception Type: TypeError
Exception Value:
unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
Exception Location: /home/bakceesay/djangoapps/NAO/venv/src/staffprofile/views.py in profile_entry, line 57
```
this is my code in views.py
```
from __future__ import unicode_literals
from django.shortcuts import get_object_or_404, render, redirect
from django.contrib.auth.models import User
from django.http import HttpResponse, HttpResponseRedirect # HttpResponse allows the get_absolute_url to work ## and HttpresponseRedirect redirects page after a process
from .models import *
from .forms import *
from django.contrib import messages
from datetime import datetime, timedelta
def profile_entry(request):
title = 'ADD YOUR PROFILE INFORMATION'
form = ProfileForm(request.POST or None, request.FILES or None)
if form.is_valid():
instance = form.save(commit=False)
date_of_birth = instance.date_of_birth
age_days = (datetime.now().date() - date_of_birth)
age = (age_days/365)
rem_yrs = (60 - int(age))
instance.date_of_retirement = rem_yrs
instance.save()
messages.success(request, 'Successfully Saved')
return redirect('/profile/profile_detail')
context = {
"title": title,
"form": form,
}
return render(request, "profile_entry.html",context)
```
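The subtraction `datetime.now().date() - date_of_birth` mixes a `datetime.date` with the `datetime.datetime` that a `DateTimeField` yields, which raises this exact `TypeError`. A minimal self-contained sketch of the fix (a hypothetical helper, not the actual view code) normalizes both sides to `date` and does integer day arithmetic:

```python
from datetime import date, datetime, timedelta

def years_to_retirement(date_of_birth, retirement_age=60):
    # A DateTimeField yields datetime objects; normalize to a plain date
    if isinstance(date_of_birth, datetime):
        date_of_birth = date_of_birth.date()
    age_days = (date.today() - date_of_birth).days   # timedelta -> int
    age = age_days // 365
    return retirement_age - age

# Someone born exactly 25 * 365 days ago is 25, so 35 years remain:
dob = datetime.combine(date.today() - timedelta(days=25 * 365),
                       datetime.min.time())
print(years_to_retirement(dob))  # 35
```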
models.py
```
address = models.CharField(max_length=30, blank=True, null=True)
date_of_birth = models.DateTimeField(blank=True, null=True)
email_address = models.CharField(max_length=30, blank=True, null=True)
phone_number = models.CharField(max_length=30, blank=True, null=True)
date_of_hire = models.DateTimeField(blank=True, null=True)
date_of_termination = models.DateField(blank=True, null=True)
date_of_retirement = models.CharField(max_length=30, blank=True, null=True)
``` | 2020/02/07 | [
"https://Stackoverflow.com/questions/60119580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12860292/"
] | Here's a full working example of what you want to achieve:
```
import pandas as pd
import matplotlib.pyplot as plt
import pandas as pd
df_1 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
df_2 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
df_3 = pd.DataFrame({'2010':[10,11,12,13],'2011':[14,18,14,15],'2012':[12,13,14,13]})
list_df = [df_1,df_2,df_3]
for i in range(len(list_df)):
ax = plt.subplot(len(list_df[i]),len(list(list_df[i])),i+1)
for j in list_df[i]:
ax.plot(list_df[i].index,list_df[i][j])
```
### Edit:
Given your answer which seems to fully address the issue, I would change a few lines to make it a bit more efficient. Below the code, I will upload some information regarding performance:
`for i in range(df1.shape[1]):` can become `for i in range(len(df.columns)):`
```
leg_text = df1.columns.tolist() #What's the use of this line?
```
Speed comparison betweens `shape`, `len` and some other methods:
```
ns = np.power(10, np.arange(5))
results = pd.DataFrame(
columns=ns,
index=pd.MultiIndex.from_product(
[['len', 'len(tolist)', 'len(values)', 'shape'],
ns]))
dfs = {(n, m): pd.DataFrame(np.zeros((n, m))) for n in ns for m in ns}
for n, m in dfs.keys():
df = dfs[(n, m)]
results.loc[('len',n),m] = timeit('len(df.columns)', 'from __main__ import df', number=10000)
results.loc[('len(tolist)', n), m] = timeit('len(df.columns.tolist())', 'from __main__ import df', number=10000)
results.loc[('len(values)', n), m] = timeit('len(df.columns.values)', 'from __main__ import df', number=10000)
results.loc[('shape', n), m] = timeit('df.values.shape[1]', 'from __main__ import df', number=10000)
fig, axes = plt.subplots(2, 3, figsize=(9, 6), sharex=True, sharey=True)
for i, (m, col) in enumerate(results.iteritems()):
r, c = i // 3, i % 3
col.unstack(0).plot.bar(ax=axes[r, c], title=m)
```
Output:
[](https://i.stack.imgur.com/mWlvN.png)
```
1 10 100 1000 10000
len 1 0.0038 0.0046 0.0032 0.0037 0.0035
len 10 0.0032 0.0032 0.0032 0.0034 0.0035
len 100 0.0032 0.0052 0.0052 0.0053 0.0035
len 1000 0.0037 0.0036 0.0041 0.0039 0.0043
len 10000 0.0040 0.0038 0.0045 0.0043 0.0123
len(tolist) 1 0.0051 0.0075 0.0175 0.1629 1.6579
len(tolist) 10 0.0051 0.0059 0.0175 0.1588 1.9253
len(tolist) 100 0.0049 0.0097 0.0196 0.1635 1.7422
len(tolist) 1000 0.0053 0.0065 0.0198 0.1831 1.9897
len(tolist) 10000 0.0057 0.0069 0.0218 0.1995 2.2426
len(values) 1 0.0083 0.0097 0.0073 0.0074 0.0074
len(values) 10 0.0073 0.0072 0.0073 0.0107 0.0087
len(values) 100 0.0075 0.0094 0.0109 0.0072 0.0081
len(values) 1000 0.0081 0.0082 0.0081 0.0085 0.0088
len(values) 10000 0.0087 0.0084 0.0103 0.0101 0.0327
shape 1 0.1108 0.0838 0.0789 0.0779 0.0780
shape 10 0.0764 0.0770 0.0771 0.1118 0.0806
shape 100 0.0952 0.0826 0.1013 0.0800 0.0889
shape 1000 0.0881 0.0863 0.0867 0.0938 0.1063
shape 10000 0.0905 0.0999 0.1043 0.1013 0.2384
``` | I figured out the solution for this, hope it will be helpful to others. Since the input is list of dataframe, it is easier to do as follow:
```
import matplotlib.pyplot as plt
from matplotlib.pyplot import cm
from itertools import cycle
df1, df2 = list_of_df[0], list_of_df[1]
colors=cm.tab10(np.linspace(0, 1,len(df1.columns)))
lines = ["-","--","-.",":"]
linecycler = cycle(lines)
leg_text = df1.columns.tolist()
marker = cycle(('+', 'o', '*', 'v','^','<','>'))
fig,(ax1,ax2) = plt.subplots(nrows=2,ncols=1,sharex=True,squeeze=True,figsize=(10,8))
for i in range(df1.shape[1]):
ax1.plot(df1.index, df1.columns[i], next(linecycler), marker = next(marker), data=df1, color=colors[i], linewidth=3)
ax2.plot(df2.index, df2.columns[i], next(linecycler), marker = next(marker),data=df2, color=colors[i], linewidth=3)
plt.tight_layout(rect=[0, 0, 0.85, 1])
plt.gcf().autofmt_xdate()
plt.style.use('ggplot')
plt.xticks(rotation=0)
plt.show()
```
I get my expected output. There may be an even more efficient way to write this, so if anyone has a better idea, please let me know. Thanks | 15,972 |
59,207,859 | I've found related question, mine about GitLab, those about GitHub: [How to remove extra line space on GitHub markdown bullets/lists?](https://stackoverflow.com/questions/45113083/how-to-remove-extra-line-space-on-github-markdown-bullets-lists) and one answer is to use 4 spaces and another:
>
> You cannot do this ... This is due to
> GitHub's styling of the markdown, not an inherent issue with the
> markdown itself.
>
>
>
I've made a list and in GitLab GUI I see it like that:
```
o ABC
o ABC
o ABC
o ABC
o ABC
```
Extra empty line before secondary list but no line after. I prefer to add space after secondary too, however deleting all empty lines in list would look good too. Below is my markdown, it has 4 spaces for secondary list and I've tried to add `<br/>` as per [Create two blank lines in Markdown](https://stackoverflow.com/questions/20543454/create-two-blank-lines-in-markdown) but it breaks list structure after that (which also results in disappearing empty lines between primary list entries btw).
```
- datapool
- specific
- jmx
- config
<br/> (that is temp try, see above text)
- python
```
Could it be done in GitLab? | 2019/12/06 | [
"https://Stackoverflow.com/questions/59207859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5499118/"
] | I found that alternating between the `+` and `-` unordered list indicators produces the desired result.
```
- Item 1
- SubItem 1
- SubItem 2
+ Item 2
+ SubItem 1
+ SubItem 2
- Item 3
- SubItem 1
- SubItem 2
``` | Small mistake: should have put `<br/>` on same level as secondary list (indent with 4 spaces) - then extra line is seen in GUI. | 15,975 |
8,673,035 | I know feature hashing (the hashing trick) is used to reduce the dimensionality and handle the sparsity of bit vectors, but I don't understand how it really works. Can anyone explain this to me? Is there any Python library available to do feature hashing?
Thank you. | 2011/12/29 | [
"https://Stackoverflow.com/questions/8673035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/585329/"
] | On Pandas, you could use something like this:
```
import pandas as pd
import numpy as np
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
data = pd.DataFrame(data)
def hash_col(df, col, N):
cols = [col + "_" + str(i) for i in range(N)]
def xform(x): tmp = [0 for i in range(N)]; tmp[hash(x) % N] = 1; return pd.Series(tmp,index=cols)
df[cols] = df[col].apply(xform)
return df.drop(col,axis=1)
print hash_col(data, 'state',4)
```
The output would be
```
pop year state_0 state_1 state_2 state_3
0 1.5 2000 0 1 0 0
1 1.7 2001 0 1 0 0
2 3.6 2002 0 1 0 0
3 2.4 2001 0 0 0 1
4 2.9 2002 0 0 0 1
```
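(Side note: the `xform` helper above is the hashing trick in miniature. A hedged, pure-Python sketch of the same idea, independent of Pandas — the function name and bucket count are illustrative:)

```python
def hash_features(tokens, n_buckets):
    """Map a variable-length list of tokens to a fixed-length count vector."""
    vec = [0] * n_buckets
    for tok in tokens:
        vec[hash(tok) % n_buckets] += 1  # collisions simply share a bucket
    return vec

v = hash_features(["ohio", "ohio", "nevada"], 8)
```

Whatever the vocabulary size, the output dimensionality stays fixed at `n_buckets`, which is what makes the trick useful for sparse, high-dimensional data.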
Also on Series level, you could:
```
import numpy as np, os
import sys, pandas as pd
def hash_col(df, col, N):
df = df.replace('',np.nan)
cols = [col + "_" + str(i) for i in range(N)]
tmp = [0 for i in range(N)]
tmp[hash(df.ix[col]) % N] = 1
res = df.append(pd.Series(tmp,index=cols))
return res.drop(col)
a = pd.Series(['new york',30,''],index=['city','age','test'])
b = pd.Series(['boston',30,''],index=['city','age','test'])
print hash_col(a,'city',10)
print hash_col(b,'city',10)
```
This will work per single Series, column name will be assumed to be a Pandas index. It also replaces blank strings with nan, and floats everything.
```
age 30
test NaN
city_0 0
city_1 0
city_2 0
city_3 0
city_4 0
city_5 0
city_6 0
city_7 1
city_8 0
city_9 0
dtype: object
age 30
test NaN
city_0 0
city_1 0
city_2 0
city_3 0
city_4 0
city_5 1
city_6 0
city_7 0
city_8 0
city_9 0
dtype: object
```
If, however, there is a vocabulary, and you simply want to one-hot-encode, you could use
```
import numpy as np
import pandas as pd, os
import scipy.sparse as sps
def hash_col(df, col, vocab):
cols = [col + "=" + str(v) for v in vocab]
def xform(x): tmp = [0 for i in range(len(vocab))]; tmp[vocab.index(x)] = 1; return pd.Series(tmp,index=cols)
df[cols] = df[col].apply(xform)
return df.drop(col,axis=1)
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
df = pd.DataFrame(data)
df2 = hash_col(df, 'state', ['Ohio','Nevada'])
print sps.csr_matrix(df2)
```
which will give
```
pop year state=Ohio state=Nevada
0 1.5 2000 1 0
1 1.7 2001 1 0
2 3.6 2002 1 0
3 2.4 2001 0 1
4 2.9 2002 0 1
```
I also added sparsification of the final dataframe as well. In an incremental setting where we might not have encountered all values beforehand (but we somehow obtained the list of all possible values), the approach above can be used. Incremental ML methods would need the same number of features at each increment, hence one-hot encoding must produce the same number of columns at each batch. | [Here](http://metaoptimize.com/qa/questions/6943/what-is-the-hashing-trick#6945) (sorry I cannot add this as a comment for some reason.) Also, the first page of [Feature Hashing for Large Scale Multitask Learning](http://arxiv.org/pdf/0902.2206) explains it nicely. | 15,976
11,511,080 | I am a beginner at Python (one week). Here I am trying to print the list of all the prime factors of 60. But for line 19, I am getting the following error:
*TypeError: unsupported operand type(s) for %: 'float' and 'list'*
The code:
```
whylist = []
factor = []
boom = []
primefactor = []
n = 60
j = (list(range(1, n, 1)))
for numbers in j:
if n%numbers == 0:
whylist.append(numbers)
for everynumber in whylist:
factor.append(everynumber)
for things in factor:
u = (list(range(1, things, 1)))
d = float(things)
if d%u == 0:
boom.append(things)
if len(boom) == 1:
for every in boom:
primefactor.append(every)
print(primefactor)
```
What am I doing wrong? | 2012/07/16 | [
"https://Stackoverflow.com/questions/11511080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1526409/"
] | To apply a math operation to every element in a list you can use a list-comprehension:
```
new_list = [ x%num for x in old_list]
```
There are other ways to do it as well. Sometimes people will use `map`
```
new_list = map(lambda x: x%num, old_list)
```
but most people prefer the first form which is generally more efficient and clear than using `lambda` (which can be a little confusing when you're just starting to learn python).
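For instance, both forms produce the same list (a quick sanity check with made-up numbers):

```python
old_list = [10, 21, 33]
num = 7

comp = [x % num for x in old_list]               # list comprehension
mapped = list(map(lambda x: x % num, old_list))  # map + lambda

print(comp)  # [3, 0, 5]
```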
**EDIT**
Here's a recursive implementation of what you were trying:
```
def factorize(n):
out=[]
for i in range(2,n):
if(n%i == 0): #first thing to hit this is always prime
out.append(i) #add it to the list
out+=factorize(n/i) #get the list of primes from the other factor and append to this list.
return out
else:
return [n] # n%i was never 0, must be prime.
print factorize(2000)
``` | Another option is to use numpy arrays instead of lists.
```
import numpy as np
j = np.arange(1,n,1)
rem = np.mod(j,num)
```
and numpy will take care of broadcasting operations for you. It should also be faster than list comprehensions or map. | 15,979 |
55,235,230 | I get this warning most of the time when i define a model using Keras. It seems to somehow come from tensorflow though:
```
WARNING:tensorflow:From C:\Users\lenik\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
```
Is this warning something to worry about?
If yes, how do I solve this problem? | 2019/03/19 | [
"https://Stackoverflow.com/questions/55235230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8104036/"
] | This depreciation warning is due to the Dropout layer in `tf.keras.layers.Dropout`.
To avoid this warning, you need to clearly specify `rate=` in Dropout as: `Dropout(rate=0.2)`.
Earlier it was `keep_prob`; it has now been deprecated in favor of `rate`, i.e. rate = 1 - keep\_prob.
For more, you can check out this tensorflow [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout). | Tensorflow is telling you that the argument `keep_prob` is deprecated and that it has been replaced by the argument `rate`.
Now, to achieve the same behavior you have now and remove the warning, you need to replace every occurrence of the `keep_prob` argument with `rate` argument, and pass the value `1-keep_prob`. | 15,980 |
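As a tiny arithmetic sketch of that replacement (the `Dropout(...)` call in the comment is illustrative, not executed here):

```python
keep_prob = 0.8        # old-style argument: fraction of units to KEEP
rate = 1 - keep_prob   # new-style argument: fraction of units to DROP

# so a layer written with keep_prob = 0.8 semantics becomes, e.g.:
# tf.keras.layers.Dropout(rate=0.2)
```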
4,341,206 | When trying to authenticate via OAuth in Django Piston, the following exception is thrown:
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/oauth/request_token/?oauth_nonce=32921052&oauth_timestamp=1291331173&oauth_consumer_key=ghof7av2vu8hal2hek&oauth_signature_method=HMAC-SHA1&oauth_version=1.0&oauth_signature=
Python Version:
Traceback:
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
100. response = callback(request, *callback_args, **callback_kwargs)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/authentication.py" in oauth_request_token
130. token = oauth_server.fetch_request_token(oauth_request)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/oauth.py" in fetch_request_token
302. self._check_signature(oauth_request, consumer, None)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/oauth.py" in _check_signature
393. valid_sig = signature_method.check_signature(oauth_request, consumer, token, signature)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/oauth.py" in check_signature
482. built = self.build_signature(oauth_request, consumer, token)
File "/Users/derek/.virtualenvs/optimal-rest/lib/python2.7/site-packages/piston/oauth.py" in build_signature
513. hashed = hmac.new(key, raw, sha)
File "/usr/local/Cellar/python/2.7/lib/python2.7/hmac.py" in new
133. return HMAC(key, msg, digestmod)
File "/usr/local/Cellar/python/2.7/lib/python2.7/hmac.py" in __init__
72. self.outer.update(key.translate(trans_5C))
Exception Type: TypeError at /api/oauth/request_token/?oauth_nonce=32921052&oauth_timestamp=1291331173&oauth_consumer_key=ghof7av2vu8hal2hek&oauth_signature_method=HMAC-SHA1&oauth_version=1.0&oauth_signature=
Exception Value: character mapping must return integer, None or unicode
```
Can't tell if it's a bug in Piston or if I can't use the oauth2 lib.
Consumer Code:
```
import os
import cgi
import oauth2 as oauth
# settings for the local test consumer
CONSUMER_SERVER = os.environ.get("CONSUMER_SERVER") or 'localhost'
CONSUMER_PORT = os.environ.get("CONSUMER_PORT") or '8000'
print CONSUMER_SERVER , CONSUMER_PORT
# fake urls for the test server (matches ones in server.py)
REQUEST_TOKEN_URL = 'http://%s:%s/api/oauth/request_token/' % (CONSUMER_SERVER, CONSUMER_PORT)
ACCESS_TOKEN_URL = 'http://%s:%s/api/oauth/access_token/' % (CONSUMER_SERVER, CONSUMER_PORT)
AUTHORIZE_URL = 'http://%s:%s/api/oauth/authorize/' % (CONSUMER_SERVER, CONSUMER_PORT)
# key and secret granted by the service provider for this consumer application - same as the MockOAuthDataStore
CONSUMER_KEY = 'ghof7av2vu8hal2hek'
CONSUMER_SECRET = 'ohhey'
consumer = oauth.Consumer(CONSUMER_KEY, CONSUMER_SECRET)
client = oauth.Client(consumer)
# Step 1: Get a request token. This is a temporary token that is used for
# having the user authorize an access token and to sign the request to obtain
# said access token.
resp, content = client.request(REQUEST_TOKEN_URL, "GET")
if resp['status'] != '200':
raise Exception("Invalid response %s." % resp['status'])
```
Referencing <https://github.com/clemesha/django-piston-oauth-example> for the consumer code. | 2010/12/02 | [
"https://Stackoverflow.com/questions/4341206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/186101/"
] | This is a piston problem that comes from an encoding problem of the key/secret of the consumer.
The solution is to force the encoding of the key/secret returned from the database to ASCII.
In the `store.py` file of Piston, modify the `lookup_consumer` so it look like this:
```
def lookup_consumer(self, key):
try:
self.consumer = Consumer.objects.get(key=key)
self.consumer.key = self.consumer.key.encode('ascii')
self.consumer.secret = self.consumer.secret.encode('ascii')
return self.consumer
except Consumer.DoesNotExist:
return None
```
[Here](https://github.com/nbarraille/django-piston-nbarraille) is my fork of django-piston fixing this problem. | This problem also occurs inside Piston's "oauth.py" module's "build\_signature()" method if a unicode key value is passed in. I discovered this issue while using the clemesha/django-piston-oauth-example client code mentioned above because it kept failing after the prompt for the "PIN Code".
The underlying problem is documented as issue #169 here:
<https://bitbucket.org/jespern/django-piston/issue/169/oauth-request-token-error-with-hmac>
The re-encoding solution described above is also applicable in this case:
```
def build_signature(self, oauth_request, consumer, token):
"""Builds the base signature string."""
key, raw = self.build_signature_base_string(oauth_request, consumer,
token)
#BUG: character mapping must return integer, None or unicode
#FIX:
key = key.encode('ascii')
# HMAC object.
try:
import hashlib # 2.5
hashed = hmac.new(key, raw, hashlib.sha1)
except:
import sha # Deprecated
hashed = hmac.new(key, raw, sha)
# Calculate the digest base 64.
return binascii.b2a_base64(hashed.digest())[:-1]
``` | 15,981 |
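(The gist of both fixes — hand hmac an ASCII byte key instead of a unicode object — can be sketched independently of Piston; the key and base string below are illustrative, Python 3 shown:)

```python
import hmac
import hashlib
import binascii

key = u'ghof7av2vu8hal2hek&'                # consumer key/secret as unicode text
raw = b'GET&http%3A%2F%2Flocalhost%3A8000'  # illustrative signature base string

# Encoding to ASCII first is what the store.py / build_signature fixes do:
hashed = hmac.new(key.encode('ascii'), raw, hashlib.sha1)
signature = binascii.b2a_base64(hashed.digest())[:-1]  # strip trailing newline
```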
62,555,213 | I have two dicts, one of them in a list:
```
var_a = [{'name':"John",'number':21},{'name':"Kevin",'number':23}]
var_b = {'21':"yes"},{'24':"yes"}
```
I need to compare var\_a and var\_b, matching the keys from var\_b against the number values in var\_a.
I have tried this and got the output:
```
for key, value in var_b.iteritems():
for l in var_a:
if l['number'] == key:
print l
```
This needs two loops to get the output. So is there any way to finish this in a single loop with Python 2.7? | 2020/06/24 | [
"https://Stackoverflow.com/questions/62555213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8741562/"
] | You can use `map` to create a keys set from `var_b` keys and then loop only over `var_a` to check if the number value exists in the `var_b` keys set
```
var_a = [{'name':"John",'number':21},{'name':"Kevin",'number':23}]
var_b = [{'21':"yes"},{'23':"no"}]
keys_set = set(map(lambda x: int(list(x.keys())[0]), var_b))
for i in var_a:
if i['number'] in keys_set:
print(i)
```
Output
```
{'name': 'John', 'number': 21}
{'name': 'Kevin', 'number': 23}
``` | I think you need to use the lambda function with one for-loop:
```
for key, value in var_b.iteritems():
    result = filter(lambda d: d['number'] == int(key), var_a)
```
`result` will then contain the matching entries. | 15,982
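(With the question's data this looks as follows — note the var\_b keys are strings while `number` is an int, so a cast is needed; Python 3 shown:)

```python
var_a = [{'name': "John", 'number': 21}, {'name': "Kevin", 'number': 23}]
var_b = [{'21': "yes"}, {'24': "yes"}]

result = []
for d in var_b:
    for key in d:
        # default arg pins the current key for the lambda
        result.extend(filter(lambda p, k=key: p['number'] == int(k), var_a))

print(result)  # [{'name': 'John', 'number': 21}]
```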
35,823,709 | I have read the article "Ubuntu Installation --Guide for Ubuntu 14.04 with a 64 bit processor." from Github website (<https://github.com/tiangolo/caffe/blob/ubuntu-tutorial-b/docs/install_apt2.md>).
And now, I open IPython to test that PyCaffe is working. I enter the "ipython" command to get to the IPython prompt.
Then, I enter the command "import caffe", but I get the warnings below:
>
> /root/code/caffe-master/python/caffe/pycaffe.py:13:RuntimeWarning: to-Python converter for boost::shared\_ptr > already registered; second conversion method ignored. from .\_caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
>
>
> /root/code/caffe-master/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared\_ptr > already registered; second conversion method ignored. from .\_caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
>
>
> /root/code/caffe-master/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared\_ptr > already registered; second conversion method ignored. from .\_caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
>
>
>
Would you please help to give me some suggestions to resolve it? | 2016/03/06 | [
"https://Stackoverflow.com/questions/35823709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4159177/"
] | I found this:
<https://groups.google.com/forum/#!topic/caffe-users/C_air48cISU>
Claiming that this is a non-error, caused by mismatched versions of Boost. You can safely ignore it. They've promised to clean up the warning (at some point not yet specified). | You can edit /caffe/python/caffe/\_caffe.cpp. There are four places that need to change, like this:
```
bp::register_ptr_to_python<shared_ptr<Layer<Dtype> > >();
```
to
```
const boost::python::type_info cinfo = boost::python::type_id<shared_ptr<Layer<Dtype> > >();
const boost::python::converter::registration* creg = boost::python::converter::registry::query(cinfo);
if (creg == NULL)
{
bp::register_ptr_to_python<shared_ptr<Layer<Dtype> > >();
}
else if ((*creg).m_to_python == NULL) {
bp::register_ptr_to_python<shared_ptr<Layer<Dtype> > >();
}
```
then compile with 'make pycaffe', and it will be OK. | 15,983
49,963,862 | I have a dictionary that has tuple keys and numpy array values. I tried saving it using h5 and pickle but I get error messages. What is the best way to save this object to a file?
```
import numpy as np
from collections import defaultdict
Q =defaultdict(lambda: np.zeros(2))
Q[(1,2,False)] = np.array([1,2])
Q[(1,3,True)] = np.array([3,4])
>>> Q
defaultdict(<function <lambda> at 0x10c51ce18>, {(1, 2, False): array([1, 2]), (1, 3, True): array([3, 4])})
```
np.save traceback:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-99-a071e1561501> in <module>()
----> 1 np.save('Q.npy', Q)
~/anaconda3_420/lib/python3.5/site-packages/numpy/lib/npyio.py in save(file, arr, allow_pickle, fix_imports)
509 arr = np.asanyarray(arr)
510 format.write_array(fid, arr, allow_pickle=allow_pickle,
--> 511 pickle_kwargs=pickle_kwargs)
512 finally:
513 if own_fid:
~/anaconda3_420/lib/python3.5/site-packages/numpy/lib/format.py in write_array(fp, array, version, allow_pickle, pickle_kwargs)
584 if pickle_kwargs is None:
585 pickle_kwargs = {}
--> 586 pickle.dump(array, fp, protocol=2, **pickle_kwargs)
587 elif array.flags.f_contiguous and not array.flags.c_contiguous:
588 if isfileobj(fp):
AttributeError: Can't pickle local object 'mc_control_epsilon_greedy.<locals>.<lambda>'
``` | 2018/04/22 | [
"https://Stackoverflow.com/questions/49963862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7091646/"
] | How about saving it as a plain dictionary? You don't need the `defaultdict` behavior during saving.
```
In [126]: from collections import defaultdict
In [127]: Q =defaultdict(lambda: np.zeros(2))
...: Q[(1,2,False)] = np.array([1,2])
...: Q[(1,3,True)] = np.array([3,4])
...: Q[(3,4,False)]
...:
Out[127]: array([0., 0.])
In [128]: Q
Out[128]:
defaultdict(<function __main__.<lambda>>,
{(1, 2, False): array([1, 2]),
(1, 3, True): array([3, 4]),
(3, 4, False): array([0., 0.])})
```
We can pull it out of the `defaultdict` wrapping with:
```
In [130]: dict(Q)
Out[130]:
{(1, 2, False): array([1, 2]),
(1, 3, True): array([3, 4]),
(3, 4, False): array([0., 0.])}
```
Then we can pickle it (I'm using `np.save` as a pickle shortcut)
```
In [131]: np.save('stack49963862', np.array(dict(Q)))
```
`load` gives an object array containing this dictionary:
```
In [132]: P = np.load('stack49963862.npy')
In [133]: P
Out[133]:
array({(1, 2, False): array([1, 2]), (1, 3, True): array([3, 4]), (3, 4, False): array([0., 0.])},
dtype=object)
In [138]: P.item()
Out[138]:
{(1, 2, False): array([1, 2]),
(1, 3, True): array([3, 4]),
(3, 4, False): array([0., 0.])}
```
We can easily recreate the defaultdict with an update:
```
In [134]: Q1 =defaultdict(lambda: np.zeros(2))
In [139]: Q1.update(P.item())
In [140]: Q1
Out[140]:
defaultdict(<function __main__.<lambda>>,
{(1, 2, False): array([1, 2]),
(1, 3, True): array([3, 4]),
(3, 4, False): array([0., 0.])})
``` | I don't see any problems using `pickle`
```
import pickle
import numpy as np
x = {(1,2,False): np.array([1,4]), (1,3,False): np.array([4,5])}
with open('filename.pickle', 'wb') as handle:
pickle.dump(x, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('filename.pickle', 'rb') as handle:
y = pickle.load(handle)
print x
print y
```
---
**After the edit:**
What you actually have is a `lambda`, and that cannot be pickled by default. You need to install `dill` and import it for it to work (see [this answer](https://stackoverflow.com/a/25353243/3512538))
This is how it should look:
```
import pickle
import numpy as np
from collections import defaultdict
import dill # doesn't come with default anaconda. Install with "conda install dill"
x = defaultdict(lambda: np.zeros(2))
with open('filename.pickle', 'wb') as handle:
pickle.dump(x, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('filename.pickle', 'rb') as handle:
y = pickle.load(handle)
print x
print y
```
**Output:**
```
# no errors :-)
defaultdict(<function <lambda> at 0x000000000CD0C898>, {})
defaultdict(<function <lambda> at 0x0000000002614C88>, {})
```
---
***OP's solution:***
your edited solution still generated the same error for me but this works fine:
```
import pickle
import dill
dill_file = open("Q.pickle", "wb")
dill_file.write(dill.dumps(Q))
dill_file.close()
```
*On my machine (Win 8.1 64 bit, using Spyder), I had no errors when using simple `dill`.* | 15,984 |
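(The root cause in the original traceback — `Can't pickle local object ... <lambda>` — can be reproduced in a few lines, no dill needed:)

```python
import pickle

# A lambda has no importable qualified name, so plain pickle cannot
# serialize it by reference -- this is exactly what the traceback hits:
try:
    pickle.dumps(lambda: [0.0, 0.0])
    pickled_ok = True
except Exception:  # exact exception type varies across Python versions
    pickled_ok = False
```

Wrapping with dill (or converting to a plain dict before dumping, as in the other answer) sidesteps this limitation.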
55,647,936 | I am porting the application from python 2 to python 3 and encountered the following problem: `random.randint` returns different results depending on the Python version used. So
```
import random
random.seed(1)
result = random.randint(1, 100)
```
On Python 2.x result will be 14 and on Python 3.x: 18
Unfortunately, I need to have the same output on python3 to have backward compatibility of service.
For now, my only working idea is to use the `subprocess` module from Python 3.x to execute Python 2.x code:
```
result = subprocess.check_output(
'''python2 -c "import random; random.seed('%s'); print(random.randint(1, 100))"''' % seed,
shell=True
)
```
But such an approach is approximately 1000 times slower than just executing `random.randint(1, 100)`.
Maybe there are other approaches to do this? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55647936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542977/"
] | The difference is caused by two things:
1. You should use `random.seed(42, version=1)`
2. In python 3.2 there was a change to `random.randrange`, which is called by `random.randint` and probably add to above [issue](https://docs.python.org/3/library/random.html#random.randrange).
So use something like:
```
try: random.seed(42, version=1) # Python 3
except TypeError: random.seed(42) # Python 2
```
and `int(1+random.random()*99)`.
More detail
-----------
Backward compatibility was dropped on purpose with the change to `randrange`; see the original [issue](https://bugs.python.org/issue9025).
See this [reddit](https://www.reddit.com/r/pythontips/comments/5pp5mz/the_random_module_with_the_same_seed_produces_a/) post.
If possible, use `numpy.random` as proposed in the reddit post.
Use of `random.seed(42, version=1)` as described in the [documentation](https://docs.python.org/3/library/random.html#bookkeeping-functions) will cause `random.random()` to deliver the same result but give a different result for `random.randint(1,100)` (because in python 3.2 some problem with the old implementation was fixed). You may opt to only rely on something like `int(1+random.random()*99)`.
(Python 2 will run out of support very [soon](https://www.python.org/dev/peps/pep-0373/), [soon2](https://github.com/python/devguide/pull/344) or [here](https://pythonclock.org). If possible, check whether backward compatibility is really needed.)
My current tests:
```
import random
try: random.seed(42, version=1) # Python 3
except TypeError: random.seed(42) # Python 2
print(random.random())
print(int(1+99*random.random()))
print(random.randint(1,99))
```
Results on Python 2
```
0.639426798458
3
28
```
and Python 3
```
0.6394267984578837
3
36
``` | You can specify which version to use for the seed: `random.seed(1, version=1)`. However, as stated by Sparky05, you are probably better off using `numpy.random` instead. | 15,985 |
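(Putting the two pieces together — seed with `version=1` and derive the integer from `random()` yourself — a hedged helper sketch; the function name is made up:)

```python
import random

def stable_randint(seed, low, high):
    # version=1 keeps random() on the same stream Python 2 used for str/int seeds
    random.seed(seed, version=1)
    return int(low + random.random() * (high - low))

a = stable_randint(1, 1, 100)
b = stable_randint(1, 1, 100)
```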
30,772,068 | I have the following string object (it's JSON) in Java (it's pretty-printed so it is legible):
```
{
name: John,
age: {
years:18
},
computer_skills: {
years:4
},
mile_runner: {
years:2
}
}
```
I have an array with 100 people with the same structure.
What is the best way to go through all 100 people and make it such that there is no more "years"? In other words, each object in the 100 would look something like:
```
{
name: John,
age:18,
computer_skills:4,
mile_runner:2
}
```
In python, I can do something trivial like this in pseudocode:
```
# Iterates through each person
for person in persons:
    # Iterates through each key/value pair k, v
    for k, v in person.items():
        # Check if the value is an object {} with a nested 'years' key
        if isinstance(v, dict) and 'years' in v:
            # If so, set the key k to the value of the nested 'years' field
            person[k] = v['years']
```
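(For what it's worth, that pseudocode runs nearly as-is in Python — a self-contained check of the flattening on the sample object:)

```python
import json

persons = [{"name": "John",
            "age": {"years": 18},
            "computer_skills": {"years": 4},
            "mile_runner": {"years": 2}}]

for person in persons:
    for k, v in list(person.items()):  # list() so we can reassign while iterating
        if isinstance(v, dict) and "years" in v:
            person[k] = v["years"]

flat_json = json.dumps(persons)  # string form of the flattened structure
print(persons[0]["age"])  # 18
```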
In the end, I want to be able to write out that "persons" object back out into a string form (of json).
Is there a simple yet efficient way to do this in Java? I would prefer not to create a model object that represents every single field within my object if possible, as I like how I don't have to do that in Python (unless I have to because it's Java).
I like how you can iterate, check if an object has a nested field 'years', if it does, then uplevel it! It looks straightforward. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30772068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | It is "nil coalescing operator" (also called "default operator"). `a ?? b` is value of `a` (i.e. `a!`), unless `a` is `nil`, in which case it yields `b`. I.e. if `favouriteSnacks[person]` is missing, return assign `"Candy Bar"` in its stead. | This:
```
let snackName = favoriteSnacks[person] ?? "Candy Bar"
```
Is equal to this:
```
if favoriteSnacks[person] != nil {
let snackName = favoriteSnacks[person]
} else {
let snackName = "Candy Bar"
}
```
Explaining in words: if the `let` statement fails to grab `person` from `favoriteSnacks`, it will assign "Candy Bar" to `snackName`. | 15,989
54,064,946 | I am working in jupyter with python in order to clean a set of data that I have retrieved from an analysis software and I would like to have an equal number of samples that pass and fail. Basically my dataframe in pandas looks like this:
```
grade section area_steel Nx Myy utilisation Accceptable
0 C16/20 STD R 700 350 4534 -310000 240000 0.313 0
1 C90/105 STD R 400 600 4248 -490000 270000 0.618 0
3 C35/45 STD R 550 400 1282 580000 810000 7.049 1
4 C12/15 STD R 350 750 2386 960000 610000 5.180 1
```
However the results which are not acceptable (1) are double the results which are acceptable.
no\_pass = 8589 no\_fail = 16999
ratio = 1.979159389917336
I would like a new dataframe with the same ratio of pass and fails
I tried the following but it doesn't seem to work:
```
import random
new_data = data[data.Accceptable <= random.random()*1/ratio]
```
It would seem that only one random value is computed and all the rows checked against that, while I would like one value per row.
Any suggestion? | 2019/01/06 | [
"https://Stackoverflow.com/questions/54064946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10876004/"
] | Using formatting string and assuming that `optimal_system` is your dictionary:
```
with open('output.txt', 'w') as f:
for k in optimal_system.keys():
f.write("{}: {}\n".format(k, optimal_system[k]))
```
**EDIT**
As pointed by @wwii, the code above can be also written as:
```
with open('output.txt', 'w') as f:
for k, v in optimal_system.items():
f.write("{}: {}\n".format(k, v))
```
And the string can be formatted using [formatted string literals](https://docs.python.org/3.7/reference/lexical_analysis.html#f-strings), available since python 3.6, hence `f'{k}: {v}\n'` instead of `"{}: {}\n".format(k, v)`. | You can use json.dumps() to do this with the indent parameter. For example:
```
import json
dictionary_variable = {'employee_01': {'fname': 'John', 'lname': 'Doe'},
'employee_02': {'fname': 'Jane', 'lname': 'Doe'}}
with open('output.txt', 'w') as f:
f.write(json.dumps(dictionary_variable, indent=4))
``` | 15,998 |
9,164,176 | >
> **Possible Duplicate:**
>
> [Good Primer for Python Slice Notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation)
>
>
>
I have a string and I'm splitting it on the `;` character. I would like to associate the parts with variables, but only the first x strings are useful to me; the rest are redundant.
I wanted to use the code below, but if there are more than 4 separators this raises an exception. Is there any simple way?
```
az1, el1, az2, el2, rfsspe = data_point.split(";")
``` | 2012/02/06 | [
"https://Stackoverflow.com/questions/9164176",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/737640/"
] | Yes! Use [slicing](https://stackoverflow.com/q/509211/21475):
```
az1, el1, az2, el2, rfsspe = data_point.split(";")[:5]
```
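For example, with a made-up string that has extra fields:

```python
data_point = "10;20;30;40;50;extra;junk"
az1, el1, az2, el2, rfsspe = data_point.split(";")[:5]
print(az1, rfsspe)  # 10 50
```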
That "slices" the list to get the first 5 elements only. | The way I usually do this is to add all the variables to a list (var_list), and then when I'm processing the list I do something like
```
for x in var_list[:5]:
print x #or do something
``` | 16,001 |
58,414,350 | Is there a way for Airflow to skip current task from the PythonOperator? For example:
```py
def execute():
if condition:
skip_current_task()
task = PythonOperator(task_id='task', python_callable=execute, dag=some_dag)
```
And also marking the task as "Skipped" in Airflow UI? | 2019/10/16 | [
"https://Stackoverflow.com/questions/58414350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7730549/"
] | Figured it out! Skipping task is as easy as:
```py
from airflow.exceptions import AirflowSkipException

def execute():
    if condition:
        raise AirflowSkipException

task = PythonOperator(task_id='task', python_callable=execute, dag=some_dag)
``` | The easiest solution to skip a task:
```py
def execute():
if condition:
return
task = PythonOperator(task_id='task', python_callable=execute, dag=some_dag)
```
Unfortunately, it will mark the task as `DONE` (successful), not as skipped. | 16,002
49,145,328 | I am new to using google colaboratory (colab) and pydrive along with it. I am trying to load data in 'CAS\_num\_strings' which was written in a pickle file in a specific directory on my google drive using colab as:
```
pickle.dump(CAS_num_strings,open('CAS_num_strings.p', 'wb'))
dump_meta = {'title': 'CAS.pkl', 'parents': [{'id':'1UEqIADV_tHic1Le0zlT25iYB7T6dBpBj'}]}
pkl_dump = drive.CreateFile(dump_meta)
pkl_dump.SetContentFile('CAS_num_strings.p')
pkl_dump.Upload()
print(pkl_dump.get('id'))
```
Where 'id':'1UEqIADV\_tHic1Le0zlT25iYB7T6dBpBj' makes sure that it has the specific parent folder given by this id. The last print command gives me the output:
```
'1ZgZfEaKgqGnuBD40CY8zg0MCiqKmi1vH'
```
Hence, I am able to create and dump the pickle file whose id is '1ZgZfEaKgqGnuBD40CY8zg0MCiqKmi1vH'. Now, I want to load this pickle file in another colab script for a different purpose. In order to load, I use the command set:
```
cas_strings = drive.CreateFile({'id':'1ZgZfEaKgqGnuBD40CY8zg0MCiqKmi1vH'})
print('title: %s, mimeType: %s' % (cas_strings['title'], cas_strings['mimeType']))
print('Downloaded content "{}"'.format(cas_strings.GetContentString()))
```
This gives me the output:
```
title: CAS.pkl, mimeType: text/x-pascal
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-9-a80d9de0fecf> in <module>()
30 cas_strings = drive.CreateFile({'id':'1ZgZfEaKgqGnuBD40CY8zg0MCiqKmi1vH'})
31 print('title: %s, mimeType: %s' % (cas_strings['title'], cas_strings['mimeType']))
---> 32 print('Downloaded content "{}"'.format(cas_strings.GetContentString()))
33
34
/usr/local/lib/python3.6/dist-packages/pydrive/files.py in GetContentString(self, mimetype, encoding, remove_bom)
192 self.has_bom == remove_bom:
193 self.FetchContent(mimetype, remove_bom)
--> 194 return self.content.getvalue().decode(encoding)
195
196 def GetContentFile(self, filename, mimetype=None, remove_bom=False):
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
```
As you can see, it finds the file CAS.pkl but cannot decode the data. I want to be able to resolve this error. I understand that the normal utf-8 encoding/decoding works smoothly during normal pickle dumping and loading with the 'wb' and 'rb' options. However in the present case, after dumping I can't seem to load it from the pickle file in google drive created in the previous step. The error exists somewhere in me not being able to specify how to decode the data at "return self.content.getvalue().decode(encoding)". I can't seem to find from here (<https://developers.google.com/drive/v2/reference/files#resource-representations>) which keywords/metadata tags to modify. Any help is appreciated. Thanks | 2018/03/07 | [
"https://Stackoverflow.com/questions/49145328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9407842/"
] | Apply the `click` event for `<tr>` and pass the current reference `this` to the calling function like `<tr onclick="callme(this)">`. From the javascript get the current row reference and find all the `td` inside that. Now get the values using `innerHTML` and assign it to the respective input fields("id\_type" , "event\_category" , "description"). Look at the following example.
```js
function callme(e)
{
    var tds = e.getElementsByTagName('td');
    document.getElementById("id_type").value = tds[0].innerHTML.trim();
    document.getElementById("event_category").value = tds[1].innerHTML.trim();
    document.getElementById("description").value = tds[2].innerHTML.trim();
}
```
```html
<table>
  <tr onclick="callme(this)">
    <td>test1</td>
    <td>something1</td>
    <td>content1</td>
  </tr>
  <tr onclick="callme(this)">
    <td>test2</td>
    <td>something2</td>
    <td>content2</td>
  </tr>
  <tr onclick="callme(this)">
    <td>test3</td>
    <td>something3</td>
    <td>content3</td>
  </tr>
</table>
<input type="text" id="id_type" />
<input type="text" id="event_category" />
<input type="text" id="description" />
```
**Note:** As per my comment, don't use the same `id` for all your `td` elements. You can try to use a `class` instead of an `id`. It does not affect this particular solution, but in the future it will give you wrong information, as in your current code. It is important that `id` values be unique. | According to the HTML spec, the `id` attribute should be unique in a page,
so if you have multiple elements with the same `id`, your HTML is not valid.
`getElementById()` should only ever return one element. You can't make it return multiple elements.
So you can use unique `id` for each row or try using `class` | 16,003 |
62,328,382 | I'm new to python and plotly.graph\_objects. I created some maps similar to the example found here: [United States Choropleth Map](https://plotly.com/python/choropleth-maps/#united-states-choropleth-map)
I'd like to combine the maps into one figure with a common color scale. I've looked at lots of examples of people using shared scales on subplots but they are using different graphing libraries. Is the functionality I want supported? If so, how is it done?
Here is the code I am using:
```
import plotly.graph_objects as go
import pandas as pd
df_shootings = pd.read_csv('https://raw.githubusercontent.com/washingtonpost/data-police-shootings/master/fatal-police-shootings-data.csv')
state_count = df_shootings.groupby(['state', 'race']).size().reset_index(name='total')
races = pd.DataFrame({'W': 'White, non-Hispanic',
                      'B': 'Black, non-Hispanic',
                      'A': 'Asian',
                      'N': 'Native American',
                      'H': 'Hispanic'}, index=[0])

for race in races:
    result = state_count[['state', 'total']][state_count.race == race]

    fig = go.Figure(data=go.Choropleth(
        locations=result.state,
        z = result.total,
        locationmode = 'USA-states', # set of locations match entries in `locations`
        marker_line_color='white',
        colorbar_title = "Shooting deaths",
    ))

    fig.update_layout(
        title_text = races[race][0],
        geo_scope='usa', # limit map scope to USA
    )
    fig.data[0].hovertemplate = 'State: %{location}<br>Shooting deaths: %{z:.2f}<extra></extra>'
    fig.show()
```
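One standard way to get the shared scale asked for here is to compute a single global value range across every group and pass it to each trace. This sketch (my addition, with made-up counts; plotly itself is not needed to show the idea) derives the common `zmin`/`zmax` that each `go.Choropleth` would receive:

```python
# Stand-in per-group totals (assumed numbers, replacing the real groupby result).
per_race_totals = {
    "W": [12, 45, 3],
    "B": [30, 7, 22],
    "H": [5, 19, 40],
}

# One global range across every group, so all maps share a single scale.
all_values = [v for totals in per_race_totals.values() for v in totals]
zmin, zmax = min(all_values), max(all_values)
print(zmin, zmax)  # 3 45

# Each go.Choropleth(...) would then be created with zmin=zmin, zmax=zmax,
# so identical counts map to identical colors on every map.
```

With a fixed `zmin`/`zmax` on every trace, the per-map colorbars all span the same range, which is what a common color scale amounts to.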
This is what I would like to get:
[](https://i.stack.imgur.com/Hfmo6.png)
Right now I get individual maps with their own color scale which is different for each map. | 2020/06/11 | [
"https://Stackoverflow.com/questions/62328382",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1373313/"
] | The solution is to not use the `$_COOKIE` array, but a variable
```php
<?php
// Use a variable
$cookieValue = 1;
// Check the cookie
if ((isset($_COOKIE["i"])) && !empty($_COOKIE["i"])) {
    $cookieValue = (int)$_COOKIE["i"] + 1;
}
// Push the cookie
setcookie("i", $cookieValue);
// Use the variable
echo $cookieValue;
``` | ```
else {
    setcookie("i", 1);
    header("Refresh:0");
}
``` | 16,004 |
57,464,273 | I have a dataframe with columns that contain GPS coordinates. I want to convert the columns that are in degree seconds to degree decimals. For example, I have 2 columns named "lat\_sec" and "long\_sec" that are formatted with values like 186780.8954N. I tried to write a function that saves the last character in the string as the direction, divides the number part of it to get the degree decimal, and then concatenates the two together to have the new format. I then tried to find the column by its name in the data frame and apply the function to it.
New to python and can't find other resources on this. I don't think I created my function properly. I have the word 'coordinate' in it because I did not know what to call the value that I am breaking down.
My data looks like this:
```
long_sec
635912.9277W
555057.2000W
581375.9850W
581166.2780W
```

```
df = pd.DataFrame(my_array)

def convertDec(coordinate):
    decimal = float(coordinate[:-1]/3600)
    direction = coordinate[-1:]
    return str(decimal) + str(direction)

df['lat_sec'] = df['lat_sec'].apply(lambda x: x.convertDec())
```

My error looks like this:

```
Traceback (most recent call last):
File "code.py", line 44, in <module>
df['lat_sec'] = df['lat_sec'].apply(lambda x: x.convertDec())
File "C:\Python\Python37\lib\site-packages\pandas\core\frame.py", line 2917, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Python\Python37\lib\site-packages\pandas\core\indexes\base.py", line 2604, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 129, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index_class_helper.pxi", line 91, in pandas._libs.index.Int64Engine._check_type
KeyError: 'lat_sec'
``` | 2019/08/12 | [
"https://Stackoverflow.com/questions/57464273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11771163/"
] | By doing `float(coordinate[:-1]/3600)` you are dividing a `str` by an `int`, which is not possible. What you can do is convert the `str` to a `float` first and then divide it by the integer `3600`, which gives you a `float` output.
Second, you are not using `apply` properly, and there is no `lat_sec` column to which you are applying your function
```
import pandas as pd
df = pd.DataFrame(['635912.9277W','555057.2000W','581375.9850W','581166.2780W'],columns=['long_sec'])
#function creation
def convertDec(coordinate):
    decimal = float(coordinate[:-1])/3600
    direction = coordinate[-1:]
    return str(decimal) + str(direction)
#if you just want to update the existing column
df['long_sec'] = df.apply(lambda row: convertDec(row['long_sec']), axis=1)
#if you want to create a new column, just change to the name that you want
df['lat_sec'] = df.apply(lambda row: convertDec(row['long_sec']), axis=1)
#OUTPUT
long_sec
0 176.64247991666667W
1 154.18255555555555W
2 161.49332916666665W
3 161.43507722222225W
```
If you don't want the output as a float but as an integer, just change `float(coordinate[:-1])/3600` to `int(float(coordinate[:-1])/3600)`. | In your code above, inside the `convertDec` method, there is also an error in:
```
decimal = float(coordinate[:-1]/3600)
```
You need to convert `coordinate` to a float first, before dividing it by 3600.
So, your code above should look like this:
```
import pandas as pd
# Your example dataset
dictCoordinates = {
    "long_sec" : ["111111.1111W", "222222.2222W", "333333.3333W", "444444.4444W"],
    "lat_sec" : ["555555.5555N", "666666.6666N", "777777.7777N", "888888.8888N"]
}

# Insert your dataset into Pandas DataFrame
df = pd.DataFrame(data = dictCoordinates)

# Your conversion method here
def convertDec(coordinate):
    decimal = float(coordinate[:-1]) / 3600 # Eliminate last character, then convert to float, then divide it by 3600
    decimal = format(decimal, ".4f") # To make sure the output has 4 digits after decimal point
    direction = coordinate[-1] # Extract direction (N or W) from content
    return str(decimal) + direction # Return your desired output
# Do the conversion for your "long_sec"
df["long_sec"] = df.apply(lambda x : convertDec(x["long_sec"]), axis = 1)
# Do the conversion for your "lat_sec"
df["lat_sec"] = df.apply(lambda x : convertDec(x["lat_sec"]), axis = 1)
print(df)
```
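To make the original bug concrete, here is a small added illustration (not part of the original answer): slicing a string still yields a string, so the conversion to `float` has to happen before the division:

```python
coordinate = "635912.9277W"

# Slicing a str still gives a str, and str / int raises a TypeError:
try:
    coordinate[:-1] / 3600
except TypeError as err:
    print(type(err).__name__)  # TypeError

# Convert to float first, then divide:
decimal = float(coordinate[:-1]) / 3600
print(round(decimal, 4))  # 176.6425
```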
That's it. Hope this helps. | 16,005 |
37,947,178 | I am using python and I have to write a program to create files of a total of 160 GB. I ran the program overnight and it was able to create files of 100 GB. However, after that it stopped running and gave an error saying "No space left on device".
QUESTION : I wanted to ask if it was possible to start running the program from where it stopped so I don't have to create those 100 GB files again. | 2016/06/21 | [
"https://Stackoverflow.com/questions/37947178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6465134/"
] | Steps to fix this error in Windows 10/8/7
1. Check your javac path on Windows using Windows Explorer, e.g. C:\Program Files\Java\jdk1.7.0\_02\bin, and copy the address.
2. Go to Control Panel → Environment Variables and insert the address at the beginning of the Path variable, followed by a semicolon, i.e. C:\Program Files\Java\jdk1.7.0\_02\bin; . Do not delete the existing Path; just click into it, go to the left end, and paste the address above. Do not try anything else: you just need to link your code to "javac.exe", so you just need to locate it.
3. Close your command prompt and reopen it, then write the code for compilation and execution.
[](https://i.stack.imgur.com/w7GDm.png) | You need to add the location of your JDK to your PATH variable, if you wish to call javac.exe without the path.
```
set PATH=%PATH%;C:\path\to\your\JDK\bin\dir
```
Then...
```
javac.exe MyFirstProgram.java
```
OR, you can simply call it via the full path to javac.exe from your JDK installation e.g.
```
C:\path\to\your\JDK\bin\javac.exe MyFirstProgram.java
``` | 16,006 |
74,188,813 | In practicing python, I've come across the sliding window technique but don't quite understand the implementation. Given a string k and integer N, the code is to loop through, thereby moving the window from left to right. However, the capture of the windowed elements as well as how the window grows is fuzzy to me.
These sliding window questions on Leetcode are similar but do not have the alphabetic aspect.
1. Fruits into baskets : <https://leetcode.com/problems/fruit-into-baskets/>
2. Longest substring without repeating characters : <https://leetcode.com/problems/longest-substring-without-repeating-characters/>
3. Longest substring after k replacements : <https://leetcode.com/problems/longest-repeating-character-replacement/>
4. Permutation in string: <https://leetcode.com/problems/permutation-in-string/>
5. String anagrams: <https://leetcode.com/problems/find-all-anagrams-in-a-string/>
6. Average of any contiguous subarray of size k : <https://leetcode.com/problems/maximum-average-subarray-i/>
7. Maximum sum of any contiguous subarray of size k : <https://leetcode.com/problems/maximum-subarray/>
8. Smallest subarray with a given sum : <https://leetcode.com/problems/minimum-size-subarray-sum/>
9. Longest substring with k distinct characters : <https://leetcode.com/problems/longest-substring-with-at-most-k-distinct-characters/>
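The fixed-size window mechanics the question asks about can be sketched directly on the stated example (an added illustration, not code from the post): the window start moves left to right, each window is captured as a slice, and a window is kept when its letters step up by exactly one.

```python
def increasing_runs(s, n):
    """Slide a window of size n over s; keep windows whose letters
    each increase by exactly one (e.g. 'cde', 'xyz')."""
    out = []
    for i in range(len(s) - n + 1):        # window start moves left to right
        window = s[i:i + n]                # capture the current n characters
        if all(ord(window[j + 1]) - ord(window[j]) == 1 for j in range(n - 1)):
            out.append(window)
    return out

print(increasing_runs('cdegoxyzcga', 3))  # ['cde', 'xyz']
```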
The most-occurring contiguous sub-string is here defined as three letters in an increasing sequence. For example, for an input string k of 'cdegoxyzcga' and length N of 3, the output would be [cde, xyz]. | 2022/10/25 | [
"https://Stackoverflow.com/questions/74188813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10047888/"
] | Instead of trying to solve `log_p(a) = b` directly, it might be easier to look at `p ** b <= a` and just solve this iteratively, taking advantage of Python's integer type. This way you can avoid the float domain, and its associated precision loss, entirely.
Here's a rough attempt:
```
def ilog(a: int, p: int) -> tuple[int, bool]:
"""
find the largest b such that p ** b <= a
return tuple of (b, exact)
"""
if p == 1:
return a, True
b = 0
x = 1
while x < a:
x *= p
b += 1
if x == a:
return b, True
else:
return b - 1, False
```
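As an added, self-contained sanity check (it restates the loop so the snippet runs on its own), the integer approach stays exact even at sizes where floating-point `log` starts to drift:

```python
def ilog_check(a, p):
    # Grow x = p ** b multiplicatively until it reaches or passes a.
    b, x = 0, 1
    while x < a:
        x *= p
        b += 1
    return (b, True) if x == a else (b - 1, False)

print(ilog_check(243, 3))       # (5, True)
print(ilog_check(244, 3))       # (5, False)
print(ilog_check(3 ** 351, 3))  # (351, True) -- exact, with no float rounding
```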
There are plenty of opportunities for optimization if this is too slow (consider Newton's method, binary search...) | You can use decimals and play with precision and rounding instead of floats in this case
Like this:
```
from decimal import Decimal, Context, ROUND_HALF_UP, ROUND_HALF_DOWN
ctx1 = Context(prec=20, rounding=ROUND_HALF_UP)
ctx2 = Context(prec=20, rounding=ROUND_HALF_DOWN)
ctx1.divide(Decimal(243).ln(ctx1), Decimal(3).ln(ctx2))
```
Output:
```
Decimal('5')
```
First, the rounding works like the epsilon: the numerator is rounded up and the denominator down, so you always get a slightly higher answer.
Second, you can adjust the precision you need.
However, fundamentally the problem is unsolvable. | 16,007 |