qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
---|---|---|---|---|---|
29,773,574 |
I am getting the following error when I perform classification of new data with the following command in Python:
```
classifier.predict(new_data)
```
AttributeError: python 'SVC' object has no attribute \_dual\_coef\_
On my laptop, though, the command works fine! What's wrong?
|
2015/04/21
|
[
"https://Stackoverflow.com/questions/29773574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2987488/"
] |
I had this exact error
`AttributeError: python 'SVC' object has no attribute _dual_coef_`
with a model trained using scikit-learn version 0.15.2, when I tried to run it in scikit-learn version 0.16.1. I did solve it by re-training the model in the latest scikit-learn 0.16.1.
Make sure you are loading the right version of the package.
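As a quick sanity check, here is a minimal sketch (the file name `model.pkl` is only a placeholder, and the exact warning text varies by version): print the running scikit-learn version and watch for the cross-version warning that is usually emitted when unpickling an estimator saved by a different version.
```
import pickle
import warnings
import sklearn

print("runtime scikit-learn:", sklearn.__version__)

# scikit-learn usually warns when unpickling an estimator saved by a different version.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    with open("model.pkl", "rb") as f:  # hypothetical file name
        clf = pickle.load(f)

for w in caught:
    print("while loading:", w.message)
```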
|
I had the same problem. I use sklearn version 0.23.02 but I was trying to run an archive trained with version 0.18..., and my error said: "'SVC' object has no attribute 'break\_ties'". I just retrained the model with my version, which fixed the problem; I generated another svc.pickle to run with the 0.23.02 version and replaced the old one.
|
29,773,574 |
I am getting the following error when I perform classification of new data with the following command in Python:
```
classifier.predict(new_data)
```
AttributeError: python 'SVC' object has no attribute \_dual\_coef\_
On my laptop, though, the command works fine! What's wrong?
|
2015/04/21
|
[
"https://Stackoverflow.com/questions/29773574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2987488/"
] |
I had the same problem. I use sklearn version 0.23.02 but I was trying to run an archive trained with version 0.18..., and my error said: "'SVC' object has no attribute 'break\_ties'". I just retrained the model with my version, which fixed the problem; I generated another svc.pickle to run with the 0.23.02 version and replaced the old one.
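For illustration, a minimal sketch of that retrain-and-replace step (the iris data stands in for whatever the model was originally fitted on):
```
import pickle
from sklearn.datasets import load_iris
from sklearn.svm import SVC

# Placeholder data; substitute the data the original model was trained on.
X_train, y_train = load_iris(return_X_y=True)

clf = SVC()
clf.fit(X_train, y_train)

# Overwrite the old pickle with one produced by the installed scikit-learn version.
with open("svc.pickle", "wb") as f:
    pickle.dump(clf, f)
```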
|
```
"""
X = X_train
y = y_train
"""
X = X_test
y = y_test
# Instantiate and train the classifier
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
# Check the results using metrics
from sklearn import metrics
y_pred = clf.predict(X)
print(metrics.confusion_matrix(y_pred, y))
```
|
54,587,206 |
I am using colorama to change the colour of my text and it works fine, but now I want to change the colour of the whole background instead of just my text.
I am using Windows:
```
import colorama
from colorama import Fore, Back, Style
colorama.init()
print(Back.GREEN)
print(Fore.RED)
```
But that code only colours the text. Is there a way to do this in Python? I want it like CMD, where the whole window can have a background colour. I cannot use the os module as I do not have admin rights, but I'm open to using any other module.
How do I solve this problem?
|
2019/02/08
|
[
"https://Stackoverflow.com/questions/54587206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10584721/"
] |
After playing with it for a while I figured it out; I just forgot about this post. Here is what I did.
```
import colorama
from colorama import Back as bg
colorama.init()
print(bg.RED)
print(colorama.ansi.clear_screen())
```
I think clearing the screen fixed the issue.
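Putting the pieces together, a minimal sketch based on the observation above that clearing the screen after setting `Back` repaints the whole window:
```
import colorama
from colorama import Back, Fore, Style

colorama.init()
print(Back.GREEN + Fore.RED)         # set background and text colour
print(colorama.ansi.clear_screen())  # repaint the whole window with the new background
print("Hello with a green background")
print(Style.RESET_ALL)               # restore the defaults when done
```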
|
After playing with colorama on my Windows 10 box, it seems it only changes the text colour, not the console/terminal background. I was, however, able to change the background using this standard-library solution:
```
import ctypes
try:
    ctypes.windll.msvcrt.system(b"color 4f")
    print("Hello World!")
except ValueError:
    pass
```
The terminal background will change to red with white text.
|
63,395,613 |
I'm trying to scrape web data using Python Selenium, but when I run the code it opens the browser yet doesn't pass the URL to it. Below is the sample code I'm using.
```
import requests
from selenium import webdriver
Chromepath = r"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"
url = "https://climate.weather.gc.ca/historical_data/search_historic_data_e.html"
driver = webdriver.Chrome(Chromepath)
driver.get(url)
```
Thanks in advance.
|
2020/08/13
|
[
"https://Stackoverflow.com/questions/63395613",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11946540/"
] |
The website is using an API that is visible if you look at the requests made by your web browser when you open <https://www.samtrygg.se/RentalObject/NewSearch>
API URL: <https://www.samtrygg.se/RentalObject/SearchResult?search=sverige&neLat=&neLng=&swLat=&swLng=>
You could just make a single request to the API URL with Scrapy to get all the listings.
It seems that the website doesn't have any actual "Pagination". It just loads all the data on the first request and then does some frontend manipulation to show a partial amount of the results depending on the "page" that the user is on.
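For illustration, a minimal sketch using the requests library rather than a full Scrapy spider (assuming the endpoint returns JSON and accepts a browser-like User-Agent, as the spider elsewhere in this thread does):
```
import requests

API_URL = ("https://www.samtrygg.se/RentalObject/SearchResult"
           "?search=sverige&neLat=&neLng=&swLat=&swLng=")

# A browser-like User-Agent, in case the site rejects the default one.
headers = {"User-Agent": "Mozilla/5.0"}

resp = requests.get(API_URL, headers=headers, timeout=30)
resp.raise_for_status()
data = resp.json()

# The listings appear to live under the "SearchResult" key (per the spider in this thread).
print(len(data.get("SearchResult", [])), "listings returned")
```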
|
I checked whether there's an API and I didn't find one.
So in that case, if you're using Selenium, you need to check whether the next-page button is available; if it is, click it and then append the page's HTML markup to a list.
Example:
```
responses = []
next_buttons = driver.find_elements_by_xpath("XPATH")
while len(next_buttons) > 0:
    next_buttons[0].click()
    responses.append(driver.page_source)
    # re-locate the next-page button on the newly loaded page
    next_buttons = driver.find_elements_by_xpath("XPATH")
```
Kind regards,
Ahmed
|
63,395,613 |
I'm trying to scrape web data using Python Selenium, but when I run the code it opens the browser yet doesn't pass the URL to it. Below is the sample code I'm using.
```
import requests
from selenium import webdriver
Chromepath = r"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"
url = "https://climate.weather.gc.ca/historical_data/search_historic_data_e.html"
driver = webdriver.Chrome(Chromepath)
driver.get(url)
```
Thanks in advance.
|
2020/08/13
|
[
"https://Stackoverflow.com/questions/63395613",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11946540/"
] |
The website is using an API that is visible if you look at the requests made by your web browser when you open <https://www.samtrygg.se/RentalObject/NewSearch>
API URL: <https://www.samtrygg.se/RentalObject/SearchResult?search=sverige&neLat=&neLng=&swLat=&swLng=>
You could just make a single request to the API URL with Scrapy to get all the listings.
It seems that the website doesn't have any actual "Pagination". It just loads all the data on the first request and then does some frontend manipulation to show a partial amount of the results depending on the "page" that the user is on.
|
```
import scrapy
import json
NIFTY_FIFTY = "https://www.samtrygg.se/RentalObject/SearchResult?search=sverige&neLat=&neLng=&swLat=&swLng="
class LiveSpider(scrapy.Spider):
    name = "esos_nortes"
    start_urls = [NIFTY_FIFTY]
    allowed_domains = ["www.samtrygg.se"]
    # Custom settings are needed to send the User-Agent.
    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'
    }

    def parse(self, response):
        json_response = json.loads(response.body.decode("utf-8"))
        # We want the full first 25 addresses, for example:
        for firsts_25 in range(25):
            print(json_response['SearchResult'][firsts_25]['FullAddress'])
```
The NIFTY\_FIFTY URL is obtained as AaronS explained, by observing your browser's network tools.
|
63,395,613 |
I'm trying to scrape web data using Python Selenium, but when I run the code it opens the browser yet doesn't pass the URL to it. Below is the sample code I'm using.
```
import requests
from selenium import webdriver
Chromepath = r"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"
url = "https://climate.weather.gc.ca/historical_data/search_historic_data_e.html"
driver = webdriver.Chrome(Chromepath)
driver.get(url)
```
Thanks in advance.
|
2020/08/13
|
[
"https://Stackoverflow.com/questions/63395613",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11946540/"
] |
Approaching Dynamic Content in Scrapy
--------------------------------------
What Ryan is saying is correct. To expand on this, dynamic content can be grabbed in a few ways.
1. By re-engineering HTTP requests
This is by far the best way to grab dynamic content if possible; it's the most efficient and less brittle than Selenium. It depends on whether the JavaScript is triggering an HTTP request to grab data for the webpage. In this case it is, and it should be tried first before resorting to other means.
2. Using Splash (Browser Activity)
Scrapy has a middleware which integrates splash. Splash pre-renders pages so enables access to javascript loaded HTML. It also has some browser activity functionality. Less labour intensive than selenium but still it's browser activity.
3. Using selenium\_scrapy (Browser Activity)
This is the solution you are trying here. The problem is that it doesn't really give a lot of options for doing complex browser activity, so its real purpose is just grabbing HTML that has been loaded by JavaScript.
4. Using selenium in the middlewares (Browser Activity)
You can use middlewares to filter the requests, using the full selenium package. This is okay for when there is no alternative and you want something for every request. Or you want to customise it based on the type of request you're making
5. Using selenium straight in the spider script. (Browser Activity)
This is the last resort in my opinion when all other options are not available and you really need complex browser activity for specific parts of your script and can't blanket do it using requests.
Re-engineering the requests
----------------------------
So now you have a basic understanding of what it is. Your browser (I prefer chrome) has access to all the requests the browser makes to display the site you see. If you inspect the page --> network tools --> XHR you will see all the AJAX requests (typically where the API endpoints live).
[](https://i.stack.imgur.com/47kTw.png)
You can see all the requests; sorting by size tends to work, as typically the data will be in a larger request. When you click a request, you get access to the headers it sends, a preview of the response, and the response itself.
[](https://i.stack.imgur.com/LkI7M.png)
So here we have the preview of the data you probably want. I will then copy this request in the form of cURL and input it into a website like curl.trillworks.com.
[](https://i.stack.imgur.com/gLpRf.png)
This gives you the headers, parameters and cookies, if necessary, to make the correct Scrapy Request. In this case, you actually only need one of the parameters to mimic the HTTP request. I tend to use the requests package to play about with what I actually need, as copying the request gives you everything in the request, some of which you won't need.
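As a hedged sketch of that last step (the URL, header names and parameters below are placeholders for whatever the cURL-to-requests converter gives you for your own request):
```
import requests

url = "https://example.com/api/endpoint"    # placeholder endpoint
headers = {
    "User-Agent": "Mozilla/5.0",
    # "X-Requested-With": "XMLHttpRequest",  # often sent with AJAX calls, often optional
}
params = {
    "search": "sverige",                     # example parameter from this thread's API URL
}

# Paste in everything the converter emits, then delete entries one by one
# to find the minimal set the endpoint actually needs.
resp = requests.get(url, headers=headers, params=params, timeout=30)
resp.raise_for_status()
print(resp.json())
```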
|
I checked whether there's an API and I didn't find one.
So in that case, if you're using Selenium, you need to check whether the next-page button is available; if it is, click it and then append the page's HTML markup to a list.
Example:
```
responses = []
next_buttons = driver.find_elements_by_xpath("XPATH")
while len(next_buttons) > 0:
    next_buttons[0].click()
    responses.append(driver.page_source)
    # re-locate the next-page button on the newly loaded page
    next_buttons = driver.find_elements_by_xpath("XPATH")
```
Kind regards,
Ahmed
|
63,395,613 |
I'm trying to scrape web data using Python Selenium, but when I run the code it opens the browser yet doesn't pass the URL to it. Below is the sample code I'm using.
```
import requests
from selenium import webdriver
Chromepath = r"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"
url = "https://climate.weather.gc.ca/historical_data/search_historic_data_e.html"
driver = webdriver.Chrome(Chromepath)
driver.get(url)
```
Thanks in advance.
|
2020/08/13
|
[
"https://Stackoverflow.com/questions/63395613",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11946540/"
] |
Approaching Dynamic Content in Scrapy
--------------------------------------
What Ryan is saying is correct. To expand on this, dynamic content can be grabbed in a few ways.
1. By re-engineering HTTP requests
This is by far the best way to grab dynamic content if possible; it's the most efficient and less brittle than Selenium. It depends on whether the JavaScript is triggering an HTTP request to grab data for the webpage. In this case it is, and it should be tried first before resorting to other means.
2. Using Splash (Browser Activity)
Scrapy has a middleware which integrates splash. Splash pre-renders pages so enables access to javascript loaded HTML. It also has some browser activity functionality. Less labour intensive than selenium but still it's browser activity.
3. Using selenium\_scrapy (Browser Activity)
This is the solution you are trying here. The problem is that it doesn't really give a lot of options for doing complex browser activity, so its real purpose is just grabbing HTML that has been loaded by JavaScript.
4. Using selenium in the middlewares (Browser Activity)
You can use middlewares to filter the requests, using the full selenium package. This is okay for when there is no alternative and you want something for every request. Or you want to customise it based on the type of request you're making
5. Using selenium straight in the spider script. (Browser Activity)
This is the last resort in my opinion when all other options are not available and you really need complex browser activity for specific parts of your script and can't blanket do it using requests.
Re-engineering the requests
----------------------------
So now you have a basic understanding of what it is. Your browser (I prefer chrome) has access to all the requests the browser makes to display the site you see. If you inspect the page --> network tools --> XHR you will see all the AJAX requests (typically where the API endpoints live).
[](https://i.stack.imgur.com/47kTw.png)
You can see all the requests; sorting by size tends to work, as typically the data will be in a larger request. When you click a request, you get access to the headers it sends, a preview of the response, and the response itself.
[](https://i.stack.imgur.com/LkI7M.png)
So here we have the preview of the data you probably want. I will then copy this request in the form of cURL and input it into a website like curl.trillworks.com.
[](https://i.stack.imgur.com/gLpRf.png)
This gives you the headers, parameters and cookies, if necessary, to make the correct Scrapy Request. In this case, you actually only need one of the parameters to mimic the HTTP request. I tend to use the requests package to play about with what I actually need, as copying the request gives you everything in the request, some of which you won't need.
|
```
import scrapy
import json
NIFTY_FIFTY = "https://www.samtrygg.se/RentalObject/SearchResult?search=sverige&neLat=&neLng=&swLat=&swLng="
class LiveSpider(scrapy.Spider):
    name = "esos_nortes"
    start_urls = [NIFTY_FIFTY]
    allowed_domains = ["www.samtrygg.se"]
    # Custom settings are needed to send the User-Agent.
    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'
    }

    def parse(self, response):
        json_response = json.loads(response.body.decode("utf-8"))
        # We want the full first 25 addresses, for example:
        for firsts_25 in range(25):
            print(json_response['SearchResult'][firsts_25]['FullAddress'])
```
The NIFTY\_FIFTY URL is obtained as AaronS explained, by observing your browser's network tools.
|
70,356,876 |
I'm trying to use multiprocessing for a function that can potentially hit a segfault (I have no control over this ATM). In cases where the child process hits a segfault, I want only that child to fail, but all other child tasks to continue/return their results.
I've already switched from `multiprocessing.Pool` to `concurrent.futures.ProcessPoolExecutor` to avoid the issue of the child process hanging forever (or until an arbitrary timeout) as documented in this bug: <https://bugs.python.org/issue22393>.
However, the issue I face now is that when the first child task hits a segfault, all in-flight child processes get marked as broken (`concurrent.futures.process.BrokenProcessPool`).
Is there a way to only mark actually broken child processes as broken?
Code I'm running in `Python 3.7.4`:
```
import concurrent.futures
import ctypes
from time import sleep
def do_something(x):
    print(f"{x}; in do_something")
    sleep(x*3)
    if x == 2:
        # raise a segmentation fault internally
        return x, ctypes.string_at(0)
    return x, x-1

nums = [1, 2, 3, 1.5]
executor = concurrent.futures.ProcessPoolExecutor()
result_futures = []
for num in nums:
    # Using submit with a list instead of map lets you get past the first exception
    # Example: https://stackoverflow.com/a/53346191/7619676
    future = executor.submit(do_something, num)
    result_futures.append(future)

# Wait for all results
concurrent.futures.wait(result_futures)

# After a segfault is hit for any child process (i.e. is "terminated abruptly"), the process pool becomes unusable
# and all running/pending child processes' results are set to broken
for future in result_futures:
    try:
        print(future.result())
    except concurrent.futures.process.BrokenProcessPool:
        print("broken")
```
Result:
```
(1, 0)
broken
broken
(1.5, 0.5)
```
Desired result:
```
(1, 0)
broken
(3, 2)
(1.5, 0.5)
```
|
2021/12/14
|
[
"https://Stackoverflow.com/questions/70356876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7619676/"
] |
`multiprocessing.Pool` and `concurrent.futures.ProcessPoolExecutor` both make assumptions about how to handle the concurrency of the interactions between the workers and the main process that are violated if any one process is killed or segfaults, so they do the safe thing and mark the whole pool as broken. To get around this, you will need to build up your own pool with different assumptions directly using `multiprocessing.Process` instances.
This might sound intimidating but a `list` and a `multiprocessing.Manager` will get you pretty far:
```py
import multiprocessing
import ctypes
import queue
from time import sleep
def do_something(job, result):
    while True:
        x = job.get()
        print(f"{x}; in do_something")
        sleep(x*3)
        if x == 2:
            # raise a segmentation fault internally
            return x, ctypes.string_at(0)
        result.put((x, x-1))

nums = [1, 2, 3, 1.5]

if __name__ == "__main__":
    # you ARE using the spawn context, right?
    ctx = multiprocessing.get_context("spawn")
    manager = ctx.Manager()
    job_queue = manager.Queue(maxsize=-1)
    result_queue = manager.Queue(maxsize=-1)
    pool = [
        ctx.Process(target=do_something, args=(job_queue, result_queue), daemon=True)
        for _ in range(multiprocessing.cpu_count())
    ]
    for proc in pool:
        proc.start()
    for num in nums:
        job_queue.put(num)
    try:
        while True:
            # Timeout is our only signal that no more results coming
            print(result_queue.get(timeout=10))
    except queue.Empty:
        print("Done!")
    print(pool)  # will see one dead Process
    for proc in pool:
        proc.kill()  # avoid stderr spam
```
This "Pool" is a little inflexible, and you will probably want to customize it for your application's specific needs. But you can definitely skip right over segfaulting workers.
When I went down this rabbit hole, where I was interested in cancelling specific submissions to a worker pool, I eventually wound up writing a whole library to integrate into Trio async apps: [trio-parallel](https://trio-parallel.readthedocs.org). Hopefully you won't need to go that far!
|
Based on @Richard Sheridan's answer, I ended up using the code below. This version doesn't require setting a timeout, which is something I couldn't do for my use case.
```
import ctypes
import multiprocessing
from typing import Dict
from time import sleep

def do_something(x, result):
    print(f"{x} starting")
    sleep(x * 3)
    if x == 2:
        # raise a segmentation fault internally
        y = ctypes.string_at(0)
    y = x
    print(f"{x} done")
    result.put(y)

def wait_for_process_slot(
    processes: Dict,
    concurrency: int = multiprocessing.cpu_count() - 1,
    wait_sec: int = 1,
) -> int:
    """Blocks main process if `concurrency` processes are already running.

    Alternative to `multiprocessing.Semaphore.acquire`
    useful for when child processes might fail and not be able to signal.
    Relies instead on the main's (parent's) tracking of `multiprocessing.Process`es.
    """
    counter = 0
    while True:
        counter = sum([1 for i, p in processes.items() if p.is_alive()])
        if counter < concurrency:
            return counter
        sleep(wait_sec)

if __name__ == "__main__":
    # "spawn" results in an OSError b/c pickling a segfault fails?
    ctx = multiprocessing.get_context()
    manager = ctx.Manager()
    results_queue = manager.Queue(maxsize=-1)

    concurrency = multiprocessing.cpu_count() - 1  # reserve 1 CPU for waiting
    nums = [3, 1, 2, 1.5]
    all_processes = {}
    for idx, num in enumerate(nums):
        num_running_processes = wait_for_process_slot(all_processes, concurrency)
        p = ctx.Process(target=do_something, args=(num, results_queue), daemon=True)
        all_processes.update({idx: p})
        p.start()

    # Wait for the last batch of processes not blocked by wait_for_process_slot to finish
    for p in all_processes.values():
        p.join()

    # Check last batch of processes for bad processes
    # Relies on all processes having finished (the p.joins above)
    bad_nums = [idx for idx, p in all_processes.items() if p.exitcode != 0]
```
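As a small hedged follow-up, to be appended at the end of the `if __name__ == "__main__":` block above (indented accordingly): drain the results queue and map the failed exit codes back to their inputs.
```
import queue

# Collect whatever the surviving workers managed to put on the queue.
results = []
while True:
    try:
        results.append(results_queue.get_nowait())
    except queue.Empty:
        break

print("successful results:", results)
# bad_nums holds indices into `nums`, so map them back to the inputs that crashed.
print("inputs that crashed:", [nums[i] for i in bad_nums])
```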
|
58,902,486 |
I need to check if some binary numbers are divisible by 2 or 8 and count how many there are. I know that a binary number is divisible by 2 when the last digit is 0 and by 8 when the last 3 digits are 0, so this is my approach:
```
twos = 0
eights = 0
file = 'numbers.txt'
with open(file) as fin:
    for line in fin:
        if line[-2:] == '0':
            twos += 1
        elif line[-3:] == '000':
            eights += 1
print(twos)
print(eights)
```
To be honest I don't know why this isn't working; my bet is it's because of different data types, but I'm new to Python and can't figure out where the error is.
Sample for numbers.txt:
<http://collabedit.com/5u4xa>
|
2019/11/17
|
[
"https://Stackoverflow.com/questions/58902486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11868577/"
] |
* The last "digit" is `line[-1]` (or `line[-1:]`), not `line[-2:]` (according to the same logic by which `line[-3:]` is the last 3 "digits", not the last 2).
* Your code does not attempt to deal with newline characters at all.
* Your algorithm will miss numbers that are divisible by both 2 and 8 because of the `elif`.
```
if line[-2:] == '0':
    twos += 1
elif line[-3:] == '000':
    eights += 1
```
should become:
```
line = line.strip()
if line[-1] == '0':
    twos += 1
else:
    continue  # a micro optimization.
    # If it does not end with '0', obviously it can't end with '000'
if line[-3:] == '000':
    eights += 1
```
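Putting it together, a minimal self-contained sketch of the corrected loop (assuming numbers.txt holds one binary number per line, as in the question):
```
twos = 0
eights = 0

with open('numbers.txt') as fin:
    for line in fin:
        line = line.strip()          # drop the trailing newline
        if not line:
            continue                 # skip blank lines
        if line[-1] == '0':          # divisible by 2: last digit is 0
            twos += 1
            if line[-3:] == '000':   # divisible by 8: last three digits are 000
                eights += 1

print(twos)
print(eights)
```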
|
As you said, a binary number is divisible by 2 when the last digit is 0, therefore:
```
line = line.strip()  # strip() returns a copy of the string with both leading and trailing characters removed
if line[-1] == '0':
    twos += 1
```
|
56,222,981 |
I have installed Python at `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32` location and the version is Python 3.7.3.
To install pip, I downloaded `pip-19.1.1-py2.py3-none-any.whl (1.4 MB)` from pypi.org website and placed it in `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`. In the command prompt I entered
`cd C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`
`python pip-19.1.1-py2.py3-none-any.whl`
I am getting an error as below: `can't find '__main__' module in 'pip-19.1.1-py2.py3-none-any.whl'`
any help on how this error can be resolved?
TIA!
|
2019/05/20
|
[
"https://Stackoverflow.com/questions/56222981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9773920/"
] |
Hover over the `StatelessWidget` class and use
### Android Studio:
* **Mac**: `option` + `enter`
* **Windows**: `alt` + `enter`
### Visual Studio Code:
* **Mac**: `cmd` + `.`
* **Windows**: `ctrl` + `.`
[](https://i.stack.imgur.com/Fwgk2.png)
---
Output (Your solution):
[](https://i.stack.imgur.com/vcpsk.gif)
Here is the working code.
```
class DetailPage extends StatefulWidget {
final Activity activity;
DetailPage({Key key, this.activity}) : super(key: key);
@override
_DetailPageState createState() => _DetailPageState();
}
class _DetailPageState extends State<DetailPage> {
bool _tracking = false, _mapped = false; // you need this
String _schedule;
@override
Widget build(BuildContext context) {
final levelIndicator = Container(
child: Container(
child: LinearProgressIndicator(backgroundColor: Color.fromRGBO(209, 224, 224, 0.2), value: 2.0, valueColor: AlwaysStoppedAnimation(Colors.green)),
),
);
final topContentText = Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
SizedBox(height: 120.0),
Container(
width: 90.0,
child: Divider(color: Colors.green),
),
SizedBox(height: 10.0),
Text(
widget.activity.activityName,
style: TextStyle(color: Colors.white, fontSize: 45.0),
),
SizedBox(height: 30.0),
Row(
mainAxisAlignment: MainAxisAlignment.start,
children: <Widget>[
Expanded(
flex: 6,
child: Padding(
padding: EdgeInsets.only(left: 10.0),
child: Text(
"Last Run: 3-2-19\n" + "Last Avg Strain: 34%\n" + "Last Run Time: 00:45:23",
style: TextStyle(color: Colors.white),
))),
// Expanded(flex: 1, child: newRow)
],
),
],
);
final topContent = Stack(
children: <Widget>[
Container(
height: MediaQuery.of(context).size.height * 0.45,
padding: EdgeInsets.all(40.0),
width: MediaQuery.of(context).size.width,
decoration: BoxDecoration(
gradient: LinearGradient(
begin: Alignment.centerLeft,
end: Alignment.centerRight,
colors: [Color.fromRGBO(33, 147, 176, 100), Color.fromRGBO(109, 213, 237, 100)],
),
),
child: Center(
child: topContentText,
),
),
Positioned(
left: 235.0,
top: 180.0,
child: InkWell(
onTap: () {
Navigator.pop(context);
},
child: CircleAvatar(
radius: 80.0,
backgroundColor: Colors.white,
),
),
),
Positioned(
left: 8.0,
top: 60.0,
child: InkWell(
onTap: () {
Navigator.pop(context);
},
child: Icon(Icons.arrow_back, color: Colors.white),
),
)
],
);
final bottomContentText = Text(
"Config:",
style: TextStyle(fontSize: 18.0),
);
final mappedCheckbox = CheckboxListTile(
title: Text("Mapped"),
value: _mapped,
onChanged: (newValue) => setState(() => _mapped = newValue),
controlAffinity: ListTileControlAffinity.leading, // <-- leading Checkbox
);
final rtCheckBox = CheckboxListTile(
title: Text("Real-time Tracking"),
value: _tracking,
onChanged: (newValue) => setState(() => _tracking = newValue),
controlAffinity: ListTileControlAffinity.leading, // <-- leading Checkbox
);
final descriptionText = Text(
"Description:",
style: TextStyle(fontSize: 12.0),
);
final description = TextFormField(
decoration: InputDecoration(
hintText: 'Enter an activity description',
),
);
final scheduledFor = Text(
"Scheduled for:",
style: TextStyle(fontSize: 12.0),
);
final dropdown = DropdownButton<String>(
value: _schedule,
items: <String>['Now (Default)', 'B', 'C', 'D'].map((String value) {
return DropdownMenuItem<String>(
value: value,
child: Text(value),
);
}).toList(),
hint: Text("Now (Default)"),
onChanged: (newValue) {
setState(() {
_schedule = newValue;
});
},
);
final readButton = Container(
padding: EdgeInsets.symmetric(vertical: 16.0),
width: 170, //MediaQuery.of(context).size.width,
child: RaisedButton(
onPressed: () => {},
color: Colors.lightBlue,
child: Text("Start", style: TextStyle(color: Colors.white, fontSize: 20)),
));
final bottomContent = Container(
width: MediaQuery.of(context).size.width,
padding: EdgeInsets.all(40.0),
child: Center(
child: Column(
children: <Widget>[bottomContentText, mappedCheckbox, rtCheckBox, descriptionText, description, Text("\n"), scheduledFor, dropdown, readButton],
),
),
);
return Scaffold(
body: Column(
children: <Widget>[topContent, bottomContent],
),
);
}
}
```
|
You could use IntelliJ's or VS Code's shortcut: hit Alt + Enter, or select the bulb icon, while your cursor is on the name of the stateless widget, then select "Convert to Stateful Widget".
|
56,222,981 |
I have installed Python at `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32` location and the version is Python 3.7.3.
To install pip, I downloaded `pip-19.1.1-py2.py3-none-any.whl (1.4 MB)` from pypi.org website and placed it in `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`. In the command prompt I entered
`cd C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`
`python pip-19.1.1-py2.py3-none-any.whl`
I am getting an error as below: `can't find '__main__' module in 'pip-19.1.1-py2.py3-none-any.whl'`
any help on how this error can be resolved?
TIA!
|
2019/05/20
|
[
"https://Stackoverflow.com/questions/56222981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9773920/"
] |
For VS Code (Visual Studio Code), press Ctrl + '.' while the cursor is on the stateless widget to convert it to a stateful widget.
|
You could use IntelliJ's or VS Code's shortcut: hit Alt + Enter, or select the bulb icon, while your cursor is on the name of the stateless widget, then select "Convert to Stateful Widget".
|
56,222,981 |
I have installed Python at `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32` location and the version is Python 3.7.3.
To install pip, I downloaded `pip-19.1.1-py2.py3-none-any.whl (1.4 MB)` from pypi.org website and placed it in `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`. In the command prompt I entered
`cd C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`
`python pip-19.1.1-py2.py3-none-any.whl`
I am getting an error as below: `can't find '__main__' module in 'pip-19.1.1-py2.py3-none-any.whl'`
any help on how this error can be resolved?
TIA!
|
2019/05/20
|
[
"https://Stackoverflow.com/questions/56222981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9773920/"
] |
You could use IntelliJ's or VS Code's shortcut: hit Alt + Enter, or select the bulb icon, while your cursor is on the name of the stateless widget, then select "Convert to Stateful Widget".
|
Adding a solution for Android Studio since I didn't find one here.
1. Place the marker on the Stateless widget name:
[](https://i.stack.imgur.com/5FITf.png)
2. Hit Alt+Enter
3. Select Convert to Stateful widget
4. Configure your Stateful widget based on your requirements
|
56,222,981 |
I have installed Python at `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32` location and the version is Python 3.7.3.
To install pip, I downloaded `pip-19.1.1-py2.py3-none-any.whl (1.4 MB)` from pypi.org website and placed it in `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`. In the command prompt I entered
`cd C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`
`python pip-19.1.1-py2.py3-none-any.whl`
I am getting an error as below: `can't find '__main__' module in 'pip-19.1.1-py2.py3-none-any.whl'`
any help on how this error can be resolved?
TIA!
|
2019/05/20
|
[
"https://Stackoverflow.com/questions/56222981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9773920/"
] |
Hover over the `StatelessWidget` class and use
### Android Studio:
* **Mac**: `option` + `enter`
* **Windows**: `alt` + `enter`
### Visual Studio Code:
* **Mac**: `cmd` + `.`
* **Windows**: `ctrl` + `.`
[](https://i.stack.imgur.com/Fwgk2.png)
---
Output (Your solution):
[](https://i.stack.imgur.com/vcpsk.gif)
Here is the working code.
```
class DetailPage extends StatefulWidget {
final Activity activity;
DetailPage({Key key, this.activity}) : super(key: key);
@override
_DetailPageState createState() => _DetailPageState();
}
class _DetailPageState extends State<DetailPage> {
bool _tracking = false, _mapped = false; // you need this
String _schedule;
@override
Widget build(BuildContext context) {
final levelIndicator = Container(
child: Container(
child: LinearProgressIndicator(backgroundColor: Color.fromRGBO(209, 224, 224, 0.2), value: 2.0, valueColor: AlwaysStoppedAnimation(Colors.green)),
),
);
final topContentText = Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
SizedBox(height: 120.0),
Container(
width: 90.0,
child: Divider(color: Colors.green),
),
SizedBox(height: 10.0),
Text(
widget.activity.activityName,
style: TextStyle(color: Colors.white, fontSize: 45.0),
),
SizedBox(height: 30.0),
Row(
mainAxisAlignment: MainAxisAlignment.start,
children: <Widget>[
Expanded(
flex: 6,
child: Padding(
padding: EdgeInsets.only(left: 10.0),
child: Text(
"Last Run: 3-2-19\n" + "Last Avg Strain: 34%\n" + "Last Run Time: 00:45:23",
style: TextStyle(color: Colors.white),
))),
// Expanded(flex: 1, child: newRow)
],
),
],
);
final topContent = Stack(
children: <Widget>[
Container(
height: MediaQuery.of(context).size.height * 0.45,
padding: EdgeInsets.all(40.0),
width: MediaQuery.of(context).size.width,
decoration: BoxDecoration(
gradient: LinearGradient(
begin: Alignment.centerLeft,
end: Alignment.centerRight,
colors: [Color.fromRGBO(33, 147, 176, 100), Color.fromRGBO(109, 213, 237, 100)],
),
),
child: Center(
child: topContentText,
),
),
Positioned(
left: 235.0,
top: 180.0,
child: InkWell(
onTap: () {
Navigator.pop(context);
},
child: CircleAvatar(
radius: 80.0,
backgroundColor: Colors.white,
),
),
),
Positioned(
left: 8.0,
top: 60.0,
child: InkWell(
onTap: () {
Navigator.pop(context);
},
child: Icon(Icons.arrow_back, color: Colors.white),
),
)
],
);
final bottomContentText = Text(
"Config:",
style: TextStyle(fontSize: 18.0),
);
final mappedCheckbox = CheckboxListTile(
title: Text("Mapped"),
value: _mapped,
onChanged: (newValue) => setState(() => _mapped = newValue),
controlAffinity: ListTileControlAffinity.leading, // <-- leading Checkbox
);
final rtCheckBox = CheckboxListTile(
title: Text("Real-time Tracking"),
value: _tracking,
onChanged: (newValue) => setState(() => _tracking = newValue),
controlAffinity: ListTileControlAffinity.leading, // <-- leading Checkbox
);
final descriptionText = Text(
"Description:",
style: TextStyle(fontSize: 12.0),
);
final description = TextFormField(
decoration: InputDecoration(
hintText: 'Enter an activity description',
),
);
final scheduledFor = Text(
"Scheduled for:",
style: TextStyle(fontSize: 12.0),
);
final dropdown = DropdownButton<String>(
value: _schedule,
items: <String>['Now (Default)', 'B', 'C', 'D'].map((String value) {
return DropdownMenuItem<String>(
value: value,
child: Text(value),
);
}).toList(),
hint: Text("Now (Default)"),
onChanged: (newValue) {
setState(() {
_schedule = newValue;
});
},
);
final readButton = Container(
padding: EdgeInsets.symmetric(vertical: 16.0),
width: 170, //MediaQuery.of(context).size.width,
child: RaisedButton(
onPressed: () => {},
color: Colors.lightBlue,
child: Text("Start", style: TextStyle(color: Colors.white, fontSize: 20)),
));
final bottomContent = Container(
width: MediaQuery.of(context).size.width,
padding: EdgeInsets.all(40.0),
child: Center(
child: Column(
children: <Widget>[bottomContentText, mappedCheckbox, rtCheckBox, descriptionText, description, Text("\n"), scheduledFor, dropdown, readButton],
),
),
);
return Scaffold(
body: Column(
children: <Widget>[topContent, bottomContent],
),
);
}
}
```
|
For VS Code (Visual Studio Code), press Ctrl + '.' while the cursor is on the stateless widget to convert it to a stateful widget.
|
56,222,981 |
I have installed Python at `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32` location and the version is Python 3.7.3.
To install pip, I downloaded `pip-19.1.1-py2.py3-none-any.whl (1.4 MB)` from pypi.org website and placed it in `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`. In the command prompt I entered
`cd C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`
`python pip-19.1.1-py2.py3-none-any.whl`
I am getting an error as below: `can't find '__main__' module in 'pip-19.1.1-py2.py3-none-any.whl'`
any help on how this error can be resolved?
TIA!
|
2019/05/20
|
[
"https://Stackoverflow.com/questions/56222981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9773920/"
] |
Hover over the `StatelessWidget` class and use
### Android Studio:
* **Mac**: `option` + `enter`
* **Windows**: `alt` + `enter`
### Visual Studio Code:
* **Mac**: `cmd` + `.`
* **Windows**: `ctrl` + `.`
[](https://i.stack.imgur.com/Fwgk2.png)
---
Output (Your solution):
[](https://i.stack.imgur.com/vcpsk.gif)
Here is the working code.
```
class DetailPage extends StatefulWidget {
final Activity activity;
DetailPage({Key key, this.activity}) : super(key: key);
@override
_DetailPageState createState() => _DetailPageState();
}
class _DetailPageState extends State<DetailPage> {
bool _tracking = false, _mapped = false; // you need this
String _schedule;
@override
Widget build(BuildContext context) {
final levelIndicator = Container(
child: Container(
child: LinearProgressIndicator(backgroundColor: Color.fromRGBO(209, 224, 224, 0.2), value: 2.0, valueColor: AlwaysStoppedAnimation(Colors.green)),
),
);
final topContentText = Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
SizedBox(height: 120.0),
Container(
width: 90.0,
child: Divider(color: Colors.green),
),
SizedBox(height: 10.0),
Text(
widget.activity.activityName,
style: TextStyle(color: Colors.white, fontSize: 45.0),
),
SizedBox(height: 30.0),
Row(
mainAxisAlignment: MainAxisAlignment.start,
children: <Widget>[
Expanded(
flex: 6,
child: Padding(
padding: EdgeInsets.only(left: 10.0),
child: Text(
"Last Run: 3-2-19\n" + "Last Avg Strain: 34%\n" + "Last Run Time: 00:45:23",
style: TextStyle(color: Colors.white),
))),
// Expanded(flex: 1, child: newRow)
],
),
],
);
final topContent = Stack(
children: <Widget>[
Container(
height: MediaQuery.of(context).size.height * 0.45,
padding: EdgeInsets.all(40.0),
width: MediaQuery.of(context).size.width,
decoration: BoxDecoration(
gradient: LinearGradient(
begin: Alignment.centerLeft,
end: Alignment.centerRight,
colors: [Color.fromRGBO(33, 147, 176, 100), Color.fromRGBO(109, 213, 237, 100)],
),
),
child: Center(
child: topContentText,
),
),
Positioned(
left: 235.0,
top: 180.0,
child: InkWell(
onTap: () {
Navigator.pop(context);
},
child: CircleAvatar(
radius: 80.0,
backgroundColor: Colors.white,
),
),
),
Positioned(
left: 8.0,
top: 60.0,
child: InkWell(
onTap: () {
Navigator.pop(context);
},
child: Icon(Icons.arrow_back, color: Colors.white),
),
)
],
);
final bottomContentText = Text(
"Config:",
style: TextStyle(fontSize: 18.0),
);
final mappedCheckbox = CheckboxListTile(
title: Text("Mapped"),
value: _mapped,
onChanged: (newValue) => setState(() => _mapped = newValue),
controlAffinity: ListTileControlAffinity.leading, // <-- leading Checkbox
);
final rtCheckBox = CheckboxListTile(
title: Text("Real-time Tracking"),
value: _tracking,
onChanged: (newValue) => setState(() => _tracking = newValue),
controlAffinity: ListTileControlAffinity.leading, // <-- leading Checkbox
);
final descriptionText = Text(
"Description:",
style: TextStyle(fontSize: 12.0),
);
final description = TextFormField(
decoration: InputDecoration(
hintText: 'Enter an activity description',
),
);
final scheduledFor = Text(
"Scheduled for:",
style: TextStyle(fontSize: 12.0),
);
final dropdown = DropdownButton<String>(
value: _schedule,
items: <String>['Now (Default)', 'B', 'C', 'D'].map((String value) {
return DropdownMenuItem<String>(
value: value,
child: Text(value),
);
}).toList(),
hint: Text("Now (Default)"),
onChanged: (newValue) {
setState(() {
_schedule = newValue;
});
},
);
final readButton = Container(
padding: EdgeInsets.symmetric(vertical: 16.0),
width: 170, //MediaQuery.of(context).size.width,
child: RaisedButton(
onPressed: () => {},
color: Colors.lightBlue,
child: Text("Start", style: TextStyle(color: Colors.white, fontSize: 20)),
));
final bottomContent = Container(
width: MediaQuery.of(context).size.width,
padding: EdgeInsets.all(40.0),
child: Center(
child: Column(
children: <Widget>[bottomContentText, mappedCheckbox, rtCheckBox, descriptionText, description, Text("\n"), scheduledFor, dropdown, readButton],
),
),
);
return Scaffold(
body: Column(
children: <Widget>[topContent, bottomContent],
),
);
}
}
```
|
Adding a solution for Android Studio since I didn't find one here.
1. Place the marker on the Stateless widget name:
[](https://i.stack.imgur.com/5FITf.png)
2. Hit Alt+Enter
3. Select Convert to Stateful widget
4. Configure your Stateful widget based on your requirements
|
56,222,981 |
I have installed Python at `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32` location and the version is Python 3.7.3.
To install pip, I downloaded `pip-19.1.1-py2.py3-none-any.whl (1.4 MB)` from pypi.org website and placed it in `C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`. In the command prompt I entered
`cd C:\Users\xxxx\AppData\Local\Programs\Python\Python37-32`
`python pip-19.1.1-py2.py3-none-any.whl`
I am getting an error as below: `can't find '__main__' module in 'pip-19.1.1-py2.py3-none-any.whl'`
any help on how this error can be resolved?
TIA!
|
2019/05/20
|
[
"https://Stackoverflow.com/questions/56222981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9773920/"
] |
For VS Code (Visual Studio Code), press Ctrl + '.' while the cursor is on the stateless widget to convert it to a stateful widget.
|
Adding a solution for Android Studio since I didn't find one here.
1. Place the marker on the Stateless widget name:
[](https://i.stack.imgur.com/5FITf.png)
2. Hit Alt+Enter
3. Select Convert to Stateful widget
4. Configure your Stateful widget based on your requirements
|
57,798,943 |
I have a decision tree classifier, and while importing export\_text from sklearn.tree.export I am getting the error below:
ImportError: cannot import name 'export\_text' from 'sklearn.tree.export' (C:\ProgramData\Anaconda3\lib\site-packages\sklearn\tree\export.py)
Is there any way to resolve this?
I have tried with the code mentioned in the [documentation](https://scikit-earn.org/stable/modules/generated/sklearn.tree.export_text.html), as below:
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree.export import export_text
iris = load_iris()
X = iris['data']
y = iris['target']
decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
decision_tree = decision_tree.fit(X, y)
r = export_text(decision_tree, feature_names=iris['feature_names'])
print(r)
|--- petal width (cm) <= 0.80
| |--- class: 0
|--- petal width (cm) > 0.80
| |--- petal width (cm) <= 1.75
| | |--- class: 1
| |--- petal width (cm) > 1.75
| | |--- class: 2
```
I am using Python 3.7.3 and the following versions of the associated libraries:
```
import sklearn
import numpy
import scipy
import joblib
print(sklearn.__version__)   # ==> 0.20.3
print(numpy.__version__)     # ==> 1.16.2
print(scipy.__version__)     # ==> 1.2.1
print(joblib.__version__)    # ==> 0.13.2
```
Error as below :
```
ImportError: cannot import name 'export_text' from 'sklearn.tree.export' (C:\ProgramData\Anaconda3\lib\site-packages\sklearn\tree\export.py)
```
|
2019/09/05
|
[
"https://Stackoverflow.com/questions/57798943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10323797/"
] |
I think this is just
```
from sklearn.tree import export_text
```
|
The issue is with the sklearn version.
Updating sklearn will solve this.
`sklearn.__version__` 0.21.3 will resolve this.
Is there any other method/code to print the tree as requested, without using sklearn?
|
57,798,943 |
I have a decision tree classifier, and while importing export\_text from sklearn.tree.export I am getting the error below:
ImportError: cannot import name 'export\_text' from 'sklearn.tree.export' (C:\ProgramData\Anaconda3\lib\site-packages\sklearn\tree\export.py)
Is there any way to resolve this?
I have tried with the code mentioned in the [documentation](https://scikit-earn.org/stable/modules/generated/sklearn.tree.export_text.html), as below:
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree.export import export_text
iris = load_iris()
X = iris['data']
y = iris['target']
decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
decision_tree = decision_tree.fit(X, y)
r = export_text(decision_tree, feature_names=iris['feature_names'])
print(r)
|--- petal width (cm) <= 0.80
| |--- class: 0
|--- petal width (cm) > 0.80
| |--- petal width (cm) <= 1.75
| | |--- class: 1
| |--- petal width (cm) > 1.75
| | |--- class: 2
```
I am using Python 3.7.3 and the following versions of the associated libraries:
```
import sklearn
import numpy
import scipy
import joblib
print(sklearn.__version__)   # ==> 0.20.3
print(numpy.__version__)     # ==> 1.16.2
print(scipy.__version__)     # ==> 1.2.1
print(joblib.__version__)    # ==> 0.13.2
```
Error as below :
```
ImportError: cannot import name 'export_text' from 'sklearn.tree.export' (C:\ProgramData\Anaconda3\lib\site-packages\sklearn\tree\export.py)
```
|
2019/09/05
|
[
"https://Stackoverflow.com/questions/57798943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10323797/"
] |
I think this is just
```
from sklearn.tree import export_text
```
|
Use `from sklearn.tree import export_text`
instead of
`from sklearn.tree.export import export_text`;
it works for me.
The issue is with the sklearn version.
See the documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.tree.export_text.html).
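A minimal sketch of the corrected import with the same iris example from the question (assuming scikit-learn >= 0.21, where `export_text` was introduced):
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text  # note: not sklearn.tree.export

iris = load_iris()
decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
decision_tree.fit(iris['data'], iris['target'])

print(export_text(decision_tree, feature_names=iris['feature_names']))
```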
|
57,798,943 |
I have a decision tree classifier, and while importing export\_text from sklearn.tree.export I am getting the error below:
ImportError: cannot import name 'export\_text' from 'sklearn.tree.export' (C:\ProgramData\Anaconda3\lib\site-packages\sklearn\tree\export.py)
Is there any way to resolve this?
I have tried with the code mentioned in the [documentation](https://scikit-earn.org/stable/modules/generated/sklearn.tree.export_text.html), as below:
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree.export import export_text
iris = load_iris()
X = iris['data']
y = iris['target']
decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
decision_tree = decision_tree.fit(X, y)
r = export_text(decision_tree, feature_names=iris['feature_names'])
print(r)
|--- petal width (cm) <= 0.80
| |--- class: 0
|--- petal width (cm) > 0.80
| |--- petal width (cm) <= 1.75
| | |--- class: 1
| |--- petal width (cm) > 1.75
| | |--- class: 2
```
I am using Python 3.7.3 and the following versions of the associated libraries:
```
import sklearn
import numpy
import scipy
import joblib
print(sklearn.__version__)   # ==> 0.20.3
print(numpy.__version__)     # ==> 1.16.2
print(scipy.__version__)     # ==> 1.2.1
print(joblib.__version__)    # ==> 0.13.2
```
Error as below :
```
ImportError: cannot import name 'export_text' from 'sklearn.tree.export' (C:\ProgramData\Anaconda3\lib\site-packages\sklearn\tree\export.py)
```
|
2019/09/05
|
[
"https://Stackoverflow.com/questions/57798943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10323797/"
] |
Use `from sklearn.tree import export_text`
instead of
`from sklearn.tree.export import export_text`;
it works for me.
The issue is with the sklearn version.
See the documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.tree.export_text.html).
|
The issue is with the sklearn version.
Updating sklearn will solve this.
`sklearn.__version__` 0.21.3 will resolve this.
Is there any other method/code to print the tree as requested, without using sklearn?
|
64,489,263 |
I'm getting an error while trying to deploy to DigitalOcean through GitHub Actions, and I'm not sure why it is complaining about Python. Is it due to the Docker images I'm using? I've tried installing the relevant Python versions on the droplet, even though that probably has nothing to do with it.
Here is the dockerfile in question:
```
FROM node:alpine3.12 as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn cache clean && yarn --update-checksums
COPY . ./
RUN yarn && yarn build
# Stage - Production
FROM nginx:1.17-alpine
COPY --from=build /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
Here is the deploy.yml file for the actions:
```
name: build
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js 13.10
        uses: actions/setup-node@v1
        with:
          node-version: '13.10'
      - name: Install yarn and run unittest
        run: |
          yarn
          yarn test
        env:
          CI: true
      - name: Publish to Github Packages Registry
        uses: elgohr/Publish-Docker-Github-Action@master
        with:
          name: fanzypantz/onepole/onepole
          registry: docker.pkg.github.com
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.GITHUB_TOKEN }}
          dockerfile: Dockerfile-prod
          tags: latest
      - name: Deploy package to digitalocean
        uses: appleboy/ssh-action@master
        env:
          GITHUB_USERNAME: ${{ secrets.USERNAME }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          port: ${{ secrets.PORT }}
          envs: GITHUB_USERNAME, GITHUB_TOKEN
          script: |
            docker stop $(docker ps -a -q)
            docker login docker.pkg.github.com -u $GITHUB_USERNAME -p $GITHUB_TOKEN
            docker pull docker.pkg.github.com/fanzypantz/onepole/onepole:latest
            docker run -dit -p 80:80 docker.pkg.github.com/fanzypantz/onepole/onepole:latest
```
Error from github actions
```
Step 1/10 : FROM node:alpine3.12 as build
alpine3.12: Pulling from library/node
188c0c94c7c5: Pulling fs layer
9e35e0178777: Pulling fs layer
90f5366a8ab8: Pulling fs layer
39095b5b3547: Pulling fs layer
39095b5b3547: Waiting
188c0c94c7c5: Verifying Checksum
188c0c94c7c5: Download complete
188c0c94c7c5: Pull complete
90f5366a8ab8: Download complete
9e35e0178777: Verifying Checksum
9e35e0178777: Download complete
39095b5b3547: Verifying Checksum
39095b5b3547: Download complete
9e35e0178777: Pull complete
90f5366a8ab8: Pull complete
39095b5b3547: Pull complete
Digest: sha256:96683bc4c05c2cb2c3bee957cdf7b967051323dafb937301d4b9fac30e529d7a
Status: Downloaded newer image for node:alpine3.12
---> 8f99f65ba94a
Step 2/10 : WORKDIR /usr/src/app
---> Running in 8fa7d7cacf45
Removing intermediate container 8fa7d7cacf45
---> 9d3dddf518f7
Step 3/10 : COPY package*.json ./
---> ee091cf8a93a
Step 4/10 : RUN yarn cache clean && yarn --update-checksums
---> Running in d03b8ed6e52a
yarn cache v1.***.5
success Cleared cache.
Done in 0.04s.
yarn install v1.***.5
info No lockfile found.
[1/4] Resolving packages...
warning @testing-library/jest-dom > css > urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
warning @testing-library/jest-dom > css > source-map-resolve > urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
warning @testing-library/jest-dom > css > source-map-resolve > resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
warning @testing-library/react > @types/testing-library__react > @types/testing-library__dom@7.5.0: This is a stub types definition. testing-library__dom provides its own type definitions, so you do not need this installed.
warning node-sass > request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
warning node-sass > node-gyp > request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
warning node-sass > request > har-validator@5.1.5: this library is no longer supported
warning react-scripts > eslint-loader@3.0.3: This loader has been deprecated. Please use eslint-webpack-plugin
warning react-scripts > jest-environment-jsdom-fourteen > jsdom > request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
warning react-scripts > webpack-dev-server > chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
warning react-scripts > webpack-dev-server > chokidar > fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
warning react-scripts > babel-jest > @jest/transform > jest-haste-map > fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
warning react-scripts > jest-environment-jsdom-fourteen > jsdom > request-promise-native@1.0.9: request-promise-native has been deprecated because it extends the now deprecated request package, see https://github.com/request/request/issues/3142
warning react-scripts > workbox-webpack-plugin > workbox-build > @hapi/joi@15.1.1: joi is leaving the @hapi organization and moving back to 'joi' (https://github.com/sideway/joi/issues/2411)
warning react-scripts > webpack > watchpack > watchpack-chokidar2 > chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
warning react-scripts > jest > jest-cli > jest-config > jest-environment-jsdom > jsdom > request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
warning react-scripts > jest > jest-cli > jest-config > jest-environment-jsdom > jsdom > request-promise-native@1.0.9: request-promise-native has been deprecated because it extends the now deprecated request package, see https://github.com/request/request/issues/3142
warning react-scripts > workbox-webpack-plugin > workbox-build > @hapi/joi > @hapi/topo@3.1.6: This version has been deprecated and is no longer supported or maintained
warning react-scripts > workbox-webpack-plugin > workbox-build > @hapi/joi > @hapi/hoek@8.5.1: This version has been deprecated and is no longer supported or maintained
warning react-scripts > workbox-webpack-plugin > workbox-build > @hapi/joi > @hapi/topo > @hapi/hoek@8.5.1: This version has been deprecated and is no longer supported or maintained
warning react-scripts > workbox-webpack-plugin > workbox-build > @hapi/joi > @hapi/bourne@1.3.2: This version has been deprecated and is no longer supported or maintained
warning react-scripts > workbox-webpack-plugin > workbox-build > @hapi/joi > @hapi/address@2.1.4: This version has been deprecated and is no longer supported or maintained
warning react-scripts > jest > jest-cli > jest-config > jest-environment-jsdom > jsdom > left-pad@1.3.0: use String.prototype.padStart()
warning react-scripts > workbox-webpack-plugin > workbox-build > strip-comments > babel-plugin-transform-object-rest-spread > babel-runtime > core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
[2/4] Fetching packages...
info fsevents@2.1.2: The platform "linux" is incompatible with this module.
info "fsevents@2.1.2" is an optional dependency and failed compatibility check. Excluding it from installation.
info fsevents@1.2.13: The platform "linux" is incompatible with this module.
info "fsevents@1.2.13" is an optional dependency and failed compatibility check. Excluding it from installation.
info fsevents@2.1.3: The platform "linux" is incompatible with this module.
info "fsevents@2.1.3" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning " > @testing-library/user-event@7.2.1" has unmet peer dependency "@testing-library/dom@>=5".
warning "react-scripts > @typescript-eslint/eslint-plugin > tsutils@3.17.1" has unmet peer dependency "typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta".
[4/4] Building fresh packages...
error /usr/src/app/node_modules/node-sass: Command failed.
Exit code: 1
Command: node scripts/build.js
Arguments:
Directory: /usr/src/app/node_modules/node-sass
Output:
Building: /usr/local/bin/node /usr/src/app/node_modules/node-gyp/bin/node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library=
gyp info it worked if it ends with ok
gyp verb cli [
gyp verb cli '/usr/local/bin/node',
gyp verb cli '/usr/src/app/node_modules/node-gyp/bin/node-gyp.js',
gyp verb cli 'rebuild',
gyp verb cli '--verbose',
gyp verb cli '--libsass_ext=',
gyp verb cli '--libsass_cflags=',
gyp verb cli '--libsass_ldflags=',
gyp verb cli '--libsass_library='
gyp verb cli ]
gyp info using node-gyp@3.8.0
gyp info using node@15.0.0 | linux | x64
gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp verb check python checking for Python executable "python2" in the PATH
gyp verb `which` failed Error: not found: python2
gyp verb `which` failed at getNotFoundError (/usr/src/app/node_modules/which/which.js:13:12)
gyp verb `which` failed at F (/usr/src/app/node_modules/which/which.js:68:19)
gyp verb `which` failed at E (/usr/src/app/node_modules/which/which.js:80:29)
gyp verb `which` failed at /usr/src/app/node_modules/which/which.js:89:16
gyp verb `which` failed at /usr/src/app/node_modules/isexe/index.js:42:5
gyp verb `which` failed at /usr/src/app/node_modules/isexe/mode.js:8:5
gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:183:21)
gyp verb `which` failed python2 Error: not found: python2
gyp verb `which` failed at getNotFoundError (/usr/src/app/node_modules/which/which.js:13:12)
gyp verb `which` failed at F (/usr/src/app/node_modules/which/which.js:68:19)
gyp verb `which` failed at E (/usr/src/app/node_modules/which/which.js:80:29)
gyp verb `which` failed at /usr/src/app/node_modules/which/which.js:89:16
gyp verb `which` failed at /usr/src/app/node_modules/isexe/index.js:42:5
gyp verb `which` failed at /usr/src/app/node_modules/isexe/mode.js:8:5
gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:183:21) {
gyp verb `which` failed code: 'ENOENT'
gyp verb `which` failed }
gyp verb check python checking for Python executable "python" in the PATH
gyp verb `which` failed Error: not found: python
gyp verb `which` failed at getNotFoundError (/usr/src/app/node_modules/which/which.js:13:12)
gyp verb `which` failed at F (/usr/src/app/node_modules/which/which.js:68:19)
gyp verb `which` failed at E (/usr/src/app/node_modules/which/which.js:80:29)
gyp verb `which` failed at /usr/src/app/node_modules/which/which.js:89:16
gyp verb `which` failed at /usr/src/app/node_modules/isexe/index.js:42:5
gyp verb `which` failed at /usr/src/app/node_modules/isexe/mode.js:8:5
gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:183:21)
gyp verb `which` failed python Error: not found: python
gyp verb `which` failed at getNotFoundError (/usr/src/app/node_modules/which/which.js:13:12)
gyp verb `which` failed at F (/usr/src/app/node_modules/which/which.js:68:19)
gyp verb `which` failed at E (/usr/src/app/node_modules/which/which.js:80:29)
gyp verb `which` failed at /usr/src/app/node_modules/which/which.js:89:16
gyp verb `which` failed at /usr/src/app/node_modules/isexe/index.js:42:5
gyp verb `which` failed at /usr/src/app/node_modules/isexe/mode.js:8:5
gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:183:21) {
gyp verb `which` failed code: 'ENOENT'
gyp verb `which` failed }
gyp ERR! configure error
gyp ERR! stack Error: Can't find Python executable "python", you can set the PYTHON env variable.
gyp ERR! stack at PythonFinder.failNoPython (/usr/src/app/node_modules/node-gyp/lib/configure.js:484:19)
gyp ERR! stack at PythonFinder.<anonymous> (/usr/src/app/node_modules/node-gyp/lib/configure.js:406:16)
gyp ERR! stack at F (/usr/src/app/node_modules/which/which.js:68:16)
gyp ERR! stack at E (/usr/src/app/node_modules/which/which.js:80:29)
gyp ERR! stack at /usr/src/app/node_modules/which/which.js:89:16
gyp ERR! stack at /usr/src/app/node_modules/isexe/index.js:42:5
gyp ERR! stack at /usr/src/app/node_modules/isexe/mode.js:8:5
gyp ERR! stack at FSReqCallback.oncomplete (node:fs:183:21)
gyp ERR! System Linux 5.4.0-1031-azure
gyp ERR! command "/usr/local/bin/node" "/usr/src/app/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
gyp ERR! cwd /usr/src/app/node_modules/node-sass
gyp ERR! node -v v15.0.0
gyp ERR! node-gyp -v v3.8.0
gyp ERR! not ok
Build failed with error code: 1
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
The command '/bin/sh -c yarn cache clean && yarn --update-checksums' returned a non-zero code: 1
```
|
2020/10/22
|
[
"https://Stackoverflow.com/questions/64489263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6422461/"
] |
This was fixed in Python 3.9, see <https://bugs.python.org/issue38438> and [commit `a0ed99bc`](https://github.com/python/cpython/commit/a0ed99bca8475cbc82e9202aa354faba2a4620f4) that fixed it.
Your code produces the usage message you expect if run on 3.9:
```py
Python 3.9.0 (default, Oct 12 2020, 02:44:01)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('files', help='file(s) to parse instead of stdin', nargs='*')
_StoreAction(option_strings=[], dest='files', nargs='*', const=None, default=None, type=None, choices=None, help='file(s) to parse instead of stdin', metavar=None)
>>> parser.print_help()
usage: [-h] [files ...]
```
|
**For all versions prior to Python 3.9:**
You may override the usage line by passing a `usage="%(prog)s [options]"` string [when instantiating ArgumentParser](https://docs.python.org/3/library/argparse.html#usage) or by updating the `usage` property of the existing instance.
You can combine this with [`ArgumentParser.format_usage()`](https://docs.python.org/3/library/argparse.html#printing-help) if you want to let it generate a usage string, and then use regex or string replacement to rewrite only the `nargs` options.
For example:
```
import argparse
import re
parser = argparse.ArgumentParser()
parser.add_argument(
'files',
help='file(s) to parse instead of stdin',
nargs='*',
)
usage = parser.format_usage()[7:] # remove "usage: " prefix
parser.usage = re.sub(r'\[(.+?) \[\1 ...\]\]', r'[\1 ...]', usage)
parser.parse_args()
```
Produces:
```
usage: test.py [-h] [files ...]
positional arguments:
files file(s) to parse instead of stdin
optional arguments:
-h, --help show this help message and exit
```
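If the regex feels fragile, the direct override mentioned at the top of this answer is even simpler — a minimal sketch (the usage text here is hard-coded to match this example, so you have to keep it in sync with the arguments yourself):
```
import argparse

# Hard-code the usage string instead of post-processing the generated one.
parser = argparse.ArgumentParser(usage='%(prog)s [-h] [files ...]')
parser.add_argument('files', nargs='*',
                    help='file(s) to parse instead of stdin')
parser.parse_args()
```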
|
11,111,434 |
I understand how to create a virtualenv, activate that virtualenv, and then install django while the virtualenv is activated. Ex:
```
virtualenv ~/narnia
source ~/narnia/bin/activate
pip install django
```
From what I understand, when you "activate" a virtual environment it just changes some path variables while you're in a shell, nothing more. So when django is installed with pip while narnia is activated, it simply installs things into locations relative to ~/narnia instead of relative to /usr (ex. ~/narnia/lib/python2.6/site-packages instead of /usr/lib/python2.6/site-packages).
And now let's say I want to create a django project with that virtualenv. I don't even have to have it activated, correct? I could just do:
```
deactivate
cd ~
narnia/bin/django-admin.py startproject myproject
```
So, if I have a web server set up with "myproject"...how is that django project tied to the narnia virtualenv at this point? Won't the system path variables simply be the normal systemwide python paths? How does "myproject" know to use the narnia environment?
I am a little confused on how that works. Any help is appreciated.
Thanks!
|
2012/06/20
|
[
"https://Stackoverflow.com/questions/11111434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1467836/"
] |
Web servers typically have a way to configure their Python use. You can set environment variables, paths, and so on. Use these tools to point to the virtualenv.
For example, in my Apache httpd.conf:
```
WSGIDaemonProcess myapp processes=2 threads=12 python-path=/home/nedbat/webapps/myapp/server:/home/nedbat/webapps/myapp/lib/python2.7
WSGIProcessGroup myapp
WSGIRestrictEmbedded On
WSGILazyInitialization On
WSGIScriptAlias / /home/nedbat/webapps/myapp/myapp.wsgi
```
and I have a myapp.wsgi file:
```
import os
import site
import sys
VE = '/home/nedbat/webapps/myapp/ve'
site.addsitedir(VE + '/lib/python2.7/site-packages')
os.environ['DJANGO_SETTINGS_MODULE'] = 'prod_settings'
os.environ['USE_PYPY'] = 'y'
os.environ['TMPDIR'] = '/home/nedbat/webapps/myapp/tmp'
from django.core.handlers.wsgi import WSGIHandler
application = WSGIHandler()
```
|
If you wish to use a virtualenv with Django in a production or even staging environment, I'd expect you to activate the environment and, depending on whether you do it manually or use e.g. virtualenvwrapper, set the appropriate environment variables. I'm not sure what purpose it would serve to install Django in a virtual environment and then not use it.
By the way, just as I look at this I see the related question [Django and VirtualEnv Development/Deployment Best Practices](https://stackoverflow.com/questions/2241055/django-and-virtualenv-development-deployment-best-practices) show up in the sidebar. This seems to be rather relevant for your set-up.
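If you are ever unsure which environment a running Django process is actually tied to, a minimal sketch (plain standard-library calls, nothing server-specific) is to print where the interpreter and the imported django package live:
```
import sys
import django

print(sys.executable)   # e.g. ~/narnia/bin/python when the virtualenv's interpreter is used
print(sys.prefix)       # the environment root that interpreter belongs to
print(django.__file__)  # which site-packages the django import was resolved from
```
If these point at the system Python instead of the virtualenv, the web server or shell was not configured to use the virtualenv's interpreter or paths.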
|
45,560,006 |
Python newbie here. I am on windows. I installed anaconda a while back for my own user account (not for all users on windows). It installed into c:\users...\local..anaconda and all is good. I might have recently updated anaconda and installed VS 2017.
Now I seem to have 4 python environments.
I tried uninstalling from add-remove programs but somehow the environments did not go away.
[](https://i.stack.imgur.com/ns4Fm.png)
My questions:
1. Where does Visual Studio get the list of environment names?
2. How do I delete these environments? I just want to keep one and delete all the others
|
2017/08/08
|
[
"https://Stackoverflow.com/questions/45560006",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1222349/"
] |
I deleted the unwanted registry keys from the following location (this is what solved the problem for me):
```
HKEY_CURRENT_USER\Software\Python\PythonCore\
```
The source also mentions the following keys, but they did not exist on my computer:
```
HKEY_LOCAL_MACHINE\Software\Python\PythonCore\
HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\
```
Source: <https://github.com/Microsoft/PTVS/issues/18>
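Before deleting anything by hand, a minimal sketch (Windows only, and assuming the per-user `Software\Python` key exists) to list what is currently registered there:
```
import winreg

# Enumerate the "company" subkeys (e.g. PythonCore, ContinuumAnalytics)
# under the per-user registration root that Visual Studio scans.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Python") as root:
    index = 0
    while True:
        try:
            print(winreg.EnumKey(root, index))
        except OSError:  # no more subkeys
            break
        index += 1
```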
|
This is [the link](https://learn.microsoft.com/en-us/visualstudio/python/python-environments) to read.
I'll quote what I think is relevant for your case; if it's not, just read it all and find what you need.
>
> Any global environment can be added as a project environment by
> right-clicking Python Environments and selecting Add/Remove Python
> Environments.... From the displayed list you can select or deselect
> those environments that are available in your project.
>
>
>
And...
>
> Once a virtual environment is added to your project, it appears in the
> Python Environments window, you can activate it like any other
> environment, and you can manage its packages. Right-clicking it and
> selecting Remove either removes the reference to the environment, or
> deletes the environment and all its files on disk (but not the base
> interpreter).
>
>
>
|
45,560,006 |
Python newbie here. I am on windows. I installed anaconda a while back for my own user account (not for all users on windows). It installed into c:\users...\local..anaconda and all is good. I might have recently updated anaconda and installed VS 2017.
Now I seem to have 4 python environments.
I tried uninstalling from add-remove programs but somehow the environments did not go away.
[](https://i.stack.imgur.com/ns4Fm.png)
My questions:
1. Where does Visual Studio get the list of environment names?
2. How do I delete these environments? I just want to keep one and delete all the others
|
2017/08/08
|
[
"https://Stackoverflow.com/questions/45560006",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1222349/"
] |
**I found my answer to this question here: <https://learn.microsoft.com/en-us/visualstudio/python/managing-python-environments-in-visual-studio?view=vs-2019#fix-or-delete-invalid-environments>**
* First click this button
[](https://i.stack.imgur.com/UdEjV.png)
* You will then see this kind of screen
[](https://i.stack.imgur.com/hgHcK.png)
* Then click the dropdown menu *where it shows - packages*, then click the Overview tab.
* Click delete/remove the environment.
* Your environment should now be removed.
|
This is [the link](https://learn.microsoft.com/en-us/visualstudio/python/python-environments) to read.
I'll quote what I think is relevant for your case; if it's not, just read it all and find what you need.
>
> Any global environment can be added as a project environment by
> right-clicking Python Environments and selecting Add/Remove Python
> Environments.... From the displayed list you can select or deselect
> those environments that are available in your project.
>
>
>
And...
>
> Once a virtual environment is added to your project, it appears in the
> Python Environments window, you can activate it like any other
> environment, and you can manage its packages. Right-clicking it and
> selecting Remove either removes the reference to the environment, or
> deletes the environment and all its files on disk (but not the base
> interpreter).
>
>
>
|
45,560,006 |
Python newbie here. I am on windows. I installed anaconda a while back for my own user account (not for all users on windows). It installed into c:\users...\local..anaconda and all is good. I might have recently updated anaconda and installed VS 2017.
Now I seem to have 4 python environments.
I tried uninstalling from add-remove programs but somehow the environments did not go away.
[](https://i.stack.imgur.com/ns4Fm.png)
My questions:
1. Where does Visual Studio get the list of environment names?
2. How do I delete these environments? I just want to keep one and delete all the others
|
2017/08/08
|
[
"https://Stackoverflow.com/questions/45560006",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1222349/"
] |
I deleted the unwanted registry keys from the following location (this is what solved the problem for me):
```
HKEY_CURRENT_USER\Software\Python\PythonCore\
```
The source also mentions the following keys, but they did not exist on my computer:
```
HKEY_LOCAL_MACHINE\Software\Python\PythonCore\
HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\
```
Source: <https://github.com/Microsoft/PTVS/issues/18>
|
**to Question 1:**
Visual Studio searches the registry to detect globally installed Python environments by following [PEP 514](https://www.python.org/dev/peps/pep-0514/ "Python registration in the Windows registry"), as described in [Install Python interpreters](https://learn.microsoft.com/en-us/visualstudio/python/installing-python-interpreters?view=vs-2017).
**to Question 2:**
Open the *Visual Studio Installer* again and change the current installation by selecting only the Python interpreters you need (you will find them in the *Components* tab).
|
45,560,006 |
Python newbie here. I am on windows. I installed anaconda a while back for my own user account (not for all users on windows). It installed into c:\users...\local..anaconda and all is good. I might have recently updated anaconda and installed VS 2017.
Now I seem to have 4 python environments.
I tried uninstalling from add-remove programs but somehow the environments did not go away.
[](https://i.stack.imgur.com/ns4Fm.png)
My questions:
1. Where does Visual Studio get the list of environment names?
2. How do I delete these environments? I just want to keep one and delete all the others
|
2017/08/08
|
[
"https://Stackoverflow.com/questions/45560006",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1222349/"
] |
**I found my answer to this question here: <https://learn.microsoft.com/en-us/visualstudio/python/managing-python-environments-in-visual-studio?view=vs-2019#fix-or-delete-invalid-environments>**
* First click this button
[](https://i.stack.imgur.com/UdEjV.png)
* You will then see this kind of screen
[](https://i.stack.imgur.com/hgHcK.png)
* Then click the dropdown menu *where it shows - packages*, then click the Overview tab.
* Click delete/remove the environment.
* Your environment should now be removed.
|
**to Question 1:**
Visual Studio searches the registry to detect globally installed Python environments by following [PEP 514](https://www.python.org/dev/peps/pep-0514/ "Python registration in the Windows registry"), as described in [Install Python interpreters](https://learn.microsoft.com/en-us/visualstudio/python/installing-python-interpreters?view=vs-2017).
**to Question 2:**
Open the *Visual Studio Installer* again and change the current installation by selecting only the Python interpreters you need (you will find them in the *Components* tab).
|
72,167,725 |
This issue has started appearing just now; I don't know why. I was not facing it 2/3 days ago.
The error comes up as soon as my `import requests` line runs. I have tried every single solution on the internet but nothing seems to work.
```py
"C:\Program Files\Python310\python.exe"
"E:/IT Vedant/Rough Work/1mg.py"
Traceback (most recent call last):
File "E:\IT Vedant\Rough Work\1mg.py", line 2, in <module>
import requests
File "C:\Program Files\Python310\lib\site-packages\requests\__init__.py", line 58, in <module>
from . import utils
File "C:\Program Files\Python310\lib\site-packages\requests\utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "C:\Program Files\Python310\lib\site-packages\requests\compat.py", line 7, in <module>
from .packages import chardet
File "C:\Program Files\Python310\lib\site-packages\requests\packages\__init__.py", line 3, in <module>
from . import urllib3
File "C:\Program Files\Python310\lib\site-packages\requests\packages\urllib3\__init__.py", line 10, in <module>
from .connectionpool import (
File "C:\Program Files\Python310\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 38, in <module>
from .response import HTTPResponse
File "C:\Program Files\Python310\lib\site-packages\requests\packages\urllib3\response.py", line 5, in <module>
from ._collections import HTTPHeaderDict
File "C:\Program Files\Python310\lib\site-packages\requests\packages\urllib3\_collections.py", line 1, in <module>
from collections import Mapping, MutableMapping
ImportError: cannot import name 'Mapping' from 'collections' (C:\Program Files\Python310\lib\collections\__init__.py)
Process finished with exit code 1
```
|
2022/05/09
|
[
"https://Stackoverflow.com/questions/72167725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17424124/"
] |
You are using Python 3.10.
Try changing from
`from collections import Mapping`
to `from collections.abc import Mapping`
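The same change can be made backwards-compatible with a small shim — a minimal sketch of the pattern, not code from the requests/urllib3 projects themselves:
```
try:
    from collections.abc import Mapping, MutableMapping  # Python 3.3+
except ImportError:
    from collections import Mapping, MutableMapping      # very old Pythons
```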
|
In your Python 3.10 installation, go to this path:
```
C:\Program Files\Python310\lib\collections\
```
and open the file `__init__.py`.
In this file change:
```
from collections import Mapping
to
from collections.abc import Mapping
```
|
72,167,725 |
This issue has started appearing just now; I don't know why. I was not facing it 2/3 days ago.
The error comes up as soon as my `import requests` line runs. I have tried every single solution on the internet but nothing seems to work.
```py
"C:\Program Files\Python310\python.exe"
"E:/IT Vedant/Rough Work/1mg.py"
Traceback (most recent call last):
File "E:\IT Vedant\Rough Work\1mg.py", line 2, in <module>
import requests
File "C:\Program Files\Python310\lib\site-packages\requests\__init__.py", line 58, in <module>
from . import utils
File "C:\Program Files\Python310\lib\site-packages\requests\utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "C:\Program Files\Python310\lib\site-packages\requests\compat.py", line 7, in <module>
from .packages import chardet
File "C:\Program Files\Python310\lib\site-packages\requests\packages\__init__.py", line 3, in <module>
from . import urllib3
File "C:\Program Files\Python310\lib\site-packages\requests\packages\urllib3\__init__.py", line 10, in <module>
from .connectionpool import (
File "C:\Program Files\Python310\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 38, in <module>
from .response import HTTPResponse
File "C:\Program Files\Python310\lib\site-packages\requests\packages\urllib3\response.py", line 5, in <module>
from ._collections import HTTPHeaderDict
File "C:\Program Files\Python310\lib\site-packages\requests\packages\urllib3\_collections.py", line 1, in <module>
from collections import Mapping, MutableMapping
ImportError: cannot import name 'Mapping' from 'collections' (C:\Program Files\Python310\lib\collections\__init__.py)
Process finished with exit code 1
```
|
2022/05/09
|
[
"https://Stackoverflow.com/questions/72167725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17424124/"
] |
You are using Python 3.10.
Try changing from
`from collections import Mapping`
to `from collections.abc import Mapping`
|
My `C:\Program Files\Python310\lib\collections\__init__.py` *from* section didn't seem to have the required entries.
To resolve this, I added the following to that file:
```
from collections.abc import Mapping
from collections.abc import MutableMapping
from collections.abc import Sequence
```
Additionally, my project .py file still had the legacy code line `import collections`, which I replaced with the new code line `from collections.abc import Mapping`.
|
28,250,089 |
I am trying to pass an entire, full length/structure URL within my app as an argument inside the app's URL. For example, I want to be able to do something like:
`myapp.com/https://www.youtube.com/watch?v=dQw4w9WgXcQ` so that I can take the URL entered after my app's home page and store it. However, I think the app is getting confused when the URL pasted has fragments and query arguments (ie: contains # and/or ?) My urls.py looks like this:
```
url(r'^(?P<url_param>[a-zA-Z0-9_.-/:?=#]*)/$', views.anywebsiteentered, name='anywebsiteentered')
```
When I try to write a view that looks like below to take the inputted URL and save it inside a model object, I always get the URL truncated before the query and fragment characters. What can I do so that my application picks up the entire URL string?
```
def anywebsiteentered(request, url_param = 'url_param'):
UrlBlob.objects.create(fullurl=url_param)
```
For example, when my app is at `myapp.com/https://www.youtube.com/watch?v=dQw4w9WgXcQ`, the object created above only stores `https://www.youtube.com/watch` and not the query part of the URL. I suspect it is something I am doing with the passing of the URL, because when I create this model object manually inside the python-django shell there are no problems at all.
Thanks for any help and hints. I really appreciate it.
|
2015/01/31
|
[
"https://Stackoverflow.com/questions/28250089",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3977047/"
] |
On post execute, get the JSONObject first, then the array. If you still get the same message, then your server is not returning a correct JSON object.
```
JSONObject jobj = new JSONObject(result);
JSONArray Jarray = jobj.getJSONArray("output");
```
|
If your result is an ArrayList then you can do this using the following code:
```
ArrayList<String> list = new ArrayList<String>();
list.add("blah");
list.add("bleh");
JSONArray jsArray = new JSONArray(list);
```
|
28,250,089 |
I am trying to pass an entire, full length/structure URL within my app as an argument inside the app's URL. For example, I want to be able to do something like:
`myapp.com/https://www.youtube.com/watch?v=dQw4w9WgXcQ` so that I can take the URL entered after my app's home page and store it. However, I think the app is getting confused when the URL pasted has fragments and query arguments (ie: contains # and/or ?) My urls.py looks like this:
```
url(r'^(?P<url_param>[a-zA-Z0-9_.-/:?=#]*)/$', views.anywebsiteentered, name='anywebsiteentered')
```
When I try to write a view that looks like below to take the inputted URL and save it inside a model object, I always get the URL truncated before the query and fragment characters, what can I do so that my application picks up the entire URL string?
```
def anywebsiteentered(request, url_param = 'url_param'):
UrlBlob.objects.create(fullurl=url_param)
```
For example, the above object created if my app is at `myapp.com/https://www.youtube.com/watch?v=dQw4w9WgXcQ` only returns `https://www.youtube.com/watch` and not the query part of the URL. I suspect it is something I am doing with the passing of the URL because when I create this model object manually inside the python-django shell there is no problems at all.
Thanks for any help and hints. I really appreciate it.
|
2015/01/31
|
[
"https://Stackoverflow.com/questions/28250089",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3977047/"
] |
```
<?php
try {
$conn = new PDO('mysql:host=localhost;dbname=beanbag', **********, ********);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
} catch(PDOException $e) {
echo 'ERROR: ' . $e->getMessage();
}
$sql=mysql_query("select * from Recipes");
while($row=mysql_fetch_assoc($sql))
$output[]=$row;
print(json_encode($output));
mysql_close();
?>
```
This doesn't do anything. You're mixing up mysql\_\* and PDO, which is **[not](https://stackoverflow.com/a/12551932/1544337) [possible](https://stackoverflow.com/a/2316697/1544337)**. If you make a connection with PDO, you can't use mysql\_\* queries to access the database.
Because of this, you get errors and your page is not valid JSON, which causes the error you see.
Instead, use only PDO (mysql\_\* is deprecated due to security issues):
```
<?php
try {
$conn = new PDO('mysql:host=localhost;dbname=beanbag', **********, ********);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
} catch(PDOException $e) {
echo 'ERROR: ' . $e->getMessage();
}
$sql = $conn->query("select * from Recipes");
while (($row = $sql->fetch(PDO::FETCH_ASSOC)) !== false)
$output[]=$row;
print(json_encode($output));
?>
```
|
If your result is an ArrayList then you can do this using the following code:
```
ArrayList<String> list = new ArrayList<String>();
list.add("blah");
list.add("bleh");
JSONArray jsArray = new JSONArray(list);
```
|
35,846,278 |
I'm new to python!
```
players = ["Matt","Joe", "Barry","Billy"]
numbers = [1,2,3,4]
def assignment(players, numbers):
for i in players:
for j in numbers:
print i,j
assignment(players,numbers)
```
My attempt is above.
I want to try to make the answer =
```
Matt 1
Joe 2
Barry 3
Billy 4
```
but at the minute each name is being assigned each number! Any advice on how to fix this?
|
2016/03/07
|
[
"https://Stackoverflow.com/questions/35846278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5904219/"
] |
I think it's because Objective-C objects cannot have arrays of things that are not NSObjects.
Edit for future searchers:
Erasing the type using AnyObject will make the warning disappear; however, you will then need to be careful with casting.
|
The reason you get this error (even in a class that inherits from `NSObject`) is because some Swift types cannot be represented in Objective-C, such as an [array of enums](https://stackoverflow.com/q/30453961/2415822).
`[[NSLayoutAttribute]]` will not work but `[[Int]]` will, because while `Array<SomeEnumType>` cannot be bridged, an array of `Int`s is implicitly bridged to an `NSArray` of `NSNumber` objects through Foundation.
Yoav Schwartz's answer is wrong because Objective-C objects **can** have arrays of things that are not `NSObject`s. Look at C arrays: `NSUInteger someNums[5] = {1, 2, 3, 4, 5};` is valid in Objective-C. `NSArray`s, however, cannot hold value types and must hold objects.
|
17,081,204 |
Considering that [the `bytes` type is not necessarily a string](https://stackoverflow.com/questions/17063502/python-strings-and-str-method-encoding-and-decoding), how can one see the actual bytes (ones and zeros, or octal/hexadecimal representation of such) of a `bytes` object? Trying to `print()` or `pprint()` such an object results in printing the string representation of the object (assuming some encoding, probably ASCII or UTF-8) preceded by the `b` character to indicate that the datatype is in fact bytes:
```
$ python3
Python 3.2.3 (default, Oct 19 2012, 19:53:16)
>>> from pprint import pprint
>>> s = 'hi'
>>> print(str(type(s)))
<class 'str'>
>>> se = s.encode('utf-8')
>>> print(str(type(se)))
<class 'bytes'>
>>> print(se)
b'hi'
>>> pprint(se)
b'hi'
>>>
```
Note that I am specifically referring to Python 3. Thanks!
|
2013/06/13
|
[
"https://Stackoverflow.com/questions/17081204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343302/"
] |
Use `bin`, `oct` or `hex` and access the byte using bracket notation:
```
>>> print(hex(se[0]))
0x68
>>> print(hex(se[1]))
0x69
```
Obviously a loop will be better:
```
for a_byte in se:
print (bin(a_byte))
```
|
Use [Python string formatting](http://docs.python.org/2/library/stdtypes.html#string-formatting-operations) to show the hexadecimal values of your bytes:
```
>>> se = b'hi'
>>> ["{0:0>2X}".format(b) for b in se]
['68', '69']
```
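If you want the whole byte string at once instead of going byte by byte, a minimal sketch using only the standard library (works on older Python 3 versions too):
```
>>> import binascii
>>> se = 'hi'.encode('utf-8')
>>> binascii.hexlify(se)
b'6869'
>>> ' '.join(format(b, '08b') for b in se)
'01101000 01101001'
```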
|
17,081,204 |
Considering that [the `bytes` type is not necessarily a string](https://stackoverflow.com/questions/17063502/python-strings-and-str-method-encoding-and-decoding), how can one see the actual bytes (ones and zeros, or octal/hexadecimal representation of such) of a `bytes` object? Trying to `print()` or `pprint()` such an object results in printing the string representation of the object (assuming some encoding, probably ASCII or UTF-8) preceded by the `b` character to indicate that the datatype is in fact bytes:
```
$ python3
Python 3.2.3 (default, Oct 19 2012, 19:53:16)
>>> from pprint import pprint
>>> s = 'hi'
>>> print(str(type(s)))
<class 'str'>
>>> se = s.encode('utf-8')
>>> print(str(type(se)))
<class 'bytes'>
>>> print(se)
b'hi'
>>> pprint(se)
b'hi'
>>>
```
Note that I am specifically referring to Python 3. Thanks!
|
2013/06/13
|
[
"https://Stackoverflow.com/questions/17081204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343302/"
] |
Use `bin`, `oct` or `hex` and access the byte using bracket notation:
```
>>> print(hex(se[0]))
0x68
>>> print(hex(se[1]))
0x69
```
Obviously a cycle will be better:
```
for a_byte in se:
print (bin(a_byte))
```
|
```
>>> s = b'hi'
>>> s
b'hi'
>>> print(s)
b'hi'
>>> for i in s:
print(i)
104
105
>>> y = 'hi'
>>> for i in y:
print(i)
h
i
>>>
```
|
17,081,204 |
Considering that [the `bytes` type is not necessarily a string](https://stackoverflow.com/questions/17063502/python-strings-and-str-method-encoding-and-decoding), how can one see the actual bytes (ones and zeros, or octal/hexadecimal representation of such) of a `bytes` object? Trying to `print()` or `pprint()` such an object results in printing the string representation of the object (assuming some encoding, probably ASCII or UTF-8) preceded by the `b` character to indicate that the datatype is in fact bytes:
```
$ python3
Python 3.2.3 (default, Oct 19 2012, 19:53:16)
>>> from pprint import pprint
>>> s = 'hi'
>>> print(str(type(s)))
<class 'str'>
>>> se = s.encode('utf-8')
>>> print(str(type(se)))
<class 'bytes'>
>>> print(se)
b'hi'
>>> pprint(se)
b'hi'
>>>
```
Note that I am specifically referring to Python 3. Thanks!
|
2013/06/13
|
[
"https://Stackoverflow.com/questions/17081204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343302/"
] |
Use [Python string formatting](http://docs.python.org/2/library/stdtypes.html#string-formatting-operations) to show the hexadecimal values of your bytes:
```
>>> se = b'hi'
>>> ["{0:0>2X}".format(b) for b in se]
['68', '69']
```
|
```
>>> s = b'hi'
>>> s
b'hi'
>>> print(s)
b'hi'
>>> for i in s:
print(i)
104
105
>>> y = 'hi'
>>> for i in y:
print(i)
h
i
>>>
```
|
55,408,502 |
I'm trying to use Python Selenium to download an Excel file by clicking "Export to Excel" in this page:
<https://www.hkex.com.hk/Market-Data/Futures-and-Options-Prices/Equity-Index/Hang-Seng-Index-Futures-and-Options?sc_lang=en#&product=HSI>
In Chrome Inspect mode, I think the name of the element is "ete"
```
<div class="ete title_right" id="ete">Export to Excel</div>
```
Here is my code:
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time
options = Options()
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--no-sandbox")
driver = webdriver.Chrome('/usr/local/bin/chromedriver', chrome_options=options)
driver.get("https://www.hkex.com.hk/Market-Data/Futures-and-Options-Prices/Equity-Index/Hang-Seng-Index-Futures-and-Options?sc_lang=en#&product=HSI")
driver.find_element_by_id('ete').click()
html = driver.page_source
print(html)
driver.close()
```
However, an "element not interactable" exception is returned when running my code:
```
selenium.common.exceptions.ElementNotVisibleException: Message: element not interactable
```
[Updates]
I used Debanjan's method but TimeoutException is returned:
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--no-sandbox")
#options.add_argument("--disable-dev-shm-usage")
driver = webdriver.Chrome('/usr/local/bin/chromedriver', chrome_options=options)
driver.get('https://www.hkex.com.hk/Market-Data/Futures-and-Options-Prices/Equity-Index/Hang-Seng-Index-Futures-and-Options?sc_lang=en#&product=HSI')
WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='textrow' and text()='FUTURES']//following::div[@class='ete title_right' and text()='Export to Excel']"))).click()
html = driver.page_source
print(html)
driver.close()
```
```
[root@mybox python-learning]# python3.6 web4.py
Traceback (most recent call last):
File "web4.py", line 16, in <module>
WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='textrow' and text()='FUTURES']//following::div[@class='ete title_right' and text()='Export to Excel']"))).click()
File "/usr/lib/python3.6/site-packages/selenium/webdriver/support/wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
```
|
2019/03/28
|
[
"https://Stackoverflow.com/questions/55408502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9477275/"
] |
You should normalize (divide) by the sum of the weights, not the sum of the input pixel values (or matrix values if you prefer).
For example the first case would be:
```
newPic[h][w] = (2 * pic[h][w] + 2 * pic[h][w-1] + 2 * pic[h+1][w] + 1 * pic[h+1][w-1])
/ (2 + 2 + 2 + 1);
```
|
You're dividing by a number that's too large. For the general case, you should be dividing by 14.
```
else
newPic[h][w] = (1 * pic[h-1][w-1] + 2 * pic[h-1][w] + 1 * pic[h-1][w+1] + 2 * pic[h][w-1]
+ 2 * pic[h][w] + 2 * pic[h][w+1] + 1 * pic[h+1][w-1] + 2 * pic[h+1][w] + 1
* pic[h+1][w+1]) / 14;
```
(This is because 14 = 1+2+1+2+2+2+1+2+1)
For the other cases, divide by whatever the sum of the coefficients is for that case.
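Putting the two answers together, a minimal Python sketch of the idea (not the original poster's code, which looks like C/Java): accumulate weight × pixel over the neighbours that exist, and divide by the sum of the weights that were actually used, so edges and corners need no special cases:
```
WEIGHTS = [[1, 2, 1],
           [2, 2, 2],
           [1, 2, 1]]  # sums to 14 for interior pixels

def blur(pic):
    height, width = len(pic), len(pic[0])
    out = [[0] * width for _ in range(height)]
    for h in range(height):
        for w in range(width):
            total = used = 0
            for dh in (-1, 0, 1):
                for dw in (-1, 0, 1):
                    nh, nw = h + dh, w + dw
                    if 0 <= nh < height and 0 <= nw < width:  # neighbour exists
                        weight = WEIGHTS[dh + 1][dw + 1]
                        total += weight * pic[nh][nw]
                        used += weight
            out[h][w] = total // used  # divide by the weights used, not a fixed constant
    return out
```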
|
9,625,028 |
This is really starting to get on my nerves, which is weird because it's such a small issue.
Let's say I start (g)vim 7.3 (windows OR linux) with no plugins/no vimrc. I open a longer file, scroll down halfway. I position my cursor at the bottom of the main window. I open a vertical split window (CTRL+W v).
What happens next is killing me. The cursor and the line it's on (previously at the bottom of the original window) *jump up* to the top of the window. The new window on the right has the original position where the line the cursor is on was in the original window.

My question: How do I keep vim from changing the cursor position of the window I split? Honestly, I wouldn't care at all if the position changed in the *new* window, but the fact that splitting changes the original window is making me want to tear my hair out.
It used to be completely ignorable. Then I got kind of annoyed. Then I slowly started getting really annoyed. Now it's starting to make me furious. Any help/tricks/hacks are greatly appreciated!!
EDIT: My vim build options for both Windows and Linux (happening on both):
Windows:
```
+arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent
+clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
+conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
-dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path
+find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn
+insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent
+listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
+multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl
+persistent_undo -postscript +printer -profile -python -python3 +quickfix
+reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +startuptime
+statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white
-tcl -tgetent -termresponse +textobjects +title -toolbar +user_commands
+vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore
+wildmenu +windows +writebackup -xfontset -xim -xterm_save -xpm_w32
system vimrc file: "$VIM\vimrc"
user vimrc file: "$HOME\_vimrc"
2nd user vimrc file: "$VIM\_vimrc"
user exrc file: "$HOME\_exrc"
2nd user exrc file: "$VIM\_exrc"
Compilation: cl -c /W3 /nologo -I. -Iproto -DHAVE_PATHDEF -DWIN32 -DFEAT_CSCOPE -DWINVER=0x0400 -D_WIN32_WINNT=0x0400 /Fo.\ObjC/ /Ox /GL -DNDEBUG /Zl /MT -DDYNAMIC_ICONV -DDYNAMIC_GETTEXT -DFEAT_BIG /Fd.\ObjC/ /Zi
Linking: link /RELEASE /nologo /subsystem:console /LTCG:STATUS oldnames.lib kernel32.lib advapi32.lib shell32.lib gdi32.lib comdlg32.lib ole32.lib uuid.lib /machine:i386 /nodefaultlib libcmt.lib user32.lib /PDB:vim.pdb -debug
```
Linux:
```
-arabic +autocmd +balloon_eval +browse +builtin_terms +byte_offset +cindent
+clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
-conceal +cryptv -cscope +cursorbind +cursorshape +dialog_con_gui +diff
+digraphs +dnd -ebcdic -emacs_tags +eval +ex_extra +extra_search -farsi
+file_in_path +find_in_path +float +folding -footer +fork() +gettext
-hangul_input +iconv +insert_expand +jumplist -keymap -langmap +libcall
+linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname
+mouse +mouseshape -mouse_dec +mouse_gpm -mouse_jsbterm -mouse_netterm
-mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg
-osfiletype +path_extra -perl +persistent_undo +postscript +printer -profile
+python -python3 +quickfix +reltime -rightleft -ruby +scrollbind +signs
+smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary
+tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title
+toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
+vreplace +wildignore +wildmenu +windows +writebackup +X11 -xfontset +xim
+xsmp_interact +xterm_clipboard -xterm_save
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
user exrc file: "$HOME/.exrc"
system gvimrc file: "$VIM/gvimrc"
user gvimrc file: "$HOME/.gvimrc"
system menu file: "$VIMRUNTIME/menu.vim"
fall-back for $VIM: "/usr/local/share/vim"
Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -I/usr/include/gtk-2.0 -I/usr/lib64/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libpng14 -g -O2 -D_FORTIFY_SOURCE=1
Linking: gcc -L/usr/local/lib -o vim -lgtk-x11-2.0 -lgdk-x11-2.0 -latk-1.0 -lgio-2.0 -lpangoft2-1.0 -lgdk_pixbuf-2.0 -lpangocairo-1.0 -lcairo -lpango-1.0 -lfreetype -lfontconfig -lgobject-2.0 -lgmodule-2.0 -lglib-2.0 -lXt -lncurses -lacl -lgpm -L/usr/lib64/python2.6/config -lpython2.6 -lpthread -lutil -lm -Xlinker -export-dynamic
```
|
2012/03/08
|
[
"https://Stackoverflow.com/questions/9625028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236331/"
] |
I can reproduce the behaviour, which with your description would (at first sight) *appear* to be a bug, indeed. But I noticed the following things:
* `:vert split` has the same behaviour as `^W``v`
* `:debug ver split` (... `cont`) confirmed there wasn't an obvious script/autocommand interfering
* It only happens on the first time split. In other words, this is a workaround: `^W``v``^W``c``^W``v`
* the cursor does in fact *not* move in the *original* window. The 'new window' appears on the left (which you name the original window). This cannot be shown with `:echo winnr()` or similar, but you can make it more apparent by doing e.g. `:vert new` instead of `:vert split`: the new, empty window appears on the left side.
Instead of this, you might trick the split to have 'second split' behaviour by doing something 'useless' before
```
:tabnew|bwipeout
```
Now, `^W``v` has the desired behaviour first time around.
### TL;DR
1. it was not a bug (your expectations on what was the *new* window were off)
2. you can work around it by creating another window before.
---
### *Other background*
1. The `splitright`, `splitbelow` options
>
> can be used to control (to an extent) where newly created (split) windows appear
>
>
>
2. The `winrestview()` function
>
> Can be used to explicitely restore the exact view of a window. Use it like so:
>
>
>
> ```
> :let savex=winsaveview()
>
> ```
>
> `savex` now contains something like *`{'lnum': 1, 'leftcol': 0, 'col': 0, 'topfill': 0, 'topline': 1, 'coladd': 0, 'skipcol': 0, 'curswant': 0}`* describing the state of the current view.
>
>
> *(do stuff, like `^W``v`)*
>
>
>
> ```
> :call winrestview(savex)
>
> ```
>
> This is obviously a lot more flexible but you might not need it.
>
>
>
|
I tested it on a Windows machine with `gvim` and could reproduce your problem; here is a workaround that worked for me, adding the following commands to `vimrc` (I opened it with `:e $MYVIMRC`):
```
set splitright
function MySplit()
vsplit
execute "normal \<C-w>\<C-w>"
endfunction
nmap <C-w>v :call MySplit()<CR>
```
|
52,751,675 |
**The problem**
I am creating a project where you can make topics that can be private or public to unauthenticated users. In every topic, you can then make several entries applying to that topic. Now I'm trying to make a checkbox in my new\_topic.html that evaluates to True when checked and False when not.
But I can't see the topic without logging in.
**What I want**
I would like the view to show topics whose public property is True to users who have not logged in, but I do not know how to implement this.
**What I've tried**
I played with the query set topic.public == True
but I don't know how to use that
**The Code**
My /learning\_logs/models.py looks like this:
```py
from django.db import models
from django.contrib.auth.models import User
class Topic(models.Model):
"""topic은 사용자가 공부하고 있는 주제이다."""
text = models.CharField(max_length=200)
date_added = models.DateTimeField(auto_now_add=True)
owner = models.ForeignKey(User, on_delete = models.CASCADE)
public = models.BooleanField(default=False)
def __str__(self):
"""모델에 관한 정보를 문자열 형태로 변환한다."""
return self.text
class Entry(models.Model):
"""주제에 관해 공부한 내용"""
topic = models.ForeignKey(Topic, on_delete = models.CASCADE)
text = models.TextField()
date_added = models.DateTimeField(auto_now_add=True)
class Meta:
verbose_name_plural = 'entries'
def __str__(self):
"""모델에 관한 정보를 문자열 형태로 반환한다."""
if self.text[:] > self.text[:50]:
return self.text[:50] + "..."
else:
return self.text[:]
```
My /learning\_logs/views.py looks like this:
```py
from django.shortcuts import render, get_object_or_404
from django.http import HttpResponseRedirect, Http404
from django.urls import reverse
from .models import Topic, Entry
from .forms import TopicForm, EntryForm
from django.contrib.auth.decorators import login_required
from .models import Topic, Entry
from .forms import TopicForm, EntryForm
def index(request):
"""학습 로그 홈페이지"""
return render(request, 'learning_logs/index.html')
@login_required
def topics(request):
"""주제를 표시한다."""
topics = Topic.objects.filter(owner=request.user).order_by('date_added')
context = {'topics': topics}
return render(request, 'learning_logs/topics.html', context)
@login_required
def topic(request, topic_id):
"""주제 하나와 연결된 모든 항목을 표시한다."""
topic = get_object_or_404(Topic, id=topic_id)
# Make sure the topic belongs to the current user.
check_user = check_topic_owner(request, topic)
entries = topic.entry_set.order_by('-date_added')
context = {'topic': topic, 'entries': entries}
return render(request, 'learning_logs/topic.html', context)
@login_required
def new_topic(request):
"""새 주제 추가"""
if request.method != 'POST':
# No data submitted; create a blank form.
form = TopicForm()
else:
# POST data submitted; process data.
form = TopicForm(request.POST)
if form.is_valid():
new_topic = form.save(commit=False)
new_topic.owner = request.user
new_topic.save()
return HttpResponseRedirect(reverse('learning_logs:topics'))
context = {'form': form}
return render(request, 'learning_logs/new_topic.html', context)
@login_required
def new_entry(request, topic_id):
"""특정 주제에 관한 새 항목을 추가"""
topic = get_object_or_404(Topic, id=topic_id)
check_user = check_topic_owner(request, topic)
if request.method != 'POST':
# No data submitted; create a blank form.
form = EntryForm()
else:
# POST data submitted; process data.
form = EntryForm(data=request.POST)
if form.is_valid():
new_entry = form.save(commit=False)
new_entry.topic = topic
new_entry.save()
return HttpResponseRedirect(reverse('learning_logs:topic',
args=[topic_id]))
context = {'topic':topic, 'form':form}
return render(request, 'learning_logs/new_entry.html', context)
@login_required
def edit_entry(request, entry_id):
"""기존 항목을 편집한다."""
entry = get_object_or_404(Entry, id=entry_id)
topic = entry.topic
check_user = check_topic_owner(request, topic)
if request.method != 'POST':
# Initial request; pre-fill form with the current entry.
form = EntryForm(instance=entry)
else:
# POST data submitted; process data.
form = EntryForm(instance=entry, data=request.POST)
return HttpResponseRedirect(reverse('learning_logs:topic',
args=[topic.id]))
context = {'entry': entry, 'topic':topic, 'form': form}
return render(request, 'learning_logs/edit_entry.html', context)
def check_topic_owner(request, topic):
"""현재 유저가 올바른 유저인지 체크한다"""
if topic.owner != request.user:
raise Http404
```
My /learning\_logs/forms.py looks like this:
```py
from django import forms
from .models import Topic, Entry
class TopicForm(forms. ModelForm):
class Meta:
model = Topic
fields = ['text', 'public']
lavels = {'text' : '', 'public' : 'lavel for public'}
class TopicForm(forms.ModelForm):
class Meta:
model = Topic
fields = ['text']
labels = {'text': ''}
class EntryForm(forms.ModelForm):
class Meta:
model = Entry
fields = ['text']
labels = {'text' : ''}
widgets = {'text' : forms.Textarea(attrs={'cols':80})}
labels = {'text':''}
widgets = {'text': forms.Textarea(attrs={'cols':80})}
```
My /learning\_logs/templates/learning\_logs/new\_topic.html
```html
{% extends "learning_logs/base.html" %}
{% load bootstrap3 %}
{% block header %}
<h2>Add a new topic:</h2>
{% endblock %}
{% block content %}
<form action="{% url 'learning_logs:new_topic' %}" method='post'
class="form">
{% csrf_token %}
{% bootstrap_form form %}
<div class="form-check">
<input type="checkbox" class="form-check-input" value=True id="public" />
<label class="form-check-label">
Make it public?
</label>
</div>
{% buttons %}
<button name="submit" class="btn btn-primary">add topic</button>
{% endbuttons %}
</form>
{% endblock %}
```
**Another error:**
If I delete @login\_required I get the following error:
```
Traceback:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
126. response = self.process_exception_by_middleware(e, request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
124. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in topics
31. topics = Topic.objects.filter(owner=request.user).order_by('date_added')
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/manager.py" in manager_method
82. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in filter
844. return self._filter_or_exclude(False, *args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in _filter_or_exclude
862. clone.query.add_q(Q(*args, **kwargs))
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in add_q
1263. clause, _ = self._add_q(q_object, self.used_aliases)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1287. split_subq=split_subq,
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in build_filter
1198. self.check_related_objects(join_info.final_field, value, join_info.opts)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in check_related_objects
1065. for v in value:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/utils/functional.py" in inner
214. return func(self._wrapped, *args)
Exception Type: TypeError at /topics/
Exception Value: 'AnonymousUser' object is not iterable
```
**Second error**
I used the following code:
```py
# this should really be a method on a custom ModelManager
def _get_topics_for_user(user):
" returns a queryset of topics the user can access "
q = Q(public=True)
# if django < 1.10 you want "user.is_authenticated()" (with parens)
if user.is_authenticated:
# adds user's own private topics to the query
q = q | Q(private=True, owner=user)
return Topic.objects.filter(q)
def topics(request):
topics = _get_topics_for_user(request.user).order_by('date_added')
context = {'topics': topics}
return render(request, 'learning_logs/topics.html', context)
def topic(request, topic_id):
topics = _get_topics_for_user(request.user)
# here we're passing the filtered queryset, so
# if the topic "topic_id" is private and the user is either
# anonymous or not the topic owner, it will raise a 404
topic = get_object_or_404(topics, id=topic_id)
entries = topic.entry_set.order_by('-date_added')
context = {'topic': topic, 'entries': entries}
return render(request, 'learning_logs/topic.html', context)
```
```
Traceback:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
126. response = self.process_exception_by_middleware(e, request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
124. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in topics
31. topics = _get_topics_for_user(request.user).order_by('date_added')
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in _get_topics_for_user
27. return Topic.objects.filter(q)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/manager.py" in manager_method
82. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in filter
844. return self._filter_or_exclude(False, *args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in _filter_or_exclude
862. clone.query.add_q(Q(*args, **kwargs))
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in add_q
1263. clause, _ = self._add_q(q_object, self.used_aliases)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1281. current_negated, allow_joins, split_subq)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1281. current_negated, allow_joins, split_subq)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1287. split_subq=split_subq,
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in build_filter
1164. lookups, parts, reffed_expression = self.solve_lookup_type(arg)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in solve_lookup_type
1028. _, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in names_to_path
1389. "Choices are: %s" % (name, ", ".join(available)))
Exception Type: FieldError at /topics/
Exception Value: Cannot resolve keyword 'private' into field. Choices are: date_added, entry, id, owner, owner_id, public, text
```
|
2018/10/11
|
[
"https://Stackoverflow.com/questions/52751675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
The `ldict = {}` trick creates a substitute local namespace for use inside `exec`. This is useful because the dict returned by `locals()` doesn't write through to your actual locals like it used to in Python 2.
But that substitute namespace `{}` is empty. It doesn't contain your `locals()`, therefore it doesn't have `val` in it. Try using `ldict = {**locals()}` instead to copy the contents of your locals to the substitute locals `ldict`.
---
Remember that you have to read all the "locals" created by exec from the ldict. Thus, `print(tg)` won't work either, because it was only ever assigned in one of the substitute local namespaces. You probably don't want to make a new one every loop. Just `.update()` one you make in advance.
```
def f():
ldict = {}
for key,val in measurements.items():
ldict.update(locals())
exec(key + ' = val', globals(),ldict)
key = ldict[key]
# exec(key + ' = val') in globals(),locals()
print (ldict['tg'])
```
---
The number and names of locals must be known in advance by the compiler in Python3 for performance optimizations. (This doesn't apply to `globals()`, they still write through.)
If you know them in advance, you can just assign from them, e.g.
```
tg = ldict['tg']
print(tg)
```
If you need more than one you could unpack a dictionary into locals, like
```
a, b, c = (ldict[key] for key in ['a', 'b', 'c'])
```
Or you could dump the whole dict into a simple namespace and access them with `.` instead of `[]`.
```
from types import SimpleNamespace
# ...
ns = SimpleNamespace(**ldict)
print(ns.tg)
```
You could also just `exec` any code that needs the new locals, since you can give `exec` the `ldict` namespace.
```
exec("print(tg)", globals(), ldcit)
```
---
I understand that your example code may be simplified from the original, but it does not appear to need `exec` at all. It is generally considered bad form to use `exec` unless you absolutely need it, since it confuses static analysis tools and compiling strings at runtime is slow, especially if repeated in a loop like that.
If you must use exec, it's better to put the loop inside the exec string (use triple quotes) than the exec call inside the loop. That way the string only has to be compiled once, instead of for each loop.
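For example, a minimal sketch of that last point, at module level (module globals are writable, unlike function locals) and assuming every key is a valid identifier:
```
measurements = {'tg': [8.184e+16, 8.345e+16, 8.045e+16]}

# Build one code string and compile/exec it once, instead of calling exec
# inside the loop for every key.
exec("\n".join(f"{key} = measurements[{key!r}]" for key in measurements))

print(tg)  # 'tg' is now an ordinary module-level name
```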
|
```
measurements = {'tg': [8.184e+16, 8.345e+16, 8.045e+16, 8.520e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]}
def f():
for key,val in measurements.items():
exec('{} = {}'.format(key, val))
print (tg)
local = locals()
for key in measurements.keys():
print 'Key: ', key, ', Value: ', local[key]
f()
```
python3:
```
measurements = {'tg': [8.184e+16, 8.345e+16, 8.045e+16, 8.520e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]}
def f():
for key,val in measurements.items():
exec('global {};{} = {}'.format(key, key, val))
print ('tg: ', tg)
vars = globals()
for key in measurements.keys():
print ('Key: ', key, ', Value: ', vars[key])
f()
```
output:
```
[8.184e+16, 8.345e+16, 8.045e+16, 8.52e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]
Key: tg , Value: [8.184e+16, 8.345e+16, 8.045e+16, 8.52e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]
```
|
52,751,675 |
**The problem**
I am creating a project where you can make topics that can be private or public to unauthenticated users. In every topic, you can then make several entries applying to that topic. Now I'm trying to make a checkbox in my new\_topic.html that evaluates to True when checked and False when not.
But I can't see the topic without logging in.
**What I want**
I would like the view to show topics whose public property is True to users who have not logged in, but I do not know how to implement this.
**What I've tried**
I played with the query set topic.public == True
but I don't know how to use that
**The Code**
My /learning\_logs/models.py looks like this:
```py
from django.db import models
from django.contrib.auth.models import User
class Topic(models.Model):
"""topic은 사용자가 공부하고 있는 주제이다."""
text = models.CharField(max_length=200)
date_added = models.DateTimeField(auto_now_add=True)
owner = models.ForeignKey(User, on_delete = models.CASCADE)
public = models.BooleanField(default=False)
def __str__(self):
"""모델에 관한 정보를 문자열 형태로 변환한다."""
return self.text
class Entry(models.Model):
"""주제에 관해 공부한 내용"""
topic = models.ForeignKey(Topic, on_delete = models.CASCADE)
text = models.TextField()
date_added = models.DateTimeField(auto_now_add=True)
class Meta:
verbose_name_plural = 'entries'
def __str__(self):
"""모델에 관한 정보를 문자열 형태로 반환한다."""
if self.text[:] > self.text[:50]:
return self.text[:50] + "..."
else:
return self.text[:]
```
My /learning\_logs/views.py looks like this:
```python
from django.shortcuts import render, get_object_or_404
from django.http import HttpResponseRedirect, Http404
from django.urls import reverse
from .models import Topic, Entry
from .forms import TopicForm, EntryForm
from django.contrib.auth.decorators import login_required
from .models import Topic, Entry
from .forms import TopicForm, EntryForm
def index(request):
"""학습 로그 홈페이지"""
return render(request, 'learning_logs/index.html')
@login_required
def topics(request):
"""주제를 표시한다."""
topics = Topic.objects.filter(owner=request.user).order_by('date_added')
context = {'topics': topics}
return render(request, 'learning_logs/topics.html', context)
@login_required
def topic(request, topic_id):
"""주제 하나와 연결된 모든 항목을 표시한다."""
topic = get_object_or_404(Topic, id=topic_id)
# 주제가 현재 사용자의 것인지 확인한다.
check_user = check_topic_owner(request, topic)
entries = topic.entry_set.order_by('-date_added')
context = {'topic': topic, 'entries': entries}
return render(request, 'learning_logs/topic.html', context)
@login_required
def new_topic(request):
"""새 주제 추가"""
if request.method != 'POST':
# 들어온 데이터가 없을 때는 새 폼을 만든다.
form = TopicForm()
else:
# POST 데이터를 받아서 처리한다.
form = TopicForm(request.POST)
if form.is_valid():
new_topic = form.save(commit=False)
new_topic.owner = request.user
new_topic.save()
return HttpResponseRedirect(reverse('learning_logs:topics'))
context = {'form': form}
return render(request, 'learning_logs/new_topic.html', context)
@login_required
def new_entry(request, topic_id):
"""특정 주제에 관한 새 항목을 추가"""
topic = get_object_or_404(Topic, id=topic_id)
check_user = check_topic_owner(request, topic)
if request.method != 'POST':
# 전송된 데이터가 없으므로 빈 폼을 만든다.
form = EntryForm()
else:
# 받은 POST 데이터를 처리한다.
form = EntryForm(data=request.POST)
if form.is_valid():
new_entry = form.save(commit=False)
new_entry.topic = topic
new_entry.save()
return HttpResponseRedirect(reverse('learning_logs:topic',
args=[topic_id]))
context = {'topic':topic, 'form':form}
return render(request, 'learning_logs/new_entry.html', context)
@login_required
def edit_entry(request, entry_id):
"""기존 항목을 편집한다."""
entry = get_object_or_404(Entry, id=entry_id)
topic = entry.topic
check_user = check_topic_owner(request, topic)
if request.method != 'POST':
# 첫 요청이므로 폼을 현재 텍스트로 채운다.
form = EntryForm(instance=entry)
else:
# POST 데이터를 받았으므로 받은 데이터를 처리한다.
form = EntryForm(instance=entry, data=request.POST)
return HttpResponseRedirect(reverse('learning_logs:topic',
args=[topic.id]))
context = {'entry': entry, 'topic':topic, 'form': form}
return render(request, 'learning_logs/edit_entry.html', context)
def check_topic_owner(request, topic):
"""현재 유저가 올바른 유저인지 체크한다"""
if topic.owner != request.user:
raise Http404
```
My /learning\_logs/forms.py looks like this
```python
from django import forms
from .models import Topic, Entry
class TopicForm(forms. ModelForm):
class Meta:
model = Topic
fields = ['text', 'public']
lavels = {'text' : '', 'public' : 'lavel for public'}
class TopicForm(forms.ModelForm):
class Meta:
model = Topic
fields = ['text']
labels = {'text': ''}
class EntryForm(forms.ModelForm):
class Meta:
model = Entry
fields = ['text']
labels = {'text' : ''}
widgets = {'text' : forms.Textarea(attrs={'cols':80})}
labels = {'text':''}
widgets = {'text': forms.Textarea(attrs={'cols':80})}
```
My /learning\_logs/templates/learning\_logs/new\_topic.html
```html
{% extends "learning_logs/base.html" %}
{% load bootstrap3 %}
{% block header %}
<h2>Add a new topic:</h2>
{% endblock %}
{% block content %}
<form action="{% url 'learning_logs:new_topic' %}" method='post'
class="form">
{% csrf_token %}
{% bootstrap_form form %}
<div class="form-check">
<input type="checkbox" class="form-check-input" value=True id="public" />
<label class="form-check-label">
Make it public?
</label>
</div>
{% buttons %}
<button name="submit" class="btn btn-primary">add topic</button>
{% endbuttons %}
</form>
{% endblock %}
```
**Another error:**
If I delete @login\_required I get the following error:
```
Traceback:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
126. response = self.process_exception_by_middleware(e, request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
124. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in topics
31. topics = Topic.objects.filter(owner=request.user).order_by('date_added')
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/manager.py" in manager_method
82. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in filter
844. return self._filter_or_exclude(False, *args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in _filter_or_exclude
862. clone.query.add_q(Q(*args, **kwargs))
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in add_q
1263. clause, _ = self._add_q(q_object, self.used_aliases)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1287. split_subq=split_subq,
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in build_filter
1198. self.check_related_objects(join_info.final_field, value, join_info.opts)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in check_related_objects
1065. for v in value:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/utils/functional.py" in inner
214. return func(self._wrapped, *args)
Exception Type: TypeError at /topics/
Exception Value: 'AnonymousUser' object is not iterable
```
**Second error**
I used the following code:
```python
# this should really be a method on a custom ModelManager
def _get_topics_for_user(user):
" returns a queryset of topics the user can access "
q = Q(public=True)
# if django < 1.10 you want "user.is_authenticated()" (with parens)
if user.is_authenticated:
# adds user's own private topics to the query
q = q | Q(private=True, owner=user)
return Topic.objects.filter(q)
def topics(request):
topics = _get_topics_for_user(request.user).order_by('date_added')
context = {'topics': topics}
return render(request, 'learning_logs/topics.html', context)
def topic(request, topic_id):
topics = _get_topics_for_user(request.user)
# here we're passing the filtered queryset, so
# if the topic "topic_id" is private and the user is either
# anonymous or not the topic owner, it will raise a 404
topic = get_object_or_404(topics, id=topic_id)
entries = topic.entry_set.order_by('-date_added')
context = {'topic': topic, 'entries': entries}
return render(request, 'learning_logs/topic.html', context)
```
```
Traceback:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
126. response = self.process_exception_by_middleware(e, request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
124. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in topics
31. topics = _get_topics_for_user(request.user).order_by('date_added')
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in _get_topics_for_user
27. return Topic.objects.filter(q)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/manager.py" in manager_method
82. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in filter
844. return self._filter_or_exclude(False, *args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in _filter_or_exclude
862. clone.query.add_q(Q(*args, **kwargs))
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in add_q
1263. clause, _ = self._add_q(q_object, self.used_aliases)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1281. current_negated, allow_joins, split_subq)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1281. current_negated, allow_joins, split_subq)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1287. split_subq=split_subq,
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in build_filter
1164. lookups, parts, reffed_expression = self.solve_lookup_type(arg)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in solve_lookup_type
1028. _, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in names_to_path
1389. "Choices are: %s" % (name, ", ".join(available)))
Exception Type: FieldError at /topics/
Exception Value: Cannot resolve keyword 'private' into field. Choices are: date_added, entry, id, owner, owner_id, public, text
```
|
2018/10/11
|
[
"https://Stackoverflow.com/questions/52751675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
The `ldict = {}` trick creates a substitute local namespace for use inside `exec`. This is useful because the dict returned by `locals()` doesn't write through to your actual locals like it used to in Python 2.
But that substitute namespace `{}` is empty. It doesn't contain your `locals()`, therefore it doesn't have `val` in it. Try using `ldict = {**locals()}` instead to copy the contents of your locals to the substitute locals `ldict`.
---
Remember that you have to read all the "locals" created by exec from the ldict. Thus, `print(tg)` won't work either, because it was only ever assigned in one of the substitute local namespaces. You probably don't want to make a new one every loop. Just `.update()` one you make in advance.
```
def f():
ldict = {}
for key,val in measurements.items():
ldict.update(locals())
exec(key + ' = val', globals(),ldict)
key = ldict[key]
# exec(key + ' = val') in globals(),locals()
print (ldict['tg'])
```
---
The number and names of locals must be known in advance by the compiler in Python3 for performance optimizations. (This doesn't apply to `globals()`, they still write through.)
If you know them in advance, you can just assign from them, e.g.
```
tg = ldict['tg']
print(tg)
```
If you need more than one you could unpack a dictionary into locals, like
```
a, b, c = (ldict[key] for key in ['a', 'b', 'c'])
```
Or you could dump the whole dict into a simple namespace and access them with `.` instead of `[]`.
```
from types import SimpleNamespace
# ...
ns = SimpleNamespace(**ldict)
print(ns.tg)
```
You could also just `exec` any code that needs the new locals, since you can give `exec` the `ldict` namespace.
```
exec("print(tg)", globals(), ldcit)
```
---
I understand that your example code may be simplified from the original, but it does not appear to need `exec` at all. It is generally considered bad form to use `exec` unless you absolutely need it, since it confuses static analysis tools and compiling strings at runtime is slow, especially if repeated in a loop like that.
If you must use exec, it's better to put the loop inside the exec string (use triple quotes) than the exec call inside the loop. That way the string only has to be compiled once, instead of for each loop.
|
Years later, but I've since realized that you can (ab)use the temporary namespace made in class declarations to get a write-through locals() namespace inside of functions. This allows you to translate the Python 2 version pretty directly:
```
measurements = {'tg': [8.184e+16, 8.345e+16, 8.045e+16, 8.520e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]}
def f():
class _:
for key,val in measurements.items():
exec(key + ' = val', globals(), locals())
print (tg)
f()
```
|
52,751,675 |
**There is a problem**
I am creating a project, where you can make topics, that can be private or public to unauthenticated users. In every topic, you can then make several entries, applying to that topic. Now I'm trying to make a checkbox in my new\_topic.html, where if you check it, it evaluates to True, if not, to False
But I can't see the topic without logging in
**What I want**
I would like to show public subjects to users who have not logged in by changing the public property to True in the view, but I do not know how to implement them
**What I've tried**
I played with the query set topic.public == True
but I don't know how to use that
**The Code**
My /learning\_logs/models.py looks like this:
```python
from django.db import models
from django.contrib.auth.models import User
class Topic(models.Model):
"""topic은 사용자가 공부하고 있는 주제이다."""
text = models.CharField(max_length=200)
date_added = models.DateTimeField(auto_now_add=True)
owner = models.ForeignKey(User, on_delete = models.CASCADE)
public = models.BooleanField(default=False)
def __str__(self):
"""모델에 관한 정보를 문자열 형태로 변환한다."""
return self.text
class Entry(models.Model):
"""주제에 관해 공부한 내용"""
topic = models.ForeignKey(Topic, on_delete = models.CASCADE)
text = models.TextField()
date_added = models.DateTimeField(auto_now_add=True)
class Meta:
verbose_name_plural = 'entries'
def __str__(self):
"""모델에 관한 정보를 문자열 형태로 반환한다."""
if self.text[:] > self.text[:50]:
return self.text[:50] + "..."
else:
return self.text[:]
```
My /learning\_logs/views.py looks like this:
```python
from django.shortcuts import render, get_object_or_404
from django.http import HttpResponseRedirect, Http404
from django.urls import reverse
from .models import Topic, Entry
from .forms import TopicForm, EntryForm
from django.contrib.auth.decorators import login_required
from .models import Topic, Entry
from .forms import TopicForm, EntryForm
def index(request):
"""학습 로그 홈페이지"""
return render(request, 'learning_logs/index.html')
@login_required
def topics(request):
"""주제를 표시한다."""
topics = Topic.objects.filter(owner=request.user).order_by('date_added')
context = {'topics': topics}
return render(request, 'learning_logs/topics.html', context)
@login_required
def topic(request, topic_id):
"""주제 하나와 연결된 모든 항목을 표시한다."""
topic = get_object_or_404(Topic, id=topic_id)
# 주제가 현재 사용자의 것인지 확인한다.
check_user = check_topic_owner(request, topic)
entries = topic.entry_set.order_by('-date_added')
context = {'topic': topic, 'entries': entries}
return render(request, 'learning_logs/topic.html', context)
@login_required
def new_topic(request):
"""새 주제 추가"""
if request.method != 'POST':
# 들어온 데이터가 없을 때는 새 폼을 만든다.
form = TopicForm()
else:
# POST 데이터를 받아서 처리한다.
form = TopicForm(request.POST)
if form.is_valid():
new_topic = form.save(commit=False)
new_topic.owner = request.user
new_topic.save()
return HttpResponseRedirect(reverse('learning_logs:topics'))
context = {'form': form}
return render(request, 'learning_logs/new_topic.html', context)
@login_required
def new_entry(request, topic_id):
"""특정 주제에 관한 새 항목을 추가"""
topic = get_object_or_404(Topic, id=topic_id)
check_user = check_topic_owner(request, topic)
if request.method != 'POST':
# 전송된 데이터가 없으므로 빈 폼을 만든다.
form = EntryForm()
else:
# 받은 POST 데이터를 처리한다.
form = EntryForm(data=request.POST)
if form.is_valid():
new_entry = form.save(commit=False)
new_entry.topic = topic
new_entry.save()
return HttpResponseRedirect(reverse('learning_logs:topic',
args=[topic_id]))
context = {'topic':topic, 'form':form}
return render(request, 'learning_logs/new_entry.html', context)
@login_required
def edit_entry(request, entry_id):
"""기존 항목을 편집한다."""
entry = get_object_or_404(Entry, id=entry_id)
topic = entry.topic
check_user = check_topic_owner(request, topic)
if request.method != 'POST':
# 첫 요청이므로 폼을 현재 텍스트로 채운다.
form = EntryForm(instance=entry)
else:
# POST 데이터를 받았으므로 받은 데이터를 처리한다.
form = EntryForm(instance=entry, data=request.POST)
return HttpResponseRedirect(reverse('learning_logs:topic',
args=[topic.id]))
context = {'entry': entry, 'topic':topic, 'form': form}
return render(request, 'learning_logs/edit_entry.html', context)
def check_topic_owner(request, topic):
"""현재 유저가 올바른 유저인지 체크한다"""
if topic.owner != request.user:
raise Http404
```
My /learning\_logs/forms.py looks like this
```python
from django import forms
from .models import Topic, Entry
class TopicForm(forms. ModelForm):
class Meta:
model = Topic
fields = ['text', 'public']
lavels = {'text' : '', 'public' : 'lavel for public'}
class TopicForm(forms.ModelForm):
class Meta:
model = Topic
fields = ['text']
labels = {'text': ''}
class EntryForm(forms.ModelForm):
class Meta:
model = Entry
fields = ['text']
labels = {'text' : ''}
widgets = {'text' : forms.Textarea(attrs={'cols':80})}
labels = {'text':''}
widgets = {'text': forms.Textarea(attrs={'cols':80})}
```
My /learning\_logs/templates/learning\_logs/new\_topic.html
```html
{% extends "learning_logs/base.html" %}
{% load bootstrap3 %}
{% block header %}
<h2>Add a new topic:</h2>
{% endblock %}
{% block content %}
<form action="{% url 'learning_logs:new_topic' %}" method='post'
class="form">
{% csrf_token %}
{% bootstrap_form form %}
<div class="form-check">
<input type="checkbox" class="form-check-input" value=True id="public" />
<label class="form-check-label">
Make it public?
</label>
</div>
{% buttons %}
<button name="submit" class="btn btn-primary">add topic</button>
{% endbuttons %}
</form>
{% endblock %}
```
**Another error:**
If I delete @login\_required I get the following error:
```
Traceback:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
126. response = self.process_exception_by_middleware(e, request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
124. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in topics
31. topics = Topic.objects.filter(owner=request.user).order_by('date_added')
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/manager.py" in manager_method
82. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in filter
844. return self._filter_or_exclude(False, *args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in _filter_or_exclude
862. clone.query.add_q(Q(*args, **kwargs))
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in add_q
1263. clause, _ = self._add_q(q_object, self.used_aliases)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1287. split_subq=split_subq,
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in build_filter
1198. self.check_related_objects(join_info.final_field, value, join_info.opts)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in check_related_objects
1065. for v in value:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/utils/functional.py" in inner
214. return func(self._wrapped, *args)
Exception Type: TypeError at /topics/
Exception Value: 'AnonymousUser' object is not iterable
```
**Second error**
I used the following code:
```python
# this should really be a method on a custom ModelManager
def _get_topics_for_user(user):
" returns a queryset of topics the user can access "
q = Q(public=True)
# if django < 1.10 you want "user.is_authenticated()" (with parens)
if user.is_authenticated:
# adds user's own private topics to the query
q = q | Q(private=True, owner=user)
return Topic.objects.filter(q)
def topics(request):
topics = _get_topics_for_user(request.user).order_by('date_added')
context = {'topics': topics}
return render(request, 'learning_logs/topics.html', context)
def topic(request, topic_id):
topics = _get_topics_for_user(request.user)
# here we're passing the filtered queryset, so
# if the topic "topic_id" is private and the user is either
# anonymous or not the topic owner, it will raise a 404
topic = get_object_or_404(topics, id=topic_id)
entries = topic.entry_set.order_by('-date_added')
context = {'topic': topic, 'entries': entries}
return render(request, 'learning_logs/topic.html', context)
```
```
Traceback:
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
126. response = self.process_exception_by_middleware(e, request)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
124. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in topics
31. topics = _get_topics_for_user(request.user).order_by('date_added')
File "/mnt/c/Users/heoje/Desktop/linked/learning_log/learning_logs/views.py" in _get_topics_for_user
27. return Topic.objects.filter(q)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/manager.py" in manager_method
82. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in filter
844. return self._filter_or_exclude(False, *args, **kwargs)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/query.py" in _filter_or_exclude
862. clone.query.add_q(Q(*args, **kwargs))
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in add_q
1263. clause, _ = self._add_q(q_object, self.used_aliases)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1281. current_negated, allow_joins, split_subq)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1281. current_negated, allow_joins, split_subq)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in _add_q
1287. split_subq=split_subq,
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in build_filter
1164. lookups, parts, reffed_expression = self.solve_lookup_type(arg)
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in solve_lookup_type
1028. _, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())
File "/mnt/c/Users/heoje/Desktop/linked/choco/lib/python3.6/site-packages/django/db/models/sql/query.py" in names_to_path
1389. "Choices are: %s" % (name, ", ".join(available)))
Exception Type: FieldError at /topics/
Exception Value: Cannot resolve keyword 'private' into field. Choices are: date_added, entry, id, owner, owner_id, public, text
```
|
2018/10/11
|
[
"https://Stackoverflow.com/questions/52751675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
```
measurements = {'tg': [8.184e+16, 8.345e+16, 8.045e+16, 8.520e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]}
def f():
for key,val in measurements.items():
exec('{} = {}'.format(key, val))
print (tg)
local = locals()
for key in measurements.keys():
print 'Key: ', key, ', Value: ', local[key]
f()
```
python3:
```
measurements = {'tg': [8.184e+16, 8.345e+16, 8.045e+16, 8.520e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]}
def f():
for key,val in measurements.items():
exec('global {};{} = {}'.format(key, key, val))
print ('tg: ', tg)
vars = globals()
for key in measurements.keys():
print ('Key: ', key, ', Value: ', vars[key])
f()
```
output:
```
[8.184e+16, 8.345e+16, 8.045e+16, 8.52e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]
Key: tg , Value: [8.184e+16, 8.345e+16, 8.045e+16, 8.52e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]
```
|
Years later, but I've since realized that you can (ab)use the temporary namespace made in class declarations to get a write-through locals() namespace inside of functions. This allows you to translate the Python 2 version pretty directly:
```
measurements = {'tg': [8.184e+16, 8.345e+16, 8.045e+16, 8.520e+16, 8.322e+16, 7.622e+16, 4.305e+16, 2.203e+16]}
def f():
class _:
for key,val in measurements.items():
exec(key + ' = val', globals(), locals())
print (tg)
f()
```
|
27,369,379 |
I wanted to access a dictionary created in file1.py from file2.py based on value passed in a variable. Both files in same directory.
File1.py looks like
-------------------
```
dict_test_1 = {'a':1, 'b': 2}
dict_test_2 = {'a':11, 'b': 12}
```
file2.py looks like:
--------------------
```
import file1
def fun(dict_name):
var_a = file1.dict_name['a'] ## I wanted an equivalent of file1.dict_test_1['a'] or file.dict_test_2['a']
```
fun(deciding variable which is coming from run time) # Calling function
I need to access either of the dictionaries created in file1.py at run time via 'dict\_name', but the problem is that Python actually treats 'dict\_name' as if it were one of the dictionaries defined in file1.py, instead of taking the value of the 'dict\_name' variable passed to the function 'fun' and looking up the corresponding dictionary.
Any solutions??
|
2014/12/09
|
[
"https://Stackoverflow.com/questions/27369379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2033758/"
] |
this ought to do what you want
```
import random as RND
import string
import itertools as IT
```
*mock some data*
```
fnx = lambda: set(RND.sample(string.ascii_uppercase, 7))
S = [fnx() for c in range(5)]
```
*generate an index list of the sets in S so the sets can be referenced more concisely below*
```
idx = range(len(S))
```
*get all possible unique pairs of the items in S; however, since set intersection is commutative, we want the **combinations** rather than permutations*
```
pairs = IT.combinations(idx, 2)
```
*write a function to perform the set intersection*
```
nt = lambda a, b: S[a].intersection(S[b])
```
*fold this function over the pairs & key the result from each function call to its arguments*
```
res = dict([ (t, nt(*t)) for t in pairs ])
```
the result below, formatted per the first option recited in the OP, is a dictionary in which the *values* are the set intersections of two sequences; each value is *keyed* to a tuple comprised of the two indices of those sequences
this solution is really just *two* lines of code: (i) calculate the combinations; (ii) then apply some function over each pair, storing the returned value in a structured key-value container
the memory footprint of this solution is minimal, but you can do even better by returning a generator expression in the last step, ie
```
res = ( (t, nt(*t)) for t in pairs )
```
notice that with this approach, neither the sequence of pairs nor the corresponding intersections have been written out in memory--ie, both *pairs* and *res* are iterators.
|
If we can assume that the input sets are ordered, a pseudo-mergesort approach seems promising. Treating each set as a sorted stream, advance the streams in parallel, always only advancing those where the value is the lowest among all current iterators. Compare each current value with the new minimum every time an iterator is advanced, and dump the matches into your same-item collections.
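A rough sketch of that idea, assuming the inputs are already sorted lists of comparable items (the function name, output format, and sample data are mine, not the answer's):
```
from collections import defaultdict

def pairwise_matches(sorted_seqs):
    iters = [iter(s) for s in sorted_seqs]
    current = [next(it, None) for it in iters]          # head of each stream
    matches = defaultdict(set)                          # (i, j) -> items seen in both i and j
    while any(v is not None for v in current):
        lo = min(v for v in current if v is not None)   # current minimum across all streams
        holders = [i for i, v in enumerate(current) if v == lo]
        for a in holders:                               # every pair sharing the minimum is a match
            for b in holders:
                if a < b:
                    matches[(a, b)].add(lo)
        for i in holders:                               # advance only the streams at the minimum
            current[i] = next(iters[i], None)
    return dict(matches)

print(pairwise_matches([[1, 3, 5], [1, 2, 5, 9], [2, 5]]))
# {(0, 1): {1, 5}, (0, 2): {5}, (1, 2): {2, 5}}
```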
|
27,369,379 |
I wanted to access a dictionary created in file1.py from file2.py based on value passed in a variable. Both files in same directory.
File1.py looks like
-------------------
```
dict_test_1 = {'a':1, 'b': 2}
dict_test_2 = {'a':11, 'b': 12}
```
file2.py looks like:
--------------------
```
import file1
def fun(dict_name):
var_a = file1.dict_name['a'] ## I wanted an equivalent of file1.dict_test_1['a'] or file.dict_test_2['a']
```
fun(deciding variable which is coming from run time) # Calling function
I need to access either of the dictionaries created in file1.py at run time via 'dict\_name', but the problem is that Python actually treats 'dict\_name' as if it were one of the dictionaries defined in file1.py, instead of taking the value of the 'dict\_name' variable passed to the function 'fun' and looking up the corresponding dictionary.
Any solutions??
|
2014/12/09
|
[
"https://Stackoverflow.com/questions/27369379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2033758/"
] |
this ought to do what you want
```
import random as RND
import string
import itertools as IT
```
*mock some data*
```
fnx = lambda: set(RND.sample(string.ascii_uppercase, 7))
S = [fnx() for c in range(5)]
```
*generate an index list of the sets in S so the sets can be referenced more concisely below*
```
idx = range(len(S))
```
*get all possible unique pairs of the items in S; however, since set intersection is commutative, we want the **combinations** rather than permutations*
```
pairs = IT.combinations(idx, 2)
```
*write a function to perform the set intersection*
```
nt = lambda a, b: S[a].intersection(S[b])
```
*fold this function over the pairs & key the result from each function call to its arguments*
```
res = dict([ (t, nt(*t)) for t in pairs ])
```
the result below, formatted per the first option recited in the OP, is a dictionary in which the *values* are the set intersections of two sequences; each value is *keyed* to a tuple comprised of the two indices of those sequences
this solution is really just *two* lines of code: (i) calculate the combinations; (ii) then apply some function over each pair, storing the returned value in a structured key-value container
the memory footprint of this solution is minimal, but you can do even better by returning a generator expression in the last step, ie
```
res = ( (t, nt(*t)) for t in pairs )
```
notice that with this approach, neither the sequence of pairs nor the corresponding intersections have been written out in memory--ie, both *pairs* and *res* are iterators.
|
How about using the intersection method of set? See below:
```
A={"a","b","c"}
B={"c","d","e"}
C={"a","c","e"}
intersect_AB = A.intersection(B)
intersect_BC = B.intersection(C)
intersect_AC = A.intersection(C)
print(intersect_AB, intersect_BC, intersect_AC)
```
|
27,369,379 |
I wanted to access a dictionary created in file1.py from file2.py based on value passed in a variable. Both files in same directory.
File1.py looks like
-------------------
```
dict_test_1 = {'a':1, 'b': 2}
dict_test_2 = {'a':11, 'b': 12}
```
file2.py looks like:
--------------------
```
import file1
def fun(dict_name):
var_a = file1.dict_name['a'] ## I wanted an equivalent of file1.dict_test_1['a'] or file.dict_test_2['a']
```
fun(deciding variable which is coming from run time) # Calling function
I need to access either of the dictionaries created in file1.py at run time via 'dict\_name', but the problem is that Python actually treats 'dict\_name' as if it were one of the dictionaries defined in file1.py, instead of taking the value of the 'dict\_name' variable passed to the function 'fun' and looking up the corresponding dictionary.
Any solutions??
|
2014/12/09
|
[
"https://Stackoverflow.com/questions/27369379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2033758/"
] |
If we can assume that the input sets are ordered, a pseudo-mergesort approach seems promising. Treating each set as a sorted stream, advance the streams in parallel, always only advancing those where the value is the lowest among all current iterators. Compare each current value with the new minimum every time an iterator is advanced, and dump the matches into your same-item collections.
|
How about using the intersection method of set? See below:
```
A={"a","b","c"}
B={"c","d","e"}
C={"a","c","e"}
intersect_AB = A.intersection(B)
intersect_BC = B.intersection(C)
intersect_AC = A.intersection(C)
print(intersect_AB, intersect_BC, intersect_AC)
```
|
6,799,021 |
I develop a Python application which I decided to turn into a package to be installed by `easy_install` or `pip` later. I've used search to find several good sources about directory structure for Python packages; see [this answer](https://stackoverflow.com/questions/193161/what-is-the-best-project-structure-for-a-python-application/193181#193181) or [this post](http://jcalderone.livejournal.com/39794.html).
I created the following structure (I've omitted several files in the list to make the structure clearer)
```
Project/
|-- bin/
|-- my_package/
| |-- test/
| | |-- __init__.py
| | |-- test_server.py
| |-- __init__.py
| |-- server.py
| |-- util.py
|-- doc/
| |-- index.rst
|-- README.txt
|-- LICENSE.txt
|-- setup.py
```
After that I created executable script `server-run`
```
#!/usr/bin/env python
from my_package import server
server.main()
```
which I placed into `bin` directory. If I install my package with `python setup.py install` or via `pip/easy_install` everything works fine, i can run `server-run` and my server starts to handle incoming requests.
But my question is how to test that `server-run` works in development environment (**without prior installation of `my_package`**)? Also I want to use this script to run latest server code for dev purposes.
Development happens in `Project` directory so i am getting `ImportError` if i run `./bin/server-run`
```
user@host:~/dev/Project/$ ./bin/server-run
Traceback (most recent call last):
File "./bin/server-run", line 2, in
import my_package
ImportError: No module named my_package
```
Is it possible to modify `bin/server-run` script so it will work if i run it from another folder somewhere in the filesystem (not necessarily from `Project` dir)? Also note that I want to use (if it is possible to achieve) the same script to run server in production environment.
|
2011/07/23
|
[
"https://Stackoverflow.com/questions/6799021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471899/"
] |
You need relative imports. Try
```
from .. import mypackage
```
or
```
from ..mypackage import server
```
The documentation is here
<http://docs.python.org/tutorial/modules.html#intra-package-references>
These work on Python 2.5 or newer.
To do it only in the development version, try:
```
try:
from my_package import server
except ImportError:
from ..my_package import server
```
|
You can use [`virtualenv`](http://pypi.python.org/pypi/virtualenv) for testing Python code while in development as if it was released
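A rough sketch of that workflow (the commands assume `setup.py` in the project root declares the `bin/server-run` script; names are illustrative):
```
virtualenv venv                # or: python -m venv venv
source venv/bin/activate
pip install -e .               # editable install: the package resolves to the source tree
server-run                     # the installed script now runs the latest development code
deactivate
```
Because the install is editable, edits under `my_package/` take effect without reinstalling.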
|
6,799,021 |
I develop a Python application which I decided to turn into a package to be installed by `easy_install` or `pip` later. I've used search to find several good sources about directory structure for Python packages; see [this answer](https://stackoverflow.com/questions/193161/what-is-the-best-project-structure-for-a-python-application/193181#193181) or [this post](http://jcalderone.livejournal.com/39794.html).
I created the following structure (I've omitted several files in the list to make the structure clearer)
```
Project/
|-- bin/
|-- my_package/
| |-- test/
| | |-- __init__.py
| | |-- test_server.py
| |-- __init__.py
| |-- server.py
| |-- util.py
|-- doc/
| |-- index.rst
|-- README.txt
|-- LICENSE.txt
|-- setup.py
```
After that I created executable script `server-run`
```
#!/usr/bin/env python
from my_package import server
server.main()
```
which I placed into `bin` directory. If I install my package with `python setup.py install` or via `pip/easy_install` everything works fine, i can run `server-run` and my server starts to handle incoming requests.
But my question is how to test that `server-run` works in development environment (**without prior installation of `my_package`**)? Also I want to use this script to run latest server code for dev purposes.
Development happens in `Project` directory so i am getting `ImportError` if i run `./bin/server-run`
```
user@host:~/dev/Project/$ ./bin/server-run
Traceback (most recent call last):
File "./bin/server-run", line 2, in
import my_package
ImportError: No module named my_package
```
Is it possible to modify `bin/server-run` script so it will work if i run it from another folder somewhere in the filesystem (not necessarily from `Project` dir)? Also note that I want to use (if it is possible to achieve) the same script to run server in production environment.
|
2011/07/23
|
[
"https://Stackoverflow.com/questions/6799021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471899/"
] |
You need relative imports. Try
```
from .. import mypackage
```
or
```
from ..mypackage import server
```
The documentation is here
<http://docs.python.org/tutorial/modules.html#intra-package-references>
These work on Python 2.5 or newer.
To do it only in the development version, try:
```
try:
from my_package import server
except ImportError:
from ..my_package import server
```
|
The simplest way is to configure the right Python path, so Python knows to look for `my_package` in the current directory.
On Linux (using Bash):
```
export PYTHONPATH=.
bin/server-run
```
On Windows:
```
set PYTHONPATH=.
python bin/server-run
```
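Another variant of the same idea (my own sketch, not part of the answer above) is to let the development script compute the path itself, so it also works when invoked from outside the `Project` directory:
```
#!/usr/bin/env python
# dev-only bootstrap: put the project root (the parent of bin/) on sys.path
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))

from my_package import server
server.main()
```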
|
6,799,021 |
I develop a Python application which I decided to turn into a package to be installed by `easy_install` or `pip` later. I've used search to find several good sources about directory structure for Python packages; see [this answer](https://stackoverflow.com/questions/193161/what-is-the-best-project-structure-for-a-python-application/193181#193181) or [this post](http://jcalderone.livejournal.com/39794.html).
I created the following structure (I've omitted several files in the list to make the structure clearer)
```
Project/
|-- bin/
|-- my_package/
| |-- test/
| | |-- __init__.py
| | |-- test_server.py
| |-- __init__.py
| |-- server.py
| |-- util.py
|-- doc/
| |-- index.rst
|-- README.txt
|-- LICENSE.txt
|-- setup.py
```
After that I created executable script `server-run`
```
#!/usr/bin/env python
from my_package import server
server.main()
```
which I placed into `bin` directory. If I install my package with `python setup.py install` or via `pip/easy_install` everything works fine, i can run `server-run` and my server starts to handle incoming requests.
But my question is how to test that `server-run` works in development environment (**without prior installation of `my_package`**)? Also I want to use this script to run latest server code for dev purposes.
Development happens in `Project` directory so i am getting `ImportError` if i run `./bin/server-run`
```
user@host:~/dev/Project/$ ./bin/server-run
Traceback (most recent call last):
File "./bin/server-run", line 2, in
import my_package
ImportError: No module named my_package
```
Is it possible to modify `bin/server-run` script so it will work if i run it from another folder somewhere in the filesystem (not necessarily from `Project` dir)? Also note that I want to use (if it is possible to achieve) the same script to run server in production environment.
|
2011/07/23
|
[
"https://Stackoverflow.com/questions/6799021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471899/"
] |
You need relative imports. Try
```
from .. import mypackage
```
or
```
from ..mypackage import server
```
The documentation is here
<http://docs.python.org/tutorial/modules.html#intra-package-references>
These work on Python 2.5 or newer.
To do it only in the development version, try:
```
try:
from my_package import server
except ImportError:
from ..my_package import server
```
|
There is the [console\_scripts approach](https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html) now. See e.g.
```
entry_points={
'console_scripts': [
'wikibackup = wikibot.wikipush:mainBackup',
'wikiedit = wikibot.wikipush:mainEdit',
'wikinuke = wikibot.wikipush:mainNuke',
'wikipush = wikibot.wikipush:mainPush',
'wikiupload = wikibot.wikipush:mainUpload',
'wikiuser = wikibot.wikiuser:main',
],
},
```
from <https://pypi.org/project/py-3rdparty-mediawiki/> (where i am a committer).
If you do a pip install of that package the above scripts will be installed as part of the installation process.
see <https://github.com/WolfgangFahl/py-3rdparty-mediawiki/blob/master/setup.py> for the full source code of the setup script.
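For development, the same entry points can be exercised without a full install by using an editable install from the project root (assuming a working `setup.py`):
```
pip install -e .
wikipush --help    # the generated console script now runs against the checked-out source
```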
|
63,619,836 |
I am just getting into python and am specifically using Pillow and piexif to extract and edit EXIF data for bunch of jpegs that I have. I used piexif to extract and read the EXIF data information like ImageDescription, and noticed lots of fields have random letters in front; when I first pulled ImageDescription, it read b'Olympus.....' I edited the tag and when I output it now gave me, as a test, just test (no b or apostrophe's, and samples from piexif showed u's)
Anyone know the purpose of the apostrophe and/or the random letters?
|
2020/08/27
|
[
"https://Stackoverflow.com/questions/63619836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14177622/"
] |
You can use a regular expression to substitute the pattern as a whole:
```
re.sub(r'\*(.*?)\*', r'{i}\1{/i}', myvar)
```
In the regexp:
* `\*` matches a literal `*` (used twice)
* `(.*?)` matches any number of any (non-newline) characters, as few as possible - it is also in a capture group
In the replacement:
* `{i}` and `{/i}` are literals
* `\1` means to put what was in the first (and in this case, only) capture group
This gives:
```
>>> import re
>>> myvar = "this is an *italicized* substring"
>>> print(re.sub(r'\*(.*?)\*', r'{i}\1{/i}', myvar))
this is an {i}italicized{/i} substring
```
If you have more than one occurrence of the pattern, that will work also:
```
myvar = "this is an *italicized* substring, and here is *another* one"
```
will give
```
this is an {i}italicized{/i} substring, and here is {i}another{/i} one
```
|
Create a loop for parsing and keep a counter to track the asterisks; whenever the counter is even, keep the second (closing) tag, and replace the odd ones with the first (opening) tag.
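A literal sketch of that idea (the code and sample string are mine, not the answer author's): walk the string, count asterisks, and emit the opening tag on odd occurrences and the closing tag on even ones.
```
myvar = "this is an *italicized* substring"

out = []
count = 0
for ch in myvar:
    if ch == "*":
        count += 1
        out.append("{i}" if count % 2 == 1 else "{/i}")
    else:
        out.append(ch)

print("".join(out))   # this is an {i}italicized{/i} substring
```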
|
63,619,836 |
I am just getting into python and am specifically using Pillow and piexif to extract and edit EXIF data for bunch of jpegs that I have. I used piexif to extract and read the EXIF data information like ImageDescription, and noticed lots of fields have random letters in front; when I first pulled ImageDescription, it read b'Olympus.....' I edited the tag and when I output it now gave me, as a test, just test (no b or apostrophe's, and samples from piexif showed u's)
Anyone know the purpose of the apostrophe and/or the random letters?
|
2020/08/27
|
[
"https://Stackoverflow.com/questions/63619836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14177622/"
] |
You can use a regular expression to substitute the pattern as a whole:
```
re.sub(r'\*(.*?)\*', r'{i}\1{/i}', myvar)
```
In the regexp:
* `\*` matches a literal `*` (used twice)
* `(.*?)` matches any number of any (non-newline) characters, as few as possible - it is also in a capture group
In the replacement:
* `{i}` and `{/i}` are literals
* `\1` means to put what was in the first (and in this case, only) capture group
This gives:
```
>>> import re
>>> myvar = "this is an *italicized* substring"
>>> print(re.sub(r'\*(.*?)\*', r'{i}\1{/i}', myvar))
this is an {i}italicized{/i} substring
```
If you have more than one occurrence of the pattern, that will work also:
```
myvar = "this is an *italicized* substring, and here is *another* one"
```
will give
```
this is an {i}italicized{/i} substring, and here is {i}another{/i} one
```
|
you could use a for loop and say
```
myvar = "this is an *italicized* substring"
positions = []
for x in range(len(myvar)):
    if myvar[x] == "*":
        positions.append(x)
# note: this handles only the first pair of asterisks
inAsteriks = myvar[0:positions[0]] + "{i}" + myvar[positions[0]+1:positions[1]] + "{/i}" + myvar[positions[1]+1:]
```
|
63,619,836 |
I am just getting into python and am specifically using Pillow and piexif to extract and edit EXIF data for bunch of jpegs that I have. I used piexif to extract and read the EXIF data information like ImageDescription, and noticed lots of fields have random letters in front; when I first pulled ImageDescription, it read b'Olympus.....' I edited the tag and when I output it now gave me, as a test, just test (no b or apostrophe's, and samples from piexif showed u's)
Anyone know the purpose of the apostrophe and/or the random letters?
|
2020/08/27
|
[
"https://Stackoverflow.com/questions/63619836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14177622/"
] |
You can use a regular expression to substitute the pattern as a whole:
```
re.sub(r'\*(.*?)\*', r'{i}\1{/i}', myvar)
```
In the regexp:
* `\*` matches a literal `*` (used twice)
* `(.*?)` matches any number of any (non-newline) characters, as few as possible - it is also in a capture group
In the replacement:
* `{i}` and `{/i}` are literals
* `\1` means to put what was in the first (and in this case, only) capture group
This gives:
```
>>> import re
>>> myvar = "this is an *italicized* substring"
>>> print(re.sub(r'\*(.*?)\*', r'{i}\1{/i}', myvar))
this is an {i}italicized{/i} substring
```
If you have more than one occurrence of the pattern, that will work also:
```
myvar = "this is an *italicized* substring, and here is *another* one"
```
will give
```
this is an {i}italicized{/i} substring, and here is {i}another{/i} one
```
|
You can use [`re.sub`](https://docs.python.org/3/library/re.html#re.sub) with capture groups for that:
```py
import re
txt = "this is an *italicized* substring"
res = re.sub(r"\*([^*]+)\*", r"{i}\g<1>{/i}", txt)
```
will have `res` as:
```none
this is an {i}italicized{/i} substring
```
---
This pattern is pretty basic: It matches a literal `*`, then character **not** an asterisk, then another literal `*`. The main point here is that we use a capture group to catch the word part.
Then we simply substitute the full match with the word we saved (accessed by `\g<1>`) surrounded with your wanted characters.
---
[Demo here](https://regex101.com/r/j3sxRo/2)
|
63,619,836 |
I am just getting into python and am specifically using Pillow and piexif to extract and edit EXIF data for bunch of jpegs that I have. I used piexif to extract and read the EXIF data information like ImageDescription, and noticed lots of fields have random letters in front; when I first pulled ImageDescription, it read b'Olympus.....' I edited the tag and when I output it now gave me, as a test, just test (no b or apostrophe's, and samples from piexif showed u's)
Anyone know the purpose of the apostrophe and/or the random letters?
|
2020/08/27
|
[
"https://Stackoverflow.com/questions/63619836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14177622/"
] |
You can use [`re.sub`](https://docs.python.org/3/library/re.html#re.sub) with capture groups for that:
```py
import re
txt = "this is an *italicized* substring"
res = re.sub(r"\*([^*]+)\*", r"{i}\g<1>{/i}", txt)
```
will have `res` as:
```none
this is an {i}italicized{/i} substring
```
---
This pattern is pretty basic: It matches a literal `*`, then character **not** an asterisk, then another literal `*`. The main point here is that we use a capture group to catch the word part.
Then we simply substitute the full match with the word we saved (accessed by `\g<1>`) surrounded with your wanted characters.
---
[Demo here](https://regex101.com/r/j3sxRo/2)
|
Create a loop for parsing and keep a counter to track the asterisks; whenever the counter is even, keep the second (closing) tag, and replace the odd ones with the first (opening) tag.
|
63,619,836 |
I am just getting into python and am specifically using Pillow and piexif to extract and edit EXIF data for bunch of jpegs that I have. I used piexif to extract and read the EXIF data information like ImageDescription, and noticed lots of fields have random letters in front; when I first pulled ImageDescription, it read b'Olympus.....' I edited the tag and when I output it now gave me, as a test, just test (no b or apostrophe's, and samples from piexif showed u's)
Anyone know the purpose of the apostrophe and/or the random letters?
|
2020/08/27
|
[
"https://Stackoverflow.com/questions/63619836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14177622/"
] |
You can use [`re.sub`](https://docs.python.org/3/library/re.html#re.sub) with capture groups for that:
```py
import re
txt = "this is an *italicized* substring"
res = re.sub(r"\*([^*]+)\*", r"{i}\g<1>{/i}", txt)
```
will have `res` as:
```none
this is an {i}italicized{/i} substring
```
---
This pattern is pretty basic: It matches a literal `*`, then character **not** an asterisk, then another literal `*`. The main point here is that we use a capture group to catch the word part.
Then we simply substitute the full match with the word we saved (accessed by `\g<1>`) surrounded with your wanted characters.
---
[Demo here](https://regex101.com/r/j3sxRo/2)
|
you could use a for loop and say
```
myvar = "this is an *italicized* substring"
positions = []
for x in range(len(myvar)):
    if myvar[x] == "*":
        positions.append(x)
# note: this handles only the first pair of asterisks
inAsteriks = myvar[0:positions[0]] + "{i}" + myvar[positions[0]+1:positions[1]] + "{/i}" + myvar[positions[1]+1:]
```
|
38,440,179 |
I have a python app that uses several services from aws. I have one access key and secret for each service. For most of the services I use boto and don't need AWS\_ACCESS\_KEY\_ID or AWS\_SECRET\_ACCESS\_KEY in the environment. For dynamoDB I use pynamoDB and I have no idea how to set the credentials without these variables.
I want to standardize the credential in a settings file to avoid errors like clash of credentials.
Is this possible? If so, how is it done?
|
2016/07/18
|
[
"https://Stackoverflow.com/questions/38440179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2521743/"
] |
From [the PynamoDB documentation](http://pynamodb.readthedocs.io/en/latest/awsaccess.html):
>
> PynamoDB uses botocore to interact with the DynamoDB API. Thus, any
> method of configuration supported by botocore works with PynamoDB. For
> local development the use of environment variables such as
> AWS\_ACCESS\_KEY\_ID and AWS\_SECRET\_ACCESS\_KEY is probably preferable.
> You can of course use IAM users, as recommended by AWS. In addition
> EC2 roles will work as well and would be recommended when running on
> EC2.
>
>
>
Note that if all the services you are interacting with are within the same AWS account, then the preferred way to supply credentials would be to create a single IAM account with all the necessary permissions attached, or an IAM role if the code is running on EC2 or Lambda.
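For example, one way to keep a single set of credentials out of the code entirely is the shared credentials file that botocore-based libraries (and therefore PynamoDB) read by default; the values below are placeholders:
```
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = us-east-1
```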
|
I was searching for this online and came across this question; although it is old, I am sharing my solution so that it might be helpful to someone.
When defining the DynamoDB model, all we need is to add one additional line of code which contains the IAM role name. Below is a sample model.
If you change the model like the one below, we don't need the `~/.aws/credentials` file on the container.
**Note**: Make sure you attach DynamoDBRead or write policy to the IAM role, I have attached `AmazonDynamoDBFullAccess` policy for my instances IAM role.
```
from pynamodb.models import Model
from pynamodb.attributes import (
UnicodeAttribute, NumberAttribute, UnicodeSetAttribute, UTCDateTimeAttribute
)
import urllib2
class TestClass(Model):
email = UnicodeAttribute(hash_key=True)
UUID = UnicodeAttribute(range_key=True)
class Meta:
region = 'eu-west-2'
# Refer: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
instanceProfileName = urllib2.urlopen(
'http://169.254.169.254/latest/meta-data/iam/security-credentials/').read()
table_name = 'dynamodb-tablename'
```
|
64,952,109 |
Environment:
* pip version: 20.2.4
* interpreter: Python 3.9.0
* OS: macOS Big Sur
* Everything else is also up to date
I tried the following commands in the Terminal and none worked:
* pip install torch
* pip3 install torch
* pip install torch torchvision torchaudio
I get different error messages for each command, but this is the output of "pip install torch":
```
Collecting torch
Using cached torch-0.1.2.post2.tar.gz (128 kB)
Requirement already satisfied: pyyaml in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from torch) (5.3.1)
Using legacy 'setup.py install' for torch, since package 'wheel' is not installed.
Installing collected packages: torch
Running setup.py install for torch: started
Running setup.py install for torch: finished with status 'error'
DEPRECATION: The -b/--build/--build-dir/--build-directory option is deprecated. pip 20.3 will remove support for this functionality. A possible replacement is use the TMPDIR/TEMP/TMP environment variable, possibly combined with --no-clean. You can find discussion regarding this at https://github.com/pypa/pip/issues/8333.
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python3.9 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pycharm-packaging/torch/setup.py'"'"'; __file__='"'"'/private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pycharm-packaging/torch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pip-record-92u0tiue/install-record.txt --single-version-externally-managed --compile --install-headers /Library/Frameworks/Python.framework/Versions/3.9/include/python3.9/torch
cwd: /private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pycharm-packaging/torch/
Complete output (23 lines):
running install
running build_deps
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pycharm-packaging/torch/setup.py", line 225, in <module>
setup(name="torch", version="0.1.2.post2",
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pycharm-packaging/torch/setup.py", line 99, in run
self.run_command('build_deps')
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pycharm-packaging/torch/setup.py", line 51, in run
from tools.nnwrap import generate_wrappers as generate_nn_wrappers
ModuleNotFoundError: No module named 'tools.nnwrap'
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/local/bin/python3.9 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pycharm-packaging/torch/setup.py'"'"'; __file__='"'"'/private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pycharm-packaging/torch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/jz/bbl56kxj3tl_wssc22lnxqmc0000gn/T/pip-record-92u0tiue/install-record.txt --single-version-externally-managed --compile --install-headers /Library/Frameworks/Python.framework/Versions/3.9/include/python3.9/torch Check the logs for full command output.
```
|
2020/11/22
|
[
"https://Stackoverflow.com/questions/64952109",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14492001/"
] |
Okay, I got this fixed by using the conda package manager instead of pip's. Conda uses python version 3.8.5, whereas version 3.9 was used for my pip manager. I'm not quite sure if the issue of incompatibility was due to pip itself, or the fact that it is using python version 3.9. Anyways after downloading the graphical macOS installer:
```
conda install pytorch torchvision torchaudio -c pytorch
```
And that should successfully install PyTorch!
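To double-check that the install worked, a quick sanity check from Python:

```
import torch

# Print the installed version and whether a CUDA device is visible
print(torch.__version__)
print(torch.cuda.is_available())
```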
|
You only need to change the last command to `pip3 install torch torchvision torchaudio`.
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
[Caution: my answer isn't specific to python.]
I remember that *Code Complete* called this kind of parameter a "tramp parameter". Googling for "tramp parameter" doesn't return many results, however.
Some alternatives to tramp parameters might include:
* Put the data in a global variable
* Put the data in a static variable of a class (similar to global data)
* Put the data in an instance variable of a class
* Pseudo-global variable: hidden behind a singleton, or some dependency injection mechanism
Personally, I don't mind a tramp parameter as long as there's no more than one; i.e. your example is OK for me, but I wouldn't like ...
```
import a
p1 = some_command_line_argument_value
p2 = another_command_line_argument_value
p3 = a_further_command_line_argument_value
a.meth1(p1, p2, p3)
```
... instead I'd prefer ...
```
import a
p = several_command_line_argument_values
a.meth1(p)
```
... because if `meth2` decides that it wants more data than before, I'd prefer if it could extract this extra data from the original parameter which it's already being passed, so that I don't need to edit `meth1`.
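As a rough sketch of that idea (the option names here are made up), `p` could simply be a dict or an `argparse.Namespace` holding all the parsed values, and `meth2` picks out only what it needs:

```
# sketch: p bundles all the parsed command-line values in one place
p = {"verbose": True, "output_dir": "/tmp/results"}

def meth2(p):
    # meth2 reads whichever options it cares about
    if p.get("verbose"):
        print("writing results to", p["output_dir"])

def meth1(p):
    # meth1 just forwards the whole bundle, without knowing what is inside
    meth2(p)

meth1(p)
```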
|
Your example is reminiscent of the code smell [Message Chains](http://sourcemaking.com/refactoring/message-chains). You may find the corresponding refactoring, [Hide Delegate](http://sourcemaking.com/refactoring/hide-delegate), informative.
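In a nutshell (a generic sketch with made-up names, not the question's modules), Hide Delegate means the caller stops reaching through an object's internals and instead asks the object to do the reaching:

```
class Department:
    def __init__(self, manager):
        self.manager = manager

class Person:
    def __init__(self, department):
        self.department = department

    def get_manager(self):
        # the delegate (department) stays hidden behind this method
        return self.department.manager

boss = Person(Department("Alice"))
print(boss.get_manager())   # instead of boss.department.manager
```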
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
Create objects of types relevant to your program, and store the command line options relevant to each in them. Example:
```
import WidgetFrobnosticator
f = WidgetFrobnosticator()
f.allow_concave_widgets = option_allow_concave_widgets
f.respect_weasel_pins = option_respect_weasel_pins
# Now the methods of WidgetFrobnosticator have access to your command-line parameters,
# in a way that's not dependent on the input format.
import PlatypusFactory
p = PlatypusFactory()
p.allow_parthenogenesis = option_allow_parthenogenesis
p.max_population = option_max_population
# The platypus factory knows about its own options, but not those of the WidgetFrobnosticator
# or vice versa. This makes each class easier to read and implement.
```
|
If "a" is a real object and not just a set of independent helper methods, you can create an "p" member variable in "a" and set it when you instantiate an "a" object. Then your main class will not need to pass "p" into meth1 and meth2 once "a" has been instantiated.
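A rough sketch of what that could look like (class and value names are placeholders):

```
class A:
    def __init__(self, p):
        self.p = p              # store the option once, when the object is created

    def meth1(self):
        res = self.meth2()      # no need to pass p along any more
        return res

    def meth2(self):
        # do something with self.p
        return self.p

obj = A("value parsed from the command line")
obj.meth1()
```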
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
Maybe you should organize your code more into classes and objects? As I was writing this, Jimmy showed a class-instance based answer, so here is a pure class-based answer. This would be most useful if you only ever wanted a single behavior; if there is any chance at all you might want different defaults some of the time, you should use ordinary object-oriented programming in Python, i.e. pass around class instances with the property p set in the instance, not the class.
```
class Aclass(object):
p = None
@classmethod
def init_p(cls, value):
cls.p = value
@classmethod
def meth1(cls):
# some code
res = cls.meth2()
# some more code w/ res
@classmethod
def meth2(cls):
# do something with p
pass
from a import Aclass as ac
ac.init_p(some_command_line_argument_value)
ac.meth1()
ac.meth2()
```
|
If "a" is a real object and not just a set of independent helper methods, you can create an "p" member variable in "a" and set it when you instantiate an "a" object. Then your main class will not need to pass "p" into meth1 and meth2 once "a" has been instantiated.
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
Maybe you should organize your code more into classes and objects? As I was writing this, Jimmy showed a class-instance based answer, so here is a pure class-based answer. This would be most useful if you only ever wanted a single behavior; if there is any chance at all you might want different defaults some of the time, you should use ordinary object-oriented programming in Python, i.e. pass around class instances with the property p set in the instance, not the class.
```
class Aclass(object):
p = None
@classmethod
def init_p(cls, value):
cls.p = value
@classmethod
def meth1(cls):
# some code
res = cls.meth2()
# some more code w/ res
@classmethod
def meth2(cls):
# do something with p
pass
from a import Aclass as ac
ac.init_p(some_command_line_argument_value)
ac.meth1()
ac.meth2()
```
|
With objects, parameter lists should normally be very small, since most appropriate information is a property of the object itself. The standard way to handle this is to configure the object properties and then call the appropriate methods of that object. In this case set `p` as an attribute of `a`. Your `meth2` should also complain if `p` is not set.
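For instance, a minimal sketch (keeping the names from the question) where `p` lives on the module `a` and `meth2` refuses to run until it has been configured:

```
# a.py (sketch)
p = None  # main.py sets this once: a.p = some_command_line_argument_value

def meth1():
    res = meth2()
    # some more code w/ res
    return res

def meth2():
    if p is None:
        raise ValueError("a.p has not been configured")
    # do something with p
    return p
```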
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
Maybe you should organize your code more into classes and objects? As I was writing this, Jimmy showed a class-instance based answer, so here is a pure class-based answer. This would be most useful if you only ever wanted a single behavior; if there is any chance at all you might want different defaults some of the time, you should use ordinary object-oriented programming in Python, i.e. pass around class instances with the property p set in the instance, not the class.
```
class Aclass(object):
p = None
@classmethod
def init_p(cls, value):
cls.p = value
@classmethod
def meth1(cls):
# some code
res = cls.meth2()
# some more code w/ res
@classmethod
def meth2(cls):
# do something with p
pass
from a import Aclass as ac
ac.init_p(some_command_line_argument_value)
ac.meth1()
ac.meth2()
```
|
Your example is reminiscent of the code smell [Message Chains](http://sourcemaking.com/refactoring/message-chains). You may find the corresponding refactoring, [Hide Delegate](http://sourcemaking.com/refactoring/hide-delegate), informative.
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
Create objects of types relevant to your program, and store the command line options relevant to each in them. Example:
```
import WidgetFrobnosticator
f = WidgetFrobnosticator()
f.allow_concave_widgets = option_allow_concave_widgets
f.respect_weasel_pins = option_respect_weasel_pins
# Now the methods of WidgetFrobnosticator have access to your command-line parameters,
# in a way that's not dependent on the input format.
import PlatypusFactory
p = PlatypusFactory()
p.allow_parthenogenesis = option_allow_parthenogenesis
p.max_population = option_max_population
# The platypus factory knows about its own options, but not those of the WidgetFrobnosticator
# or vice versa. This makes each class easier to read and implement.
```
|
Your example is reminiscent of the code smell [Message Chains](http://sourcemaking.com/refactoring/message-chains). You may find the corresponding refactoring, [Hide Delegate](http://sourcemaking.com/refactoring/hide-delegate), informative.
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
Maybe you should organize your code more into classes and objects? As I was writing this, Jimmy showed a class-instance based answer, so here is a pure class-based answer. This would be most useful if you only ever wanted a single behavior; if there is any chance at all you might want different defaults some of the time, you should use ordinary object-oriented programming in Python, i.e. pass around class instances with the property p set in the instance, not the class.
```
class Aclass(object):
p = None
@classmethod
def init_p(cls, value):
cls.p = value
@classmethod
def meth1(cls):
# some code
res = cls.meth2()
# some more code w/ res
@classmethod
def meth2(cls):
# do something with p
pass
from a import Aclass as ac
ac.init_p(some_command_line_argument_value)
ac.meth1()
ac.meth2()
```
|
[Caution: my answer isn't specific to python.]
I remember that *Code Complete* called this kind of parameter a "tramp parameter". Googling for "tramp parameter" doesn't return many results, however.
Some alternatives to tramp parameters might include:
* Put the data in a global variable
* Put the data in a static variable of a class (similar to global data)
* Put the data in an instance variable of a class
* Pseudo-global variable: hidden behind a singleton, or some dependency injection mechanism
Personally, I don't mind a tramp parameter as long as there's no more than one; i.e. your example is OK for me, but I wouldn't like ...
```
import a
p1 = some_command_line_argument_value
p2 = another_command_line_argument_value
p3 = a_further_command_line_argument_value
a.meth1(p1, p2, p3)
```
... instead I'd prefer ...
```
import a
p = several_command_line_argument_values
a.meth1(p)
```
... because if `meth2` decides that it wants more data than before, I'd prefer if it could extract this extra data from the original parameter which it's already being passed, so that I don't need to edit `meth1`.
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
[Caution: my answer isn't specific to python.]
I remember that *Code Complete* called this kind of parameter a "tramp parameter". Googling for "tramp parameter" doesn't return many results, however.
Some alternatives to tramp parameters might include:
* Put the data in a global variable
* Put the data in a static variable of a class (similar to global data)
* Put the data in an instance variable of a class
* Pseudo-global variable: hidden behind a singleton, or some dependency injection mechanism
Personally, I don't mind a tramp parameter as long as there's no more than one; i.e. your example is OK for me, but I wouldn't like ...
```
import a
p1 = some_command_line_argument_value
p2 = another_command_line_argument_value
p3 = a_further_command_line_argument_value
a.meth1(p1, p2, p3)
```
... instead I'd prefer ...
```
import a
p = several_command_line_argument_values
a.meth1(p)
```
... because if `meth2` decides that it wants more data than before, I'd prefer if it could extract this extra data from the original parameter which it's already being passed, so that I don't need to edit `meth1`.
|
With objects, parameter lists should normally be very small, since most appropriate information is a property of the object itself. The standard way to handle this is to configure the object properties and then call the appropriate methods of that object. In this case set `p` as an attribute of `a`. Your `meth2` should also complain if `p` is not set.
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
If "a" is a real object and not just a set of independent helper methods, you can create an "p" member variable in "a" and set it when you instantiate an "a" object. Then your main class will not need to pass "p" into meth1 and meth2 once "a" has been instantiated.
|
With objects, parameter lists should normally be very small, since most appropriate information is a property of the object itself. The standard way to handle this is to configure the object properties and then call the appropriate methods of that object. In this case set `p` as an attribute of `a`. Your `meth2` should also complain if `p` is not set.
|
1,580,792 |
I am developing a medium-size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
```
import a
p = some_command_line_argument_value
a.meth1(p)
```
a.py:
```
meth1(p):
# some code
res = meth2(p)
# some more code w/ res
meth2(p):
# do something with p
```
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
|
2009/10/16
|
[
"https://Stackoverflow.com/questions/1580792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1515117/"
] |
If "a" is a real object and not just a set of independent helper methods, you can create an "p" member variable in "a" and set it when you instantiate an "a" object. Then your main class will not need to pass "p" into meth1 and meth2 once "a" has been instantiated.
|
Your example is reminiscent of the code smell [Message Chains](http://sourcemaking.com/refactoring/message-chains). You may find the corresponding refactoring, [Hide Delegate](http://sourcemaking.com/refactoring/hide-delegate), informative.
|
46,554,705 |
I'm new to Docker, AWS Lambda and Zappa, but I'm trying to configure a project with very little success.
I connect to Docker (`docker-compose run web bash`), activate the environment, configure the AWS credentials and run `zappa init` and `zappa deploy`. However, after deployment I get an error (executing `zappa tail`):
```
ModuleNotFoundError: No module named 'project'
```
I believe that's because my dir structure is not the standard one:
```
Dockerfile
zappa_settings.json
requirements.txt
project\
- manage.py
- root\
- settings.py
- wsgi.py
- ...
```
So, my django project is inside the `project` folder, as I don't like having all the files thrown there and mixed with other configuration files.
My zappa config file looks like this:
```
{
"dev": {
"aws_region": "us-east-2",
"django_settings": "project.root.settings",
"profile_name": "default",
"project_name": "task",
"runtime": "python3.6",
"s3_bucket": "bucket-name"
}
}
```
And Dockerfile:
```
FROM lambci/lambda:build-python3.6
# Copy in your requirements file
ADD requirements.txt /requirements.txt
# Copy your application code to the container
RUN mkdir /code/
WORKDIR /code/
ADD . /code/
ENV PS1 'zappa@$(pwd | sed "s@^/var/task/\?@@")\$ '
ADD zappa_settings.json /var/task/zappa_settings.json
WORKDIR /var/task
RUN virtualenv /var/venv && \
source /var/venv/bin/activate && \
pip install -U pip zappa && \
pip install -r /requirements.txt && \
deactivate
CMD ["zappa"]
```
Can you tell me how to set it up correctly? Thanks!
**UPDATE:**
Latest error after moving to subfolder:
[](https://i.stack.imgur.com/PdxdY.png)
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554705",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/72350/"
] |
You can keep your file structure with zappa and it will work perfectly fine.
Given the file structure
```
Dockerfile
zappa_settings.json
requirements.txt
project\
- manage.py
- root\
- settings.py
- wsgi.py
- ...
```
You can simplify your `Dockerfile` to be
```
FROM lambci/lambda:build-python3.6
ADD . /var/task
RUN python -m venv /var/task/dockervenv && \
source /var/task/dockervenv/bin/activate && \
pip install -U pip zappa && \
pip install -r /var/task/requirements.txt && \
deactivate
WORKDIR /var/task
ENV PS1 'zappa@$(pwd | sed "s@^/var/task/\?@@")\$ '
CMD ["bash"]
```
And it seems from your update that you have gotten similar results because your Django application seems to reach the Django code. Presumably you have also [edited your ALLOWED\_HOSTS to accept incoming connections](https://edgarroman.github.io/zappa-django-guide/walk_core/#deploy-your-project-using-zappa)
Now the error `ModuleNotFoundError: No module named 'root'` appears because your Django settings file does not expect to live inside the subdirectory. You must update `settings.py` to set the variable
```
ROOT_URLCONF = 'project.root.urls'
```
This should get you up and running.
**Side Effects of this docker approach**
As an aside, the way you are setting up your docker container has some implications on coding workflow. Your docker image will contain a copy of your code at the time it was built. So any edits on the code in a running docker container will be lost when the container ends unless exported somehow (e.g. git). While this could be perfectly fine for stable code in a CI/CD pipeline, I would not recommend it for active development.
If you'd like to use it for active development, then when invoking docker, have your project directly mapped as a volume when the container is invoked. For example:
`docker run -ti -v $(pwd):/var/task --rm <name of your image>`
|
Just move zappa_settings.json and the .json generated by zappa deploy to your subfolder.
Then update zappa_settings.json:
"django_settings": "project.root.settings" --> "django_settings": "root.settings"
Then cd into the project folder and run `zappa update dev`.
Not the best solution, but it worked for me.
|
27,401,779 |
I am writing a multi-tenant application with python-django.
I want to set the database connection based on each request. I thought I could write a middleware where we set the database to be used for that particular request.
```
import re
from django.db import connections
class SetTenantDatabase(object):
def process_request(self, request):
pattern = re.compile("\\b(http://|https://|www.|.com|8000|:|//)\\W\\d+", re.I)
words = request.get_host()
db_name = [pattern.sub("", words)][0].split('.')[0]
connections.databases['new-alias'] = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'store1',
'USER': 'xxx',
'PASSWORD': 'xxx',
'HOST': '127.0.0.1',
}
}
conn = connections['new-alias']
return None
```
But this is not working. How should I do this? Is the approach wrong, or is the solution feasible? And lastly, how?
|
2014/12/10
|
[
"https://Stackoverflow.com/questions/27401779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4218126/"
] |
This is the answer; hope it helps someone in the future:
```
import re
import threading
request_cfg = threading.local()
class RouterMiddleware(object):
def process_request( self, request):
pattern = re.compile("\\b(http://|https://|www.|.com|8000|:|//)\\W\\d+", re.I)
words = request.get_host()
db_name = [pattern.sub("", words)][0].split('.')[0]
request_cfg.cfg = db_name
return None
def process_response( self, request, response ):
if hasattr( request_cfg, 'cfg' ):
del request_cfg.cfg
return response
class DatabaseRouter (object):
def _default_db( self ):
if hasattr( request_cfg, 'cfg' ):
return request_cfg.cfg
else:
return 'default'
def db_for_read( self, model, **hints ):
return self._default_db()
def db_for_write( self, model, **hints ):
return self._default_db()
```
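For completeness, the middleware and the router still have to be registered in settings.py; the module path below is a placeholder for wherever you put these classes:

```
MIDDLEWARE_CLASSES = (
    # ... the usual Django middleware entries ...
    'myproject.db_router.RouterMiddleware',
)
DATABASE_ROUTERS = ['myproject.db_router.DatabaseRouter']
```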
Thanks
|
Maybe you can use:
<https://docs.djangoproject.com/en/dev/topics/db/multi-db/#manually-selecting-a-database-for-a-queryset>
```
Entity.objects.using('context1').all()
Entity.objects.using('context2').all()
```
To select/use a database depending on the request. You can define multiple DBs in the configurartion:
```
DATABASES = {
'context1': {
'NAME': 'context1',
'ENGINE': 'db.engine.to.use',
'USER': 'xxx',
'PASSWORD': 'xxx'
},
'context2': {
'NAME': 'context2',
'ENGINE': 'db.engine.to.use',
'USER': 'xxx',
'PASSWORD': 'xxx'
}
}
```
|
17,225,638 |
I have this code:
```
def save_to_gcs(self, img, img_obj):
'''
Image data, Image metadata object -> Blob Key
Given an image and image metadata, stores it in a GCS bucket
'''
bucket = '/foo'
filename = bucket + '/' + str(img_obj['filename'])
self.tmp_filenames_to_clean_up = []
logging.info('Creating file %s\n' % img_obj['filename'])
write_retry_params = gcs.RetryParams(backoff_factor=1.1)
gcs_file = gcs.open(filename,
'w',
content_type=img_obj['mimetype'],
retry_params=write_retry_params)
gcs_file.write(img)
gcs_file.close()
self.tmp_filenames_to_clean_up.append(filename)
return blobstore.create_gs_key('/gs/' + filename)
```
But it fails with this error:
```
Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
Traceback (most recent call last):
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1536, in __call__
rv = self.handle_exception(request, response, e)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1530, in __call__
rv = self.router.dispatch(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 409, in post
blob_key = self.save_to_gcs(img, img_obj) # Save the image to a GCS bucket. returns a blob_key
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 448, in save_to_gcs
retry_params=write_retry_params)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/cloudstorage_api.py", line 69, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/storage_api.py", line 527, in __init__
errors.check_status(status, [201], headers)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/errors.py", line 99, in check_status
raise ForbiddenError(msg)
ForbiddenError: Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
```
Any help with deciphering that error and coming up with a solution would be much appreciated.
Thanks
|
2013/06/20
|
[
"https://Stackoverflow.com/questions/17225638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/98191/"
] |
Same thing happened to me and it baffled me. I got it working by following the steps on [this page](https://developers.google.com/appengine/docs/python/googlestorage/) under the Prerequisites section. A couple notes though:
* For number 2, make sure you go to the [APIs Console](https://code.google.com/apis/console) and turn on GCS under Services
* For number 5, go to the [Cloud Console](https://cloud.google.com/console), select your project, click the Settings wrench and click Teams. Add your gserviceaccount.com thing here.
* Also for number 5, I think you have to edit the ACL files with gsutil. Follow the alternate instructions provided.
That should work for you since it did for me.
|
The documentation is confusing for granting access to your app engine app (which is most likely your problem). Here is what worked for me in the latest [Google Cloud Console](https://cloud.google.com/console):
1. In the Google Cloud Console, click your project, then "APIs & auth". Turn on "Google Cloud Storage" and "Google Cloud Storage JSON API".
2. Click back to the "Overview" screen, and click "Cloud Storage" in the left menu.
3. Click the check box next to your bucket, and click the "Bucket Permissions" button.
4. Add a new "User" permission and specify your app engine service account name, in the format of application-id@appspot.gserviceaccount.com. This is found in the Application Settings of the AppEngine Console. Better instructions to find this account name are [here](https://developers.google.com/appengine/docs/python/googlestorage/#Give_permissions_to_your_bucket_or_objects).
5. Save your changes.
|
17,225,638 |
I have this code:
```
def save_to_gcs(self, img, img_obj):
'''
Image data, Image metadata object -> Blob Key
Given an image and image metadata, stores it in a GCS bucket
'''
bucket = '/foo'
filename = bucket + '/' + str(img_obj['filename'])
self.tmp_filenames_to_clean_up = []
logging.info('Creating file %s\n' % img_obj['filename'])
write_retry_params = gcs.RetryParams(backoff_factor=1.1)
gcs_file = gcs.open(filename,
'w',
content_type=img_obj['mimetype'],
retry_params=write_retry_params)
gcs_file.write(img)
gcs_file.close()
self.tmp_filenames_to_clean_up.append(filename)
return blobstore.create_gs_key('/gs/' + filename)
```
But it fails with this error:
```
Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
Traceback (most recent call last):
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1536, in __call__
rv = self.handle_exception(request, response, e)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1530, in __call__
rv = self.router.dispatch(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 409, in post
blob_key = self.save_to_gcs(img, img_obj) # Save the image to a GCS bucket. returns a blob_key
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 448, in save_to_gcs
retry_params=write_retry_params)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/cloudstorage_api.py", line 69, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/storage_api.py", line 527, in __init__
errors.check_status(status, [201], headers)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/errors.py", line 99, in check_status
raise ForbiddenError(msg)
ForbiddenError: Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
```
Any help with deciphering that error and coming up with a solution would be much appreciated.
Thanks
|
2013/06/20
|
[
"https://Stackoverflow.com/questions/17225638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/98191/"
] |
Same thing happened to me and it baffled me. I got it working by following the steps on [this page](https://developers.google.com/appengine/docs/python/googlestorage/) under the Prerequisites section. A couple notes though:
* For number 2, make sure you go to the [APIs Console](https://code.google.com/apis/console) and turn on GCS under Services
* For number 5, go to the [Cloud Console](https://cloud.google.com/console), select your project, click the Settings wrench and click Teams. Add your gserviceaccount.com thing here.
* Also for number 5, I think you have to edit the ACL files with gsutil. Follow the alternate instructions provided.
That should work for you since it did for me.
|
... besides the above, also go to "APIs" (under "APIs & auth" for your project in the Cloud Console) and make sure "Google Cloud Storage" is turned on.
Also, under "Billing", make sure that is turned on.
|
17,225,638 |
I have this code:
```
def save_to_gcs(self, img, img_obj):
'''
Image data, Image metadata object -> Blob Key
Given an image and image metadata, stores it in a GCS bucket
'''
bucket = '/foo'
filename = bucket + '/' + str(img_obj['filename'])
self.tmp_filenames_to_clean_up = []
logging.info('Creating file %s\n' % img_obj['filename'])
write_retry_params = gcs.RetryParams(backoff_factor=1.1)
gcs_file = gcs.open(filename,
'w',
content_type=img_obj['mimetype'],
retry_params=write_retry_params)
gcs_file.write(img)
gcs_file.close()
self.tmp_filenames_to_clean_up.append(filename)
return blobstore.create_gs_key('/gs/' + filename)
```
But it fails with this error:
```
Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
Traceback (most recent call last):
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1536, in __call__
rv = self.handle_exception(request, response, e)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1530, in __call__
rv = self.router.dispatch(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 409, in post
blob_key = self.save_to_gcs(img, img_obj) # Save the image to a GCS bucket. returns a blob_key
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 448, in save_to_gcs
retry_params=write_retry_params)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/cloudstorage_api.py", line 69, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/storage_api.py", line 527, in __init__
errors.check_status(status, [201], headers)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/errors.py", line 99, in check_status
raise ForbiddenError(msg)
ForbiddenError: Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
```
Any help with deciphering that error and coming up with a solution would be much appreciated.
Thanks
|
2013/06/20
|
[
"https://Stackoverflow.com/questions/17225638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/98191/"
] |
Same thing happened to me and it baffled me. I got it working by following the steps on [this page](https://developers.google.com/appengine/docs/python/googlestorage/) under the Prerequisites section. A couple notes though:
* For number 2, make sure you go to the [APIs Console](https://code.google.com/apis/console) and turn on GCS under Services
* For number 5, go to the [Cloud Console](https://cloud.google.com/console), select your project, click the Settings wrench and click Teams. Add your gserviceaccount.com thing here.
* Also for number 5, I think you have to edit the ACL files with gsutil. Follow the alternate instructions provided.
That should work for you since it did for me.
|
This did not work for me.
Google says we have to follow the method described below.
Give permissions to your bucket or objects.
To enable your app to create new objects in a bucket, you need to do the following:
Log into the App Engine Admin Console.
Click on the application you want to authorize for your Cloud Storage bucket.
Click on Application Settings under the Administration section on the left-hand side.
Copy the value under Service Account Name. This is the service account name of your application, in the format application-id@appspot.gserviceaccount.com. If you are using an App Engine Premier Account, the service account name for your application is in the format application-id.example.com@appspot.gserviceaccount.com.
Grant access permissions using one of the following methods:
The easiest way to grant app access to a bucket is to use the Google Cloud Console to add the service account name of the app as a team member to the project that contains the bucket. You can do this under Permissions in the left sidebar of the Google Cloud Console. The app should have edit permissions if it needs to write to the bucket. For information about permissions in Cloud Storage, see Scopes and Permissions. Add more apps to the project team if desired.
Note: In some circumstances, you might not be able to add the service account as a team member. If you cannot add the service account, use the alternative method, bucket ACLs, as described next.
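If you do have to fall back to bucket ACLs, it is roughly this with gsutil (the application id and bucket name are placeholders for your own):

```
gsutil acl ch -u application-id@appspot.gserviceaccount.com:WRITE gs://your-bucket
```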
|
17,225,638 |
I have this code:
```
def save_to_gcs(self, img, img_obj):
'''
Image data, Image metadata object -> Blob Key
Given an image and image metadata, stores it in a GCS bucket
'''
bucket = '/foo'
filename = bucket + '/' + str(img_obj['filename'])
self.tmp_filenames_to_clean_up = []
logging.info('Creating file %s\n' % img_obj['filename'])
write_retry_params = gcs.RetryParams(backoff_factor=1.1)
gcs_file = gcs.open(filename,
'w',
content_type=img_obj['mimetype'],
retry_params=write_retry_params)
gcs_file.write(img)
gcs_file.close()
self.tmp_filenames_to_clean_up.append(filename)
return blobstore.create_gs_key('/gs/' + filename)
```
But it fails with this error:
```
Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
Traceback (most recent call last):
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1536, in __call__
rv = self.handle_exception(request, response, e)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1530, in __call__
rv = self.router.dispatch(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 409, in post
blob_key = self.save_to_gcs(img, img_obj) # Save the image to a GCS bucket. returns a blob_key
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 448, in save_to_gcs
retry_params=write_retry_params)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/cloudstorage_api.py", line 69, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/storage_api.py", line 527, in __init__
errors.check_status(status, [201], headers)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/errors.py", line 99, in check_status
raise ForbiddenError(msg)
ForbiddenError: Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
```
Any help with deciphering that error and coming up with a solution would be much appreciated.
Thanks
|
2013/06/20
|
[
"https://Stackoverflow.com/questions/17225638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/98191/"
] |
The documentation is confusing for granting access to your app engine app (which is most likely your problem). Here is what worked for me in the latest [Google Cloud Console](https://cloud.google.com/console):
1. In the Google Cloud Console, click your project, then "APIs & auth". Turn on "Google Cloud Storage" and "Google Cloud Storage JSON API".
2. Click back to the "Overview" screen, and click "Cloud Storage" in the left menu.
3. Click the check box next to your bucket, and click the "Bucket Permissions" button.
4. Add a new "User" permission and specify your app engine service account name, in the format of application-id@appspot.gserviceaccount.com. This is found in the Application Settings of the AppEngine Console. Better instructions to find this account name are [here](https://developers.google.com/appengine/docs/python/googlestorage/#Give_permissions_to_your_bucket_or_objects).
5. Save your changes.
|
... besides the above, also go to "APIs" (under "APIs & auth" for your project in the Cloud Console) and make sure "Google Cloud Storage" is turned on.
Also, under "Billing", make sure that is turned on.
|
17,225,638 |
I have this code:
```
def save_to_gcs(self, img, img_obj):
'''
Image data, Image metadata object -> Blob Key
Given an image and image metadata, stores it in a GCS bucket
'''
bucket = '/foo'
filename = bucket + '/' + str(img_obj['filename'])
self.tmp_filenames_to_clean_up = []
logging.info('Creating file %s\n' % img_obj['filename'])
write_retry_params = gcs.RetryParams(backoff_factor=1.1)
gcs_file = gcs.open(filename,
'w',
content_type=img_obj['mimetype'],
retry_params=write_retry_params)
gcs_file.write(img)
gcs_file.close()
self.tmp_filenames_to_clean_up.append(filename)
return blobstore.create_gs_key('/gs/' + filename)
```
But it fails with this error:
```
Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
Traceback (most recent call last):
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1536, in __call__
rv = self.handle_exception(request, response, e)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1530, in __call__
rv = self.router.dispatch(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 409, in post
blob_key = self.save_to_gcs(img, img_obj) # Save the image to a GCS bucket. returns a blob_key
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 448, in save_to_gcs
retry_params=write_retry_params)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/cloudstorage_api.py", line 69, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/storage_api.py", line 527, in __init__
errors.check_status(status, [201], headers)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/errors.py", line 99, in check_status
raise ForbiddenError(msg)
ForbiddenError: Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
```
Any help with deciphering that error and coming up with a solution would be much appreciated.
Thanks
|
2013/06/20
|
[
"https://Stackoverflow.com/questions/17225638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/98191/"
] |
The documentation is confusing for granting access to your app engine app (which is most likely your problem). Here is what worked for me in the latest [Google Cloud Console](https://cloud.google.com/console):
1. In the Google Cloud Console, click your project, then "APIs & auth". Turn on "Google Cloud Storage" and "Google Cloud Storage JSON API".
2. Click back to the "Overview" screen, and click "Cloud Storage" in the left menu.
3. Click the check box next to your bucket, and click the "Bucket Permissions" button.
4. Add a new "User" permission and specify your app engine service account name, in the format of application-id@appspot.gserviceaccount.com. This is found in the Application Settings of the AppEngine Console. Better instructions to find this account name are [here](https://developers.google.com/appengine/docs/python/googlestorage/#Give_permissions_to_your_bucket_or_objects).
5. Save your changes.
|
This did not work for me.
Google says we have to follow the method described below.
Give permissions to your bucket or objects.
To enable your app to create new objects in a bucket, you need to do the following:
Log into the App Engine Admin Console.
Click on the application you want to authorize for your Cloud Storage bucket.
Click on Application Settings under the Administration section on the left-hand side.
Copy the value under Service Account Name. This is the service account name of your application, in the format application-id@appspot.gserviceaccount.com. If you are using an App Engine Premier Account, the service account name for your application is in the format application-id.example.com@appspot.gserviceaccount.com.
Grant access permissions using one of the following methods:
The easiest way to grant app access to a bucket is to use the Google Cloud Console to add the service account name of the app as a team member to the project that contains the bucket. You can do this under Permissions in the left sidebar of the Google Cloud Console. The app should have edit permissions if it needs to write to the bucket. For information about permissions in Cloud Storage, see Scopes and Permissions. Add more apps to the project team if desired.
Note: In some circumstances, you might not be able to add the service account as a team member. If you cannot add the service account, use the alternative method, bucket ACLs, as described next.
|
50,615,577 |
I have a pretty neat mess of batch/python scripts that install a program called MATRIS, followed by about 15 exe updates.
Around 11 of these updates open a window telling me that the update was successful.
---
Now it would be really fun to run a batch or powershell script which closes all of these windows for me.
The last thing I tried was `Get-Process | Where-Object {$_.Path -like "MatrisInstaller.APCIPLUS"} | Stop-Process -WhatIf`
I wasn't sure if it was the name as read in task manager, or like the title of the window, but I tried both.
[](https://i.stack.imgur.com/NCqmY.png)
Please note that a couple of these are (32 bit) - I'm not sure if that would impact the script.
I was able to run `tasklist` followed by `kill {PID}` but PIDs change: I'm not sure how to script it.
Please reply if you need any clarification, I've historically been poor at wording my questions.
|
2018/05/31
|
[
"https://Stackoverflow.com/questions/50615577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
In your example, `Path` is pointing to the executable file on disk, so while possible to use (if it is consistent), it won't match the name you find in the processes tab of Task Manager. Typically, people will use the name as shown on the Details tab of Task manager. For example, with Outlook on my system, these three possibilities are:
```
Path: C:\Program Files\Microsoft Office\Office16\OUTLOOK.EXE
Processes tab: Microsoft Outlook
Details tab: outlook.exe
```
So, you need a command like this:
```
Get-Process | Where Name -eq 'Outlook' | Stop-Process
```
or, better:
```
Get-Process -Name 'Outlook' | Stop-Process
```
Note that PowerShell expects you to remove the '.exe' you see in Task manager.
**EDIT: Additional technique**
If you know the names of the processes, then you can simplify your script by doing something like this:
```
$processList = "Process1","Process2","Process3" # Add all process names to the list
$processList |
ForEach-Object {
Get-Process -Name $_ | Stop-Process
}
```
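Since the question mentions the installer is driven by batch/Python scripts anyway, a rough equivalent from the Python side could use `psutil`. This is just a sketch; the process names are placeholders -- use whatever the Details tab of Task Manager shows for your updaters:
```
import psutil

# placeholder names -- the updater processes whose windows you want to close
TARGET_NAMES = {"MatrisInstaller.APCIPLUS.exe", "SomeOtherUpdater.exe"}

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] in TARGET_NAMES:
        proc.terminate()  # ask the process to exit; use proc.kill() to force it
```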
|
You were almost there, just need to change "Path" to "ProcessName" like so:
```
Get-Process | Where-Object {$_.ProcessName -like "MatrisInstaller.APCIPLUS"} | Stop-Process -WhatIf
```
|
47,393,026 |
>
>
> ```
> SyntaxError: Non-UTF-8 code starting with '\x90' in file
> C:\Python36\python.exe on line 1, but no encoding declared
>
> ```
>
>
I don't understand why it shows me that when I want to open the interpreter. Can someone help me please?
[](https://i.stack.imgur.com/KeL17.png)
|
2017/11/20
|
[
"https://Stackoverflow.com/questions/47393026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8831077/"
] |
OK, it was a wrong value for PYTHONSTARTUP that was causing the problem. Thank you all!
|
Python 3 assumes scripts are saved with UTF-8 encoding. The script being executed (from PYTHONSTARTUP?) is not UTF-8-encoded. If you know the encoding, you can add it to the top of the script with:
```
#coding:cp1252 (or whatever the encoding is).
```
Or just re-save the script in UTF-8.
See also [PEP 0263](https://www.python.org/dev/peps/pep-0263/).
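If you don't know which encoding the startup script was saved in, one way to make an educated guess is the third-party `chardet` package (the file name below is just a placeholder):
```
import chardet  # third-party; pip install chardet

# read the raw bytes of the script whose encoding you want to guess
with open("startup_script.py", "rb") as fh:
    raw = fh.read()

print(chardet.detect(raw))  # e.g. {'encoding': 'Windows-1252', 'confidence': 0.73, 'language': ''}
```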
|
47,393,026 |
>
>
> ```
> SyntaxError: Non-UTF-8 code starting with '\x90' in file
> C:\Python36\python.exe on line 1, but no encoding declared
>
> ```
>
>
I don't understand why it shows me that when I want to open the interpreter. Can someone help me please?
[](https://i.stack.imgur.com/KeL17.png)
|
2017/11/20
|
[
"https://Stackoverflow.com/questions/47393026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8831077/"
] |
In my case I got this error when running a PyInstaller-built exe from cmd, and it went away when I launched the file by clicking on it directly, NOT from cmd.
So, to work around this problem, don't run the file from the terminal/cmd; try launching it another way.
|
OK, it was a wrong value for PYTHONSTARTUP that was causing the problem. Thank you all!
|
47,393,026 |
>
>
> ```
> SyntaxError: Non-UTF-8 code starting with '\x90' in file
> C:\Python36\python.exe on line 1, but no encoding declared
>
> ```
>
>
I don't understand why it shows me that when I want to open the interpreter. Can someone help me please?
[](https://i.stack.imgur.com/KeL17.png)
|
2017/11/20
|
[
"https://Stackoverflow.com/questions/47393026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8831077/"
] |
In my case I got this error when running a PyInstaller-built exe from cmd, and it went away when I launched the file by clicking on it directly, NOT from cmd.
So, to work around this problem, don't run the file from the terminal/cmd; try launching it another way.
|
Python 3 assumes scripts are saved with UTF-8 encoding. The script being executed (from PYTHONSTARTUP?) is not UTF-8-encoded. If you know the encoding, you can add it to the top of the script with:
```
#coding:cp1252 (or whatever the encoding is).
```
Or just re-save the script in UTF-8.
See also [PEP 0263](https://www.python.org/dev/peps/pep-0263/).
|
5,226,945 |
I'm trying to install MySQL\_python 1.2.2 (I specifically need this version, not the current 1.2.3) on Snow Leopard, and getting this error below.
Command to install:
```
pip install -Iv http://sourceforge.net/projects/mysql-python/files/mysql-python/1.2.2/MySQL-python-1.2.2.tar.gz/download
```
And a portion of the trace:
```
copying MySQLdb/constants/CLIENT.py -> build/lib.macosx-10.6-universal-2.6/MySQLdb/constants
running build_ext
building '_mysql' extension
creating build/temp.macosx-10.6-universal-2.6
/usr/bin/gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,2,'final',0) - D__version__=1.2.2 -I/usr/local/mysql/include - I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _mysql.c -o build/temp.macosx-10.6-universal-2.6/_mysql.o -g -Os -arch x86_64 -fno-common -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL
In file included from _mysql.c:35:
/usr/local/mysql/include/my_config.h:1053:1: warning: "HAVE_WCSCOLL" redefined
In file included from /System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:8,
from pymemcompat.h:10,
from _mysql.c:29:
/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyconfig.h:803:1: warning: this is the location of the previous definition
In file included from /usr/local/mysql/include/mysql.h:47,
from _mysql.c:40:
/usr/include/sys/types.h:92: error: duplicate ‘unsigned’
/usr/include/sys/types.h:92: error: two or more data types in declaration specifiers
error: command '/usr/bin/gcc-4.2' failed with exit status 1
```
Has anyone seen this error before?
|
2011/03/08
|
[
"https://Stackoverflow.com/questions/5226945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/306374/"
] |
A quick google search yielded [this answer](http://justcramer.com/2007/11/05/mysqldb-on-leopard/):
>
> To get mysqldb working on leopard I found I had to edit \_mysql.c and edit out the lines:
>
>
>
> ```
> #ifndef uint
> #define uint unsigned int
> #endif
>
> ```
>
> I also had to edit site.cfg and mark threadsafe to False.
>
>
> After doing this, I managed to get MySQLdb to install, great! Not so
> fast, there was yet another problem:
>
>
> Referenced from: .../\_mysql.so Reason: image not found
>
>
> The solution:
>
>
>
> ```
> sudo ln -s /usr/local/mysql/lib/ /usr/local/mysql/lib/mysql
>
> ```
>
>
|
As the error says, things have been redefined. I've gotten around this in the past by commenting out the offending declarations in the source of the library being built. That solution, however, has some obvious problems...
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
To add separators, you essentially have 2 options:
1. View-based, you explicitly include separators as an 'item' in the list and define a new viewtype for those separators. Allows the list to re-use the separator views but means you need to take the separators into account when defining the data.
2. Data-based, each item actually has a separator view, but it only shows on specific items. Based on some criteria you show or hide it whilst binding the view-holder.
For the paging library only option 2 is viable since it only partially loads the data and inserting the separators becomes much more complicated. You will simply need to figure out a way to check if item `x` is a different day than item `x-1` and show/hide the date section in the view depending on the result.
|
When binding the data pass in the previous item as well
```
override fun onBindViewHolder(holder: ViewHolder, position: Int) {
val item = getItem(position)
val previousItem = if (position == 0) null else getItem(position - 1)
holder.bind(item, previousItem)
}
```
Every view then sets a header, which is only made visible if the previous item doesn't have the same header.
```
val previousHeader = previousItem?.name?.capitalize()?.first()
val header = item?.name?.capitalize()?.first()
view.cachedContactHeader.text = header
view.cachedContactHeader.isVisible = previousHeader != header
```
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
To add separators, you essentially have 2 options:
1. View-based, you explicitly include separators as an 'item' in the list and define a new viewtype for those separators. Allows the list to re-use the separator views but means you need to take the separators into account when defining the data.
2. Data-based, each item actually has a separator view, but it only shows on specific items. Based on some criteria you show or hide it whilst binding the view-holder.
For the paging library only option 2 is viable since it only partially loads the data and inserting the separators becomes much more complicated. You will simply need to figure out a way to check if item `x` is a different day than item `x-1` and show/hide the date section in the view depending on the result.
|
[Kiskae's answer](https://stackoverflow.com/a/53023782/305973) is excellent and for your case option 2 probably works well.
In my case I wanted to have one additional item that wasn't in the database, like this:
* Show all
* Item 1
* Item 2
It needed to be clickable as well. There's the usual way of overriding `getItemCount` to return +1 and offsetting positions for the other methods.
But I stumbled on another way that I haven't seen documented yet that might be useful for some cases. You might be able to incorporate additional elements into your query using `union`:
```
@Query("select '' as name, 0 as id " +
"union " +
"select name, id from user " +
"order by 1 asc")
DataSource.Factory<Integer, User> getAllDataSource();
```
That means the data source actually returns another item in the beginning, and there's no need to adjust positions. In your adapter, you can check for that item and handle it differently.
In your case the query would have to be different but I think it would be possible.
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
To add separators, you essentially have 2 options:
1. View-based, you explicitly include separators as an 'item' in the list and define a new viewtype for those separators. Allows the list to re-use the separator views but means you need to take the separators into account when defining the data.
2. Data-based, each item actually has a separator view, but it only shows on specific items. Based on some criteria you show or hide it whilst binding the view-holder.
For the paging library only option 2 is viable since it only partially loads the data and inserting the separators becomes much more complicated. You will simply need to figure out a way to check if item `x` is a different day than item `x-1` and show/hide the date section in the view depending on the result.
|
I was in the same spot as you, and I came up with this solution.
One important note though: in order to implement this, I had to change my date converter for the database from long to string, so the timestamp is stored as text.
These are my converters:
```
class DateConverter {
companion object {
@JvmStatic
val formatter = SimpleDateFormat("yyyyMMddHHmmss", Locale.ENGLISH)
@TypeConverter
@JvmStatic
fun toDate(text: String): Date = formatter.parse(text)
@TypeConverter
@JvmStatic
fun toText(date: Date): String = formatter.format(date)
}
}
```
Some starting info first: I have a list of report headers that I want to show, page through, and be able to filter.
They are represented by this object:
```
data class ReportHeaderEntity(
@ColumnInfo(name = "id") override val id: UUID
, @ColumnInfo(name = "name") override val name: String
, @ColumnInfo(name = "description") override val description: String
, @ColumnInfo(name = "created") override val date: Date)
```
I also wanted to add separators between the items in the list to show them by date
I achieved this by doing the following:
I created a new query in room like this
```
@Query(
"SELECT id, name, description,created " +
"FROM (SELECT id, name, description, created, created AS sort " +
" FROM reports " +
" WHERE :filter = '' " +
" OR name LIKE '%' || :filter || '%' " +
" OR description LIKE '%' || :filter || '%' " +
" UNION " +
" SELECT '00000000-0000-0000-0000-000000000000' as id, Substr(created, 0, 9) as name, '' as description, Substr(created, 0, 9) || '000000' AS created, Substr(created, 0, 9) || '256060' AS sort " +
" FROM reports " +
" WHERE :filter = '' " +
" OR name LIKE '%' || :filter || '%' " +
" OR description LIKE '%' || :filter || '%' " +
" GROUP BY Substr(created, 0, 9)) " +
"ORDER BY sort DESC ")
fun loadReportHeaders(filter: String = ""): DataSource.Factory<Int, ReportHeaderEntity>
```
This basically creates a separator row for all the items I have filtered through.
It also creates a dummy date for sorting (with the impossible time 25:60:60, so that the separator always appears in front of the other reports for that day).
I then combine this with my list using a union and sort everything by the dummy date.
The reason I had to change from long to string is that it is much easier to create dummy dates with strings in SQL and to separate the date part from the full date-time.
The above creates a list like this:
```
00000000-0000-0000-0000-000000000000 20190522 20190522000000
e3b8fbe5-b8ce-4353-b85d-8a1160f51bac name 16769 description 93396 20190522141926
6779fbea-f840-4859-a9a1-b34b7e6520be name 86082 description 21138 20190522141925
00000000-0000-0000-0000-000000000000 20190521 20190521000000
6efa201f-d618-4819-bae1-5a0e907ddcfb name 9702 description 84139 20190521103247
```
In my PagedListAdapter I changed it to be an implementation of `PagedListAdapter<ReportHeader, RecyclerView.ViewHolder>` (not a specific viewholder)
Added to the companion object:
```
companion object {
private val EMPTY_ID = UUID(0L,0L)
private const val LABEL = 0
private const val HEADER = 1
}
```
and overrode get view type like so:
```
override fun getItemViewType(position: Int): Int = if (getItem(position)?.id ?: EMPTY_ID == EMPTY_ID) LABEL else HEADER
```
I then created two separate view holders:
```
class ReportHeaderViewHolder(val binding: ListItemReportBinding) : RecyclerView.ViewHolder(binding.root)
class ReportLabelViewHolder(val binding: ListItemReportLabelBinding) : RecyclerView.ViewHolder(binding.root)
```
and implemented the other overridden methods like so:
```
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RecyclerView.ViewHolder {
val inflater = LayoutInflater.from(parent.context)
return when (viewType) {
HEADER -> ReportHeaderViewHolder(DataBindingUtil.inflate(inflater, R.layout.list_item_report, parent, false))
else -> ReportLabelViewHolder(DataBindingUtil.inflate(inflater, R.layout.list_item_report_label, parent, false))
}
}
override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
val reportItem = getItem(position)
when (getItemViewType(position)) {
HEADER -> {
(holder as ReportHeaderViewHolder).binding.apply {
report = reportItem
executePendingBindings()
}
}
LABEL -> {
(holder as ReportLabelViewHolder).binding.apply {
date = reportItem?.name
executePendingBindings()
}
}
}
}
```
I hope this helps and inspires people to find even better solutions
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
To add separators, you essentially have 2 options:
1. View-based, you explicitly include separators as an 'item' in the list and define a new viewtype for those separators. Allows the list to re-use the separator views but means you need to take the separators into account when defining the data.
2. Data-based, each item actually has a separator view, but it only shows on specific items. Based on some criteria you show or hide it whilst binding the view-holder.
For the paging library only option 2 is viable since it only partially loads the data and inserting the separators becomes much more complicated. You will simply need to figure out a way to check if item `x` is a different day than item `x-1` and show/hide the date section in the view depending on the result.
|
You can achieve the same result using `insertSeparators` in the **Paging 3** library.
Make sure your items are `sorted` by date.
Inside your `viewmodel`, retrieve a `Pager`, something like this:
```
private val communicationResult: Flow<PagingData<CommunicationHistoryItem>> = Pager(
PagingConfig(
pageSize = 50,
enablePlaceholders = false,
maxSize = 400,
initialLoadSize = 50
)
) {
CommunicationPagingSource(repository)
}.flow.cachedIn(viewModelScope)
```
After that, use `insertSeparators` to insert a ***header*** item:
```
val groupedCommunicationResult = communicationResult
.map { pagingData -> pagingData.map { CommunicationHistoryModel.Body(it) } }
.map {
it.insertSeparators{ after, before ->
if (before == null) {
//the end of the list
return@insertSeparators null
}
val afterDateStr = after?.createdDate
val beforeDateStr = before.createdDate
if (afterDateStr == null || beforeDateStr == null)
return@insertSeparators null
val afterDate = DateUtil.parseAsCalendar(afterDateStr)?.cleanTime()?.time ?: 0
val beforeDate = DateUtil.parseAsCalendar(beforeDateStr)?.cleanTime()?.time ?: 0
if (afterDate > beforeDate) {
CommunicationHistoryModel.Header( DateUtil.format(Date(beforeDate))) // dd.MM.yyyy
} else {
// no separator
null
}
}
}
```
`cleanTime` is required for `grouping` by `dd.MM.yyyy` ignoring time
```
fun Calendar.cleanTime(): Date {
set(Calendar.HOUR_OF_DAY, 0)
set(Calendar.MINUTE, 0)
set(Calendar.SECOND, 0)
set(Calendar.MILLISECOND, 0)
return this.time
}
```
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
When binding the data pass in the previous item as well
```
override fun onBindViewHolder(holder: ViewHolder, position: Int) {
val item = getItem(position)
val previousItem = if (position == 0) null else getItem(position - 1)
holder.bind(item, previousItem)
}
```
Every view then sets a header, which is only made visible if the previous item doesn't have the same header.
```
val previousHeader = previousItem?.name?.capitalize()?.first()
val header = item?.name?.capitalize()?.first()
view.cachedContactHeader.text = header
view.cachedContactHeader.isVisible = previousHeader != header
```
|
[Kiskae's answer](https://stackoverflow.com/a/53023782/305973) is excellent and for your case option 2 probably works well.
In my case I wanted to have one additional item that wasn't in the database, like this:
* Show all
* Item 1
* Item 2
It needed to be clickable as well. There's the usual way of overriding `getItemCount` to return +1 and offsetting positions for the other methods.
But I stumbled on another way that I haven't seen documented yet that might be useful for some cases. You might be able to incorporate additional elements into your query using `union`:
```
@Query("select '' as name, 0 as id " +
"union " +
"select name, id from user " +
"order by 1 asc")
DataSource.Factory<Integer, User> getAllDataSource();
```
That means the data source actually returns another item in the beginning, and there's no need to adjust positions. In your adapter, you can check for that item and handle it differently.
In your case the query would have to be different but I think it would be possible.
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
I was in the same spot as you, and I came up with this solution.
One important note though: in order to implement this, I had to change my date converter for the database from long to string, so the timestamp is stored as text.
These are my converters:
```
class DateConverter {
companion object {
@JvmStatic
val formatter = SimpleDateFormat("yyyyMMddHHmmss", Locale.ENGLISH)
@TypeConverter
@JvmStatic
fun toDate(text: String): Date = formatter.parse(text)
@TypeConverter
@JvmStatic
fun toText(date: Date): String = formatter.format(date)
}
}
```
Some starting info first: I have a list of report headers that I want to show, page through, and be able to filter.
They are represented by this object:
```
data class ReportHeaderEntity(
@ColumnInfo(name = "id") override val id: UUID
, @ColumnInfo(name = "name") override val name: String
, @ColumnInfo(name = "description") override val description: String
, @ColumnInfo(name = "created") override val date: Date)
```
I also wanted to add separators between the items in the list to show them by date
I achieved this by doing the following:
I created a new query in room like this
```
@Query(
"SELECT id, name, description,created " +
"FROM (SELECT id, name, description, created, created AS sort " +
" FROM reports " +
" WHERE :filter = '' " +
" OR name LIKE '%' || :filter || '%' " +
" OR description LIKE '%' || :filter || '%' " +
" UNION " +
" SELECT '00000000-0000-0000-0000-000000000000' as id, Substr(created, 0, 9) as name, '' as description, Substr(created, 0, 9) || '000000' AS created, Substr(created, 0, 9) || '256060' AS sort " +
" FROM reports " +
" WHERE :filter = '' " +
" OR name LIKE '%' || :filter || '%' " +
" OR description LIKE '%' || :filter || '%' " +
" GROUP BY Substr(created, 0, 9)) " +
"ORDER BY sort DESC ")
fun loadReportHeaders(filter: String = ""): DataSource.Factory<Int, ReportHeaderEntity>
```
This basically creates a separator row for all the items I have filtered through.
It also creates a dummy date for sorting (with the impossible time 25:60:60, so that the separator always appears in front of the other reports for that day).
I then combine this with my list using a union and sort everything by the dummy date.
The reason I had to change from long to string is that it is much easier to create dummy dates with strings in SQL and to separate the date part from the full date-time.
The above creates a list like this:
```
00000000-0000-0000-0000-000000000000 20190522 20190522000000
e3b8fbe5-b8ce-4353-b85d-8a1160f51bac name 16769 description 93396 20190522141926
6779fbea-f840-4859-a9a1-b34b7e6520be name 86082 description 21138 20190522141925
00000000-0000-0000-0000-000000000000 20190521 20190521000000
6efa201f-d618-4819-bae1-5a0e907ddcfb name 9702 description 84139 20190521103247
```
In my PagedListAdapter I changed it to be an implementation of `PagedListAdapter<ReportHeader, RecyclerView.ViewHolder>` (not a specific viewholder)
Added to the companion object:
```
companion object {
private val EMPTY_ID = UUID(0L,0L)
private const val LABEL = 0
private const val HEADER = 1
}
```
and overrode get view type like so:
```
override fun getItemViewType(position: Int): Int = if (getItem(position)?.id ?: EMPTY_ID == EMPTY_ID) LABEL else HEADER
```
I then created two separate view holders:
```
class ReportHeaderViewHolder(val binding: ListItemReportBinding) : RecyclerView.ViewHolder(binding.root)
class ReportLabelViewHolder(val binding: ListItemReportLabelBinding) : RecyclerView.ViewHolder(binding.root)
```
and implemented the other overridden methods like so:
```
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RecyclerView.ViewHolder {
val inflater = LayoutInflater.from(parent.context)
return when (viewType) {
HEADER -> ReportHeaderViewHolder(DataBindingUtil.inflate(inflater, R.layout.list_item_report, parent, false))
else -> ReportLabelViewHolder(DataBindingUtil.inflate(inflater, R.layout.list_item_report_label, parent, false))
}
}
override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
val reportItem = getItem(position)
when (getItemViewType(position)) {
HEADER -> {
(holder as ReportHeaderViewHolder).binding.apply {
report = reportItem
executePendingBindings()
}
}
LABEL -> {
(holder as ReportLabelViewHolder).binding.apply {
date = reportItem?.name
executePendingBindings()
}
}
}
}
```
I hope this helps and inspires people to find even better solutions
|
When binding the data pass in the previous item as well
```
override fun onBindViewHolder(holder: ViewHolder, position: Int) {
val item = getItem(position)
val previousItem = if (position == 0) null else getItem(position - 1)
holder.bind(item, previousItem)
}
```
Every view then sets a header, which is only made visible if the previous item doesn't have the same header.
```
val previousHeader = previousItem?.name?.capitalize()?.first()
val header = item?.name?.capitalize()?.first()
view.cachedContactHeader.text = header
view.cachedContactHeader.isVisible = previousHeader != header
```
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
You can achieve the same result using `insertSeparators` in the **Paging 3** library.
Make sure your items are `sorted` by date.
Inside your `viewmodel`, retrieve a `Pager`, something like this:
```
private val communicationResult: Flow<PagingData<CommunicationHistoryItem>> = Pager(
PagingConfig(
pageSize = 50,
enablePlaceholders = false,
maxSize = 400,
initialLoadSize = 50
)
) {
CommunicationPagingSource(repository)
}.flow.cachedIn(viewModelScope)
```
After that, use `insertSeparators` to insert a ***header*** item:
```
val groupedCommunicationResult = communicationResult
.map { pagingData -> pagingData.map { CommunicationHistoryModel.Body(it) } }
.map {
it.insertSeparators{ after, before ->
if (before == null) {
//the end of the list
return@insertSeparators null
}
val afterDateStr = after?.createdDate
val beforeDateStr = before.createdDate
if (afterDateStr == null || beforeDateStr == null)
return@insertSeparators null
val afterDate = DateUtil.parseAsCalendar(afterDateStr)?.cleanTime()?.time ?: 0
val beforeDate = DateUtil.parseAsCalendar(beforeDateStr)?.cleanTime()?.time ?: 0
if (afterDate > beforeDate) {
CommunicationHistoryModel.Header( DateUtil.format(Date(beforeDate))) // dd.MM.yyyy
} else {
// no separator
null
}
}
}
```
`cleanTime` is required for `grouping` by `dd.MM.yyyy` ignoring time
```
fun Calendar.cleanTime(): Date {
set(Calendar.HOUR_OF_DAY, 0)
set(Calendar.MINUTE, 0)
set(Calendar.SECOND, 0)
set(Calendar.MILLISECOND, 0)
return this.time
}
```
|
When binding the data pass in the previous item as well
```
override fun onBindViewHolder(holder: ViewHolder, position: Int) {
val item = getItem(position)
val previousItem = if (position == 0) null else getItem(position - 1)
holder.bind(item, previousItem)
}
```
Every view then sets a header, which is only made visible if the previous item doesn't have the same header.
```
val previousHeader = previousItem?.name?.capitalize()?.first()
val header = item?.name?.capitalize()?.first()
view.cachedContactHeader.text = header
view.cachedContactHeader.isVisible = previousHeader != header
```
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
I was in the same spot as you, and I came up with this solution.
One important note though: in order to implement this, I had to change my date converter for the database from long to string, so the timestamp is stored as text.
These are my converters:
```
class DateConverter {
companion object {
@JvmStatic
val formatter = SimpleDateFormat("yyyyMMddHHmmss", Locale.ENGLISH)
@TypeConverter
@JvmStatic
fun toDate(text: String): Date = formatter.parse(text)
@TypeConverter
@JvmStatic
fun toText(date: Date): String = formatter.format(date)
}
}
```
Some starting info first: I have a list of report headers that I want to show, page through, and be able to filter.
They are represented by this object:
```
data class ReportHeaderEntity(
@ColumnInfo(name = "id") override val id: UUID
, @ColumnInfo(name = "name") override val name: String
, @ColumnInfo(name = "description") override val description: String
, @ColumnInfo(name = "created") override val date: Date)
```
I also wanted to add separators between the items in the list to show them by date
I achieved this by doing the following:
I created a new query in room like this
```
@Query(
"SELECT id, name, description,created " +
"FROM (SELECT id, name, description, created, created AS sort " +
" FROM reports " +
" WHERE :filter = '' " +
" OR name LIKE '%' || :filter || '%' " +
" OR description LIKE '%' || :filter || '%' " +
" UNION " +
" SELECT '00000000-0000-0000-0000-000000000000' as id, Substr(created, 0, 9) as name, '' as description, Substr(created, 0, 9) || '000000' AS created, Substr(created, 0, 9) || '256060' AS sort " +
" FROM reports " +
" WHERE :filter = '' " +
" OR name LIKE '%' || :filter || '%' " +
" OR description LIKE '%' || :filter || '%' " +
" GROUP BY Substr(created, 0, 9)) " +
"ORDER BY sort DESC ")
fun loadReportHeaders(filter: String = ""): DataSource.Factory<Int, ReportHeaderEntity>
```
This basically creates a separator row for all the items I have filtered through.
It also creates a dummy date for sorting (with the impossible time 25:60:60, so that the separator always appears in front of the other reports for that day).
I then combine this with my list using a union and sort everything by the dummy date.
The reason I had to change from long to string is that it is much easier to create dummy dates with strings in SQL and to separate the date part from the full date-time.
The above creates a list like this:
```
00000000-0000-0000-0000-000000000000 20190522 20190522000000
e3b8fbe5-b8ce-4353-b85d-8a1160f51bac name 16769 description 93396 20190522141926
6779fbea-f840-4859-a9a1-b34b7e6520be name 86082 description 21138 20190522141925
00000000-0000-0000-0000-000000000000 20190521 20190521000000
6efa201f-d618-4819-bae1-5a0e907ddcfb name 9702 description 84139 20190521103247
```
In my PagedListAdapter I changed it to be an implementation of `PagedListAdapter<ReportHeader, RecyclerView.ViewHolder>` (not a specific viewholder)
Added to the companion object:
```
companion object {
private val EMPTY_ID = UUID(0L,0L)
private const val LABEL = 0
private const val HEADER = 1
}
```
and overrode get view type like so:
```
override fun getItemViewType(position: Int): Int = if (getItem(position)?.id ?: EMPTY_ID == EMPTY_ID) LABEL else HEADER
```
I then created two separate view holders:
```
class ReportHeaderViewHolder(val binding: ListItemReportBinding) : RecyclerView.ViewHolder(binding.root)
class ReportLabelViewHolder(val binding: ListItemReportLabelBinding) : RecyclerView.ViewHolder(binding.root)
```
and implemented the other overridden methods like so:
```
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RecyclerView.ViewHolder {
val inflater = LayoutInflater.from(parent.context)
return when (viewType) {
HEADER -> ReportHeaderViewHolder(DataBindingUtil.inflate(inflater, R.layout.list_item_report, parent, false))
else -> ReportLabelViewHolder(DataBindingUtil.inflate(inflater, R.layout.list_item_report_label, parent, false))
}
}
override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
val reportItem = getItem(position)
when (getItemViewType(position)) {
HEADER -> {
(holder as ReportHeaderViewHolder).binding.apply {
report = reportItem
executePendingBindings()
}
}
LABEL -> {
(holder as ReportLabelViewHolder).binding.apply {
date = reportItem?.name
executePendingBindings()
}
}
}
}
```
I hope this helps and inspires people to find even better solutions
|
[Kiskae's answer](https://stackoverflow.com/a/53023782/305973) is excellent and for your case option 2 probably works well.
In my case I wanted to have one additional item that wasn't in the database, like this:
* Show all
* Item 1
* Item 2
It needed to be clickable as well. There's the usual way of overriding `getItemCount` to return +1 and offsetting positions for the other methods.
But I stumbled on another way that I haven't seen documented yet that might be useful for some cases. You might be able to incorporate additional elements into your query using `union`:
```
@Query("select '' as name, 0 as id " +
"union " +
"select name, id from user " +
"order by 1 asc")
DataSource.Factory<Integer, User> getAllDataSource();
```
That means the data source actually returns another item in the beginning, and there's no need to adjust positions. In your adapter, you can check for that item and handle it differently.
In your case the query would have to be different but I think it would be possible.
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
You can achieve the same result using `insertSeparators` in the **Paging 3** library.
Make sure your items are `sorted` by date.
Inside your `viewmodel`, retrieve a `Pager`, something like this:
```
private val communicationResult: Flow<PagingData<CommunicationHistoryItem>> = Pager(
PagingConfig(
pageSize = 50,
enablePlaceholders = false,
maxSize = 400,
initialLoadSize = 50
)
) {
CommunicationPagingSource(repository)
}.flow.cachedIn(viewModelScope)
```
After that, use `insertSeparators` to insert a ***header*** item:
```
val groupedCommunicationResult = communicationResult
.map { pagingData -> pagingData.map { CommunicationHistoryModel.Body(it) } }
.map {
it.insertSeparators{ after, before ->
if (before == null) {
//the end of the list
return@insertSeparators null
}
val afterDateStr = after?.createdDate
val beforeDateStr = before.createdDate
if (afterDateStr == null || beforeDateStr == null)
return@insertSeparators null
val afterDate = DateUtil.parseAsCalendar(afterDateStr)?.cleanTime()?.time ?: 0
val beforeDate = DateUtil.parseAsCalendar(beforeDateStr)?.cleanTime()?.time ?: 0
if (afterDate > beforeDate) {
CommunicationHistoryModel.Header( DateUtil.format(Date(beforeDate))) // dd.MM.yyyy
} else {
// no separator
null
}
}
}
```
`cleanTime` is required for `grouping` by `dd.MM.yyyy` ignoring time
```
fun Calendar.cleanTime(): Date {
set(Calendar.HOUR_OF_DAY, 0)
set(Calendar.MINUTE, 0)
set(Calendar.SECOND, 0)
set(Calendar.MILLISECOND, 0)
return this.time
}
```
|
[Kiskae's answer](https://stackoverflow.com/a/53023782/305973) is excellent and for your case option 2 probably works well.
In my case I wanted to have one additional item that wasn't in the database, like this:
* Show all
* Item 1
* Item 2
It needed to be clickable as well. There's the usual way of overriding `getItemCount` to return +1 and offsetting positions for the other methods.
But I stumbled on another way that I haven't seen documented yet that might be useful for some cases. You might be able to incorporate additional elements into your query using `union`:
```
@Query("select '' as name, 0 as id " +
"union " +
"select name, id from user " +
"order by 1 asc")
DataSource.Factory<Integer, User> getAllDataSource();
```
That means the data source actually returns another item in the beginning, and there's no need to adjust positions. In your adapter, you can check for that item and handle it differently.
In your case the query would have to be different but I think it would be possible.
|
53,023,541 |
I created a simple Python script to change my Ubuntu wallpaper. I want this script to run whenever I log in.
* I tried adding the command `python /bin/wallpaper_changer.py` in Startup Applications, but it doesn't work.
* I also tried adding a desktop entry in `~/.config/autostart/Myscript.desktop`, but it doesn't work either.
* I also added this file path to crontab using `sudo crontab -e` with `@reboot python /bin/wallpaper_changer.py`; it doesn't work either.
* I added an entry in rc.local as `python /bin/wallpaper_changer.py`; it doesn't work either.
|
2018/10/27
|
[
"https://Stackoverflow.com/questions/53023541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9491670/"
] |
You can achieve the same result using `insertSeparators` in the **Paging 3** library.
Make sure your items are `sorted` by date.
Inside your `viewmodel`, retrieve a `Pager`, something like this:
```
private val communicationResult: Flow<PagingData<CommunicationHistoryItem>> = Pager(
PagingConfig(
pageSize = 50,
enablePlaceholders = false,
maxSize = 400,
initialLoadSize = 50
)
) {
CommunicationPagingSource(repository)
}.flow.cachedIn(viewModelScope)
```
After that, use `insertSeparators` to insert a ***header*** item:
```
val groupedCommunicationResult = communicationResult
.map { pagingData -> pagingData.map { CommunicationHistoryModel.Body(it) } }
.map {
it.insertSeparators{ after, before ->
if (before == null) {
//the end of the list
return@insertSeparators null
}
val afterDateStr = after?.createdDate
val beforeDateStr = before.createdDate
if (afterDateStr == null || beforeDateStr == null)
return@insertSeparators null
val afterDate = DateUtil.parseAsCalendar(afterDateStr)?.cleanTime()?.time ?: 0
val beforeDate = DateUtil.parseAsCalendar(beforeDateStr)?.cleanTime()?.time ?: 0
if (afterDate > beforeDate) {
CommunicationHistoryModel.Header( DateUtil.format(Date(beforeDate))) // dd.MM.yyyy
} else {
// no separator
null
}
}
}
```
`cleanTime` is required for `grouping` by `dd.MM.yyyy` ignoring time
```
fun Calendar.cleanTime(): Date {
set(Calendar.HOUR_OF_DAY, 0)
set(Calendar.MINUTE, 0)
set(Calendar.SECOND, 0)
set(Calendar.MILLISECOND, 0)
return this.time
}
```
|
I was in the same spot as you, and I came up with this solution.
One important note though: in order to implement this, I had to change my date converter for the database from long to string, so the timestamp is stored as text.
These are my converters:
```
class DateConverter {
companion object {
@JvmStatic
val formatter = SimpleDateFormat("yyyyMMddHHmmss", Locale.ENGLISH)
@TypeConverter
@JvmStatic
fun toDate(text: String): Date = formatter.parse(text)
@TypeConverter
@JvmStatic
fun toText(date: Date): String = formatter.format(date)
}
}
```
Some starting info first: I have a list of report headers that I want to show, page through, and be able to filter.
They are represented by this object:
```
data class ReportHeaderEntity(
@ColumnInfo(name = "id") override val id: UUID
, @ColumnInfo(name = "name") override val name: String
, @ColumnInfo(name = "description") override val description: String
, @ColumnInfo(name = "created") override val date: Date)
```
I also wanted to add separators between the items in the list to show them by date
I achieved this by doing the following:
I created a new query in room like this
```
@Query(
"SELECT id, name, description,created " +
"FROM (SELECT id, name, description, created, created AS sort " +
" FROM reports " +
" WHERE :filter = '' " +
" OR name LIKE '%' || :filter || '%' " +
" OR description LIKE '%' || :filter || '%' " +
" UNION " +
" SELECT '00000000-0000-0000-0000-000000000000' as id, Substr(created, 0, 9) as name, '' as description, Substr(created, 0, 9) || '000000' AS created, Substr(created, 0, 9) || '256060' AS sort " +
" FROM reports " +
" WHERE :filter = '' " +
" OR name LIKE '%' || :filter || '%' " +
" OR description LIKE '%' || :filter || '%' " +
" GROUP BY Substr(created, 0, 9)) " +
"ORDER BY sort DESC ")
fun loadReportHeaders(filter: String = ""): DataSource.Factory<Int, ReportHeaderEntity>
```
This basically creates a separator row for all the items I have filtered through.
It also creates a dummy date for sorting (with the impossible time 25:60:60, so that the separator always appears in front of the other reports for that day).
I then combine this with my list using a union and sort everything by the dummy date.
The reason I had to change from long to string is that it is much easier to create dummy dates with strings in SQL and to separate the date part from the full date-time.
The above creates a list like this:
```
00000000-0000-0000-0000-000000000000 20190522 20190522000000
e3b8fbe5-b8ce-4353-b85d-8a1160f51bac name 16769 description 93396 20190522141926
6779fbea-f840-4859-a9a1-b34b7e6520be name 86082 description 21138 20190522141925
00000000-0000-0000-0000-000000000000 20190521 20190521000000
6efa201f-d618-4819-bae1-5a0e907ddcfb name 9702 description 84139 20190521103247
```
In my PagedListAdapter I changed it to be an implementation of `PagedListAdapter<ReportHeader, RecyclerView.ViewHolder>` (not a specific viewholder)
Added to the companion object:
```
companion object {
private val EMPTY_ID = UUID(0L,0L)
private const val LABEL = 0
private const val HEADER = 1
}
```
and overrode get view type like so:
```
override fun getItemViewType(position: Int): Int = if (getItem(position)?.id ?: EMPTY_ID == EMPTY_ID) LABEL else HEADER
```
I then created two separate view holders:
```
class ReportHeaderViewHolder(val binding: ListItemReportBinding) : RecyclerView.ViewHolder(binding.root)
class ReportLabelViewHolder(val binding: ListItemReportLabelBinding) : RecyclerView.ViewHolder(binding.root)
```
and implemented the other overridden methods like so:
```
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RecyclerView.ViewHolder {
val inflater = LayoutInflater.from(parent.context)
return when (viewType) {
HEADER -> ReportHeaderViewHolder(DataBindingUtil.inflate(inflater, R.layout.list_item_report, parent, false))
else -> ReportLabelViewHolder(DataBindingUtil.inflate(inflater, R.layout.list_item_report_label, parent, false))
}
}
override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
val reportItem = getItem(position)
when (getItemViewType(position)) {
HEADER -> {
(holder as ReportHeaderViewHolder).binding.apply {
report = reportItem
executePendingBindings()
}
}
LABEL -> {
(holder as ReportLabelViewHolder).binding.apply {
date = reportItem?.name
executePendingBindings()
}
}
}
}
```
I hope this helps and inspires people to find even better solutions
|
63,049,432 |
Can anyone guide me on how to plot a column value against Lat & Long? The data I want to plot through Python is shown below. I have run the code, but it isn't working. Kindly guide me on how to do it.
Data in CSV File :
```
Longitude Latitude RSRP
71.676847 29.376015 -89
71.676447 29.376115 -101
71.677847 29.376215 -90
```
Code :
```py
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
df = pd.read_csv('C:\\Users\\uwx630237\\BWPMR.csv')
gdf = gpd.GeoDataFrame(df)
Lon = df['Longitude']
Lat = df['Latitude']
RSRP = df['RSRP']
```
[Required Ouput Picture](https://i.stack.imgur.com/bdhnS.jpg)
|
2020/07/23
|
[
"https://Stackoverflow.com/questions/63049432",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13980933/"
] |
Without a map as the plot background, you can use `df.plot.scatter()`. Here are the relevant lines of code you can try:
```
# ... previous lines of code
# add a column, named `color`, and set values in it
df.loc[:, 'color'] = 'green' # set all rows -> color=green
df.loc[df['RSRP'] < -100, 'color'] = 'red' # set some rows -> color=red
# plot the data as a scatter plot
df.plot.scatter( x='Longitude' , y='Latitude', s=20, color=df['color'], alpha=0.8 )
```
The output will look like this:
[](https://i.stack.imgur.com/S8TdQ.png)
|
You can achieve that using folium. Here is a toy example of how to add data to a map of San Francisco:
```
import folium
import folium.plugins
import branca
import branca.colormap as cm

# linear colour scale used to colour each circle by its RSRP value
colormap = cm.LinearColormap(colors=['red', 'lightblue'], index=[90, 100], vmin=90, vmax=100)

sanfrancisco_map = folium.Map(location=[37.77, -122.42], zoom_start=12)

lat = list(df.latitude)
lon = list(df.longitude)
rsrp = list(df.RSRP)

for loc, value in zip(zip(lat, lon), rsrp):
    folium.Circle(
        location=loc,
        radius=10,
        fill=True,
        color=colormap(value),
    ).add_to(sanfrancisco_map)

# add the colour scale legend to the map
sanfrancisco_map.add_child(colormap)
```
|
35,101,850 |
I'm trying to internationalize a Django app by following the wonderful Django documentation. The problem is that when I try to run the command to create language files:
```
python manage.py makemessages -l fr
```
It outputs an error :
```
CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.15 or newer installed.
```
My configuration :
* OS : **OSX El Capitan v10.11.3**
* Python : **v3.5**
* Django : **v1.8**
|
2016/01/30
|
[
"https://Stackoverflow.com/questions/35101850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3755845/"
] |
Install gettext GNU tools with Homebrew using Terminal
1. Install [Homebrew](http://brew.sh) : `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`
2. Install GNU gettext : `brew install gettext`
3. Create symlink : `brew link gettext --force`
|
If you try running `brew link gettext --force`, it warns you:
>
> Note that doing so can interfere with building software.
>
>
>
The safest way to use it is to add gettext binary to your path:
```
export PATH="/usr/local/opt/gettext/bin:$PATH"
```
|
35,101,850 |
I'm trying to internationalize a Django app by following the wonderful Django documentation. The problem is that when I try to run the command to create language files:
```
python manage.py makemessages -l fr
```
It outputs an error :
```
CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.15 or newer installed.
```
My configuration :
* OS : **OSX El Capitan v10.11.3**
* Python : **v3.5**
* Django : **v1.8**
|
2016/01/30
|
[
"https://Stackoverflow.com/questions/35101850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3755845/"
] |
Install gettext GNU tools with Homebrew using Terminal
1. Install [Homebrew](http://brew.sh) : `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`
2. Install GNU gettext : `brew install gettext`
3. Create symlink : `brew link gettext --force`
|
**For Windows**:
1. Download [gettext](https://mlocati.github.io/articles/gettext-iconv-windows.html) and install
2. Add `C:\Program Files\gettext-iconv\bin` to the system PATH variable (if you didn't do it during installation)
3. Check or create in your project directory `locale/`
4. Check or add in *settings.py*: `LOCALE_PATHS = (BASE_DIR + 'locale/', )`
Then enjoy `django-admin makemessages -l fr`. If the error keeps repeating, don't forget to restart your shell to pick up the updated environment variables.
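As a sanity check, a minimal sketch of the relevant `settings.py` lines might look like this (the paths are assumptions -- adjust them to your project layout):
```
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

USE_I18N = True

# where makemessages/compilemessages look for the .po/.mo files
LOCALE_PATHS = [
    os.path.join(BASE_DIR, "locale"),
]
```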
|
35,101,850 |
I'm trying to internationalize a Django app by following the wonderful Django documentation. The problem is that when I try to run the command to create language files:
```
python manage.py makemessages -l fr
```
It outputs an error :
```
CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.15 or newer installed.
```
My configuration :
* OS : **OSX El Capitan v10.11.3**
* Python : **v3.5**
* Django : **v1.8**
|
2016/01/30
|
[
"https://Stackoverflow.com/questions/35101850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3755845/"
] |
Install gettext GNU tools with Homebrew using Terminal
1. Install [Homebrew](http://brew.sh) : `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`
2. Install GNU gettext : `brew install gettext`
3. Create symlink : `brew link gettext --force`
|
First, create a directory named **`locale`** in the project root folder, then run:
```
sudo apt install gettext
```
|
35,101,850 |
I'm trying to internationalize a Django app by following the wonderful Django documentation. The problem is that when I try to run the command to create language files:
```
python manage.py makemessages -l fr
```
It outputs an error :
```
CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.15 or newer installed.
```
My configuration :
* OS : **OSX El Capitan v10.11.3**
* Python : **v3.5**
* Django : **v1.8**
|
2016/01/30
|
[
"https://Stackoverflow.com/questions/35101850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3755845/"
] |
**For Windows**:
1. Download [gettext](https://mlocati.github.io/articles/gettext-iconv-windows.html) and install
2. Add `C:\Program Files\gettext-iconv\bin` to the system PATH variable (if you didn't do it during installation)
3. Check or create in your project directory `locale/`
4. Check or add in *settings.py*: `LOCALE_PATHS = (BASE_DIR + 'locale/', )`
Then enjoy `django-admin makemessages -l fr`. If the error keeps repeating, don't forget to restart your shell to pick up the updated environment variables.
|
If you try running `brew link gettext --force`, it warns you:
>
> Note that doing so can interfere with building software.
>
>
>
The safest way to use it is to add gettext binary to your path:
```
export PATH="/usr/local/opt/gettext/bin:$PATH"
```
|
35,101,850 |
I'm trying to internationalize a Django app by following the wonderful Django documentation. The problem is that when I try to run the command to create language files:
```
python manage.py makemessages -l fr
```
It outputs an error :
```
CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.15 or newer installed.
```
My configuration :
* OS : **OSX El Capitan v10.11.3**
* Python : **v3.5**
* Django : **v1.8**
|
2016/01/30
|
[
"https://Stackoverflow.com/questions/35101850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3755845/"
] |
First, create a directory named **`locale`** in the project root folder, then run:
```
sudo apt install gettext
```
|
If you try running `brew link gettext --force`, it warns you:
>
> Note that doing so can interfere with building software.
>
>
>
The safest way to use it is to add gettext binary to your path:
```
export PATH="/usr/local/opt/gettext/bin:$PATH"
```
|
40,451,587 |
What is the point of Python's `operator` module? There are many obviously redundant functions there, and I don't understand why one should prefer these functions over other ways of doing the same thing.
For example:
```
>>> import operator
>>> operator.truth(0)
False
>>> bool(0)
False
```
seem to do exactly the same thing.
|
2016/11/06
|
[
"https://Stackoverflow.com/questions/40451587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
It's sometimes useful to be able to access the functionality of an operator as a function. For example, to add two numbers together you could do:
```
>> print(1 + 2)
3
```
You could also do
```
>> import operator
>> print(operator.add(1, 2))
3
```
A use case for the function approach could be you need to write a calculator function which returns an answer given a simple formula.
```
import operator as _operator
operator_mapping = {
'+': _operator.add,
'-': _operator.sub,
'*': _operator.mul,
'/': _operator.truediv,
}
def calculate(formula):
x, operator, y = formula.split(' ')
# Convert x and y to floats so we can perform mathematical
# operations on them.
x, y = map(float, (x, y))
return operator_mapping[operator](x, y)
print(calculate('1 + 2')) # prints 3.0
```
|
For completeness and consistency. Because having all operators in one place lets you do dynamic lookups later on:
```
getattr(operator, opname)(*arguments)
```
Omitting some operations because they are redundant would defeat that purpose. And because Python names are just references, it is cheap and easy to add a name to the `operator` module that is simply another reference.
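As a small illustration of that dynamic lookup (the wrapper function here is just made up for the example):
```
import operator

def apply_op(opname, *arguments):
    # resolve the operator function by name at runtime
    return getattr(operator, opname)(*arguments)

print(apply_op("add", 2, 3))   # 5
print(apply_op("mul", 4, 5))   # 20
print(apply_op("truth", 0))    # False
```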
|
40,451,587 |
What is the point of Python's `operator` module? There are many obviously redundant functions there, and I don't understand why one should prefer these functions over other ways of doing the same thing.
For example:
```
>>> import operator
>>> operator.truth(0)
False
>>> bool(0)
False
```
seem to do exactly the same thing.
|
2016/11/06
|
[
"https://Stackoverflow.com/questions/40451587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
For completeness and consistency. Because having all operators in one place lets you do dynamic lookups later on:
```
getattr(operator, opname)(*arguments)
```
Omitting some operations because they are redundant would defeat that purpose. And because Python names are just references, it is cheap and easy to add a name to the `operator` module that is simply another reference.
|
Given the existence of [`bool`](https://docs.python.org/2/library/functions.html#bool), it's hard to think of any use-case for `operator.truth` these days. Note that `bool` was new in 2.2.1, and operator predates that, so it may only exist now for historical reasons. There are also other useless functions in the operator module, [such as `operator.abs`](https://stackoverflow.com/q/22387166/674039), which simply calls the built-in `abs`.
Not everything in `operator` is entirely useless, though - operator's [C implementation](https://github.com/python/cpython/blob/master/Modules/_operator.c), if available, can offer performance gains over pure Python implementations. The `itemgetter`, `attrgetter` and `methodcaller` functions are more readable and generally better performing utility functions for tasks which are often handled by lambda functions.
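For example, a rough comparison of `operator.itemgetter` with the equivalent lambda (the sample data is made up):
```
from operator import itemgetter

rows = [('carol', 3), ('alice', 1), ('bob', 2)]

# Equivalent results; the itemgetter version is usually faster and arguably clearer.
print(sorted(rows, key=lambda row: row[1]))
print(sorted(rows, key=itemgetter(1)))
# Both print [('alice', 1), ('bob', 2), ('carol', 3)]
```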
|
40,451,587 |
What is the point of Python's `operator` module? There are many obviously redundant functions there, and I don't understand why one should prefer to use these functions rather than other ways to do the same thing.
For example:
```
>>> import operator
>>> operator.truth(0)
False
>>> bool(0)
False
```
seem to do exactly the same thing.
|
2016/11/06
|
[
"https://Stackoverflow.com/questions/40451587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
It's sometimes useful to be able to access the functionality of an operator as a function. For example, to add two numbers together you could do:
```
>>> print(1 + 2)
3
```
You could also do
```
>>> import operator
>>> print(operator.add(1, 2))
3
```
A use case for the function approach is writing a calculator function that returns an answer given a simple formula.
```
import operator as _operator
operator_mapping = {
'+': _operator.add,
'-': _operator.sub,
'*': _operator.mul,
'/': _operator.truediv,
}
def calculate(formula):
x, operator, y = formula.split(' ')
# Convert x and y to floats so we can perform mathematical
# operations on them.
x, y = map(float, (x, y))
return operator_mapping[operator](x, y)
print(calculate('1 + 2')) # prints 3.0
```
|
Given the existence of [`bool`](https://docs.python.org/2/library/functions.html#bool), it's hard to think of any use-case for `operator.truth` these days. Note that `bool` was new in 2.2.1, and operator predates that, so it may only exist now for historical reasons. There are also other useless functions in the operator module, [such as `operator.abs`](https://stackoverflow.com/q/22387166/674039), which simply calls the built-in `abs`.
Not everything in `operator` is entirely useless, though - operator's [C implementation](https://github.com/python/cpython/blob/master/Modules/_operator.c), if available, can offer performance gains over pure Python implementations. The `itemgetter`, `attrgetter` and `methodcaller` functions are more readable and generally better performing utility functions for tasks which are often handled by lambda functions.
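To illustrate the other two helpers, here is a small sketch (the `Word` class and the data are invented for the example):
```
from operator import attrgetter, methodcaller

class Word:
    def __init__(self, text):
        self.text = text
    def shout(self):
        return self.text.upper()

words = [Word('beta'), Word('alpha')]

# attrgetter('text') is equivalent to lambda w: w.text
print([w.text for w in sorted(words, key=attrgetter('text'))])   # ['alpha', 'beta']

# methodcaller('shout') is equivalent to lambda w: w.shout()
print(list(map(methodcaller('shout'), words)))                   # ['BETA', 'ALPHA']
```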
|
46,936,397 |
I have a string that can vary but will always contain `x={stuffNeeded}`.
For example: `n=1,x={y,z,w},erore={3,4,5}` or `x={y,z,w}` or `erore={3,4,5},x={y,z,w}` etc.
I am having a devil of a time figuring out how to get `y,z,w`. The closest I got to finding the answer was based off of Yatharth's answer on this other post [Regular expression to return all characters between two special characters](https://stackoverflow.com/questions/9889635/regular-expression-to-return-all-characters-between-two-special-characters).
In my searching I've so far come across something that almost worked. Testing was done here <http://rubular.com/r/bgixv2J6yF> and in python.
This was tested in python using:
```
i='n=1,x={y,z,w},erore={3,4,5}'
j='n=1,x={y,z,w}'
print re.search('x={(.*)}',i).group(1)
print re.search('x={(.*)}',j).group(1)
print re.search('x={(.*)}.',i).group(1)
print re.search('x={(.*)}.',j).group(1)
```
Result for the four different print:
```
'y,z,w'
'y,z,w},erore={3,4,5'
AttributeError: 'NoneType' object has no attribute 'group'
'y,z,w'
```
Needed result is `'y,z,w'` for all cases and then if `x={*}` really isn't found I would put an error catch.
Thank you in advance.
|
2017/10/25
|
[
"https://Stackoverflow.com/questions/46936397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5639728/"
] |
This regex does what you're trying to do:
```
regex = r'x={([^\}]*)}'
```
Live demo [here](https://regex101.com/r/OGaFkn/2)
**Explanation**
* **`{([^\}]*)`** : look for an opening bracket, then look for (and capture) any number of non `}` characters. So, your group 1 will contain the captured values for x.
* **`}`**: look for a closing bracket
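As a rough sketch of how this might be wired up with the error catch you mentioned (the sample strings follow the examples in the question, and the loop is just illustrative):
```
import re

pattern = re.compile(r'x={([^\}]*)}')

for s in ('n=1,x={y,z,w},erore={3,4,5}', 'erore={3,4,5},x={y,z,w}', 'n=1'):
    m = pattern.search(s)
    if m:
        print(m.group(1))    # 'y,z,w'
    else:
        print('no x={...} found in %r' % s)
```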
|
The main problem is that `{(.*)}` matches the longest string starting with `{` and ending with `}`, which in some cases is `y,z,w},erore={3,4,5`.
You can make the match non-greedy by adding `?`. You don't need to handle any other case.
```
import re
i='n=1,x={y,z,w},erore={3,4,5}'
j='n=1,x={y,z,w}'
expr = 'x={(.*?)}'
print (re.search(expr,i).group(1))
print (re.search(expr,j).group(1))
```
result:
```
y,z,w
y,z,w
```
|
46,936,397 |
I have a string that can vary but will always contain `x={stuffNeeded}`.
For example: `n=1,x={y,z,w},erore={3,4,5}` or `x={y,z,w}` or `erore={3,4,5},x={y,z,w}` etc.
I am having a devil of a time figuring out how to get `y,z,w`. The closest I got to finding the answer was based off of Yatharth's answer on this other post [Regular expression to return all characters between two special characters](https://stackoverflow.com/questions/9889635/regular-expression-to-return-all-characters-between-two-special-characters).
In my searching I've so far come across something that almost worked. Testing was done here <http://rubular.com/r/bgixv2J6yF> and in python.
This was tested in python using:
```
i='n=1,x={y,z,w},erore={3,4,5}'
j='n=1,x={y,z,w}'
print re.search('x={(.*)}',i).group(1)
print re.search('x={(.*)}',j).group(1)
print re.search('x={(.*)}.',i).group(1)
print re.search('x={(.*)}.',j).group(1)
```
Result for the four different print:
```
'y,z,w'
'y,z,w},erore={3,4,5'
AttributeError: 'NoneType' object has no attribute 'group'
'y,z,w'
```
Needed result is `'y,z,w'` for all cases and then if `x={*}` really isn't found I would put an error catch.
Thank you in advance.
|
2017/10/25
|
[
"https://Stackoverflow.com/questions/46936397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5639728/"
] |
This regex does what you're trying to do:
```
regex = r'x={([^\}]*)}'
```
Live demo [here](https://regex101.com/r/OGaFkn/2)
**Explanation**
* **`{([^\}]*)`** : look for an opening bracket, then look for (and capture) any number of non `}` characters. So, your group 1 will contain the captured values for x.
* **`}`**: look for a closing bracket
|
Using `re.findall`:
```
>>> import re
>>> re.findall('x={[^\}]*}', s)
```
#driver values :
```
IN : s = 'n=1,x={y,z,w},erore={3,4,5}'
OUT : ['x={y,z,w}']
IN : s = 'n=1,x={y,z,w}'
OUT : ['x={y,z,w}']
IN : s = 'x={y,z,w}'
OUT : ['x={y,z,w}']
```
---
Now, to get the individual values, use `strip` and `split`:
```
>>> l = re.findall('x={[^\}]*}', s)
#if `l` is not empty
>>> out = l[0]
=> 'x={y,z,w}'
>>> y, z, x = out.strip('x={}').split(',')
>>> y, z, x
=> ('y', 'z', 'w')
```
|
46,936,397 |
I have a string that can vary but will always contain `x={stuffNeeded}`.
For example: `n=1,x={y,z,w},erore={3,4,5}` or `x={y,z,w}` or `erore={3,4,5},x={y,z,w}` etc.
I am having a devil of a time figuring out how to get `y,z,w`. The closest I got to finding the answer was based off of Yatharth's answer on this other post [Regular expression to return all characters between two special characters](https://stackoverflow.com/questions/9889635/regular-expression-to-return-all-characters-between-two-special-characters).
In my searching I've so far come across something that almost worked. Testing was done here <http://rubular.com/r/bgixv2J6yF> and in python.
This was tested in python using:
```
i='n=1,x={y,z,w},erore={3,4,5}'
j='n=1,x={y,z,w}'
print re.search('x={(.*)}',i).group(1)
print re.search('x={(.*)}',j).group(1)
print re.search('x={(.*)}.',i).group(1)
print re.search('x={(.*)}.',j).group(1)
```
Result for the four different print:
```
'y,z,w'
'y,z,w},erore={3,4,5'
AttributeError: 'NoneType' object has no attribute 'group'
'y,z,w'
```
Needed result is `'y,z,w'` for all cases and then if `x={*}` really isn't found I would put an error catch.
Thank you in advance.
|
2017/10/25
|
[
"https://Stackoverflow.com/questions/46936397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5639728/"
] |
This regex does what you're trying to do:
```
regex = r'x={([^\}]*)}'
```
Live demo [here](https://regex101.com/r/OGaFkn/2)
**Explanation**
* **`{([^\}]*)`** : look for an opening bracket, then look for (and capture) any number of non `}` characters. So, your group 1 will contain the captured values for x.
* **`}`**: look for a closing bracket
|
You can try this:
```
import re
s = 'n=1,x={y,z,w},erore={3,4,5}'
final_data = re.findall('=\{(.*?)\}', s)
```
Output:
```
['y,z,w', '3,4,5']
```
|
46,936,397 |
I have a string that can vary but will always contain `x={stuffNeeded}`.
For example: `n=1,x={y,z,w},erore={3,4,5}` or `x={y,z,w}` or `erore={3,4,5},x={y,z,w}` etc.
I am having a devil of a time figuring out how to get `y,z,w`. The closest I got to finding the answer was based off of Yatharth's answer on this other post [Regular expression to return all characters between two special characters](https://stackoverflow.com/questions/9889635/regular-expression-to-return-all-characters-between-two-special-characters).
In my searching I've so far come across something that almost worked. Testing was done here <http://rubular.com/r/bgixv2J6yF> and in python.
This was tested in python using:
```
i='n=1,x={y,z,w},erore={3,4,5}'
j='n=1,x={y,z,w}'
print re.search('x={(.*)}',i).group(1)
print re.search('x={(.*)}',j).group(1)
print re.search('x={(.*)}.',i).group(1)
print re.search('x={(.*)}.',j).group(1)
```
Result for the four different print:
```
'y,z,w'
'y,z,w},erore={3,4,5'
AttributeError: 'NoneType' object has no attribute 'group'
'y,z,w'
```
Needed result is `'y,z,w'` for all cases and then if `x={*}` really isn't found I would put an error catch.
Thank you in advance.
|
2017/10/25
|
[
"https://Stackoverflow.com/questions/46936397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5639728/"
] |
The main problem is that `{(.*)}` matches the longest string starting with `{` and ending with `}`, which in some cases is `y,z,w},erore={3,4,5`.
You can make the match non-greedy by adding `?`. You don't need to handle any other case.
```
import re
i='n=1,x={y,z,w},erore={3,4,5}'
j='n=1,x={y,z,w}'
expr = 'x={(.*?)}'
print (re.search(expr,i).group(1))
print (re.search(expr,j).group(1))
```
result:
```
y,z,w
y,z,w
```
|
Using `re.findall`:
```
>>> import re
>>> re.findall('x={[^\}]*}', s)
```
#driver values :
```
IN : s = 'n=1,x={y,z,w},erore={3,4,5}'
OUT : ['x={y,z,w}']
IN : s = 'n=1,x={y,z,w}'
OUT : ['x={y,z,w}']
IN : s = 'x={y,z,w}'
OUT : ['x={y,z,w}']
```
---
Now, to get the individual values, use `strip` and `split`:
```
>>> l = re.findall('x={[^\}]*}', s)
#if `l` is not empty
>>> out = l[0]
=> 'x={y,z,w}'
>>> y, z, x = out.strip('x={}').split(',')
>>> y, z, x
=> ('y', 'z', 'w')
```
|
46,936,397 |
I have a string that can vary but will always contain `x={stuffNeeded}`.
For example: `n=1,x={y,z,w},erore={3,4,5}` or `x={y,z,w}` or `erore={3,4,5},x={y,z,w}` etc.
I am having a devil of a time figuring out how to get `y,z,w`. The closest I got to finding the answer was based off of Yatharth's answer on this other post [Regular expression to return all characters between two special characters](https://stackoverflow.com/questions/9889635/regular-expression-to-return-all-characters-between-two-special-characters).
In my searching I've so far come across something that almost worked. Testing was done here <http://rubular.com/r/bgixv2J6yF> and in python.
This was tested in python using:
```
i='n=1,x={y,z,w},erore={3,4,5}'
j='n=1,x={y,z,w}'
print re.search('x={(.*)}',i).group(1)
print re.search('x={(.*)}',j).group(1)
print re.search('x={(.*)}.',i).group(1)
print re.search('x={(.*)}.',j).group(1)
```
Result for the four different print:
```
'y,z,w'
'y,z,w},erore={3,4,5'
AttributeError: 'NoneType' object has no attribute 'group'
'y,z,w'
```
Needed result is `'y,z,w'` for all cases and then if `x={*}` really isn't found I would put an error catch.
Thank you in advance.
|
2017/10/25
|
[
"https://Stackoverflow.com/questions/46936397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5639728/"
] |
The main problem is that `{(.*)}` matches the longest string starting with `{` and ending with `}`, which in some cases is `y,z,w},erore={3,4,5`.
You can make the match non-greedy by adding `?`. You don't need to handle any other case.
```
import re
i='n=1,x={y,z,w},erore={3,4,5}'
j='n=1,x={y,z,w}'
expr = 'x={(.*?)}'
print (re.search(expr,i).group(1))
print (re.search(expr,j).group(1))
```
result:
```
y,z,w
y,z,w
```
|
You can try this:
```
import re
s = 'n=1,x={y,z,w},erore={3,4,5}'
final_data = re.findall('=\{(.*?)\}', s)
```
Output:
```
['y,z,w', '3,4,5']
```
|
36,534,313 |
In python,
I have a dictionary composed of the following:
```
[ ['FOXP2_MOUSE', 'AminoAcidSequence1'], ['FOXP2_RAT','AminoAcidSequence2'], ['FOXP2_DOG', 'AminoAcidSequence3'] ].
```
I'm trying to remove the entry keyed `FOXP2_DOG` from the dictionary and place it in a new dictionary by itself. I've tried different methods (`del` and `.remove`) to no avail.
|
2016/04/10
|
[
"https://Stackoverflow.com/questions/36534313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6184948/"
] |
`\s` is just whitespace. You can have word boundaries that aren't whitespace (punctuation, etc.) which is when you need to use `\b`. If you're only matching words that are delimited by whitespace then you can just use `\s`; and in that case you don't need the `\b`.
```
import re
sentence = 'Non-whitespace delimiters: Commas, semicolons; etc.'
print(re.findall(r'(\b\w+)\s+', sentence))
print(re.findall(r'(\b\w+\b)+', sentence))
```
Produces:
```
['whitespace']
['Non', 'whitespace', 'delimiters', 'Commas', 'semicolons', 'etc']
```
Notice how trying to catch word endings with just `\s` ends up missing most of them.
|
It's not because it's at the end of the word, it's because you know what comes *after* the word. In your example:
```
m = re.search(r'(\b\w+\b)\s+\1', 'Cherry tree blooming will begin in in later March')
```
...the first `\b` is necessary to prevent a match starting with the `in` in `begin`. The second one is redundant because you're explicitly matching the non-word characters (`\s+`) that follow the word. Word boundaries are for situations where you don't know what the character on the other side will be, or even if there will be a character there.
Where you *should* be using another one is at the end of the regex. For example:
```
m = re.search(r'(\b\w+)\s+\1\b', "Let's go to the theater")
```
Without the second `\b`, you would get a false positive for `the theater`.
|
36,534,313 |
In python,
I have a dictionary composed of the following:
```
[ ['FOXP2_MOUSE', 'AminoAcidSequence1'], ['FOXP2_RAT','AminoAcidSequence2'], ['FOXP2_DOG', 'AminoAcidSequence3'] ].
```
I'm trying to remove the entry keyed `FOXP2_DOG` from the dictionary and place it in a new dictionary by itself. I've tried different methods (`del` and `.remove`) to no avail.
|
2016/04/10
|
[
"https://Stackoverflow.com/questions/36534313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6184948/"
] |
`\s` is just whitespace. You can have word boundaries that aren't whitespace (punctuation, etc.) which is when you need to use `\b`. If you're only matching words that are delimited by whitespace then you can just use `\s`; and in that case you don't need the `\b`.
```
import re
sentence = 'Non-whitespace delimiters: Commas, semicolons; etc.'
print(re.findall(r'(\b\w+)\s+', sentence))
print(re.findall(r'(\b\w+\b)+', sentence))
```
Produces:
```
['whitespace']
['Non', 'whitespace', 'delimiters', 'Commas', 'semicolons', 'etc']
```
Notice how trying to catch word endings with just `\s` ends up missing most of them.
|
> I understand that `\b` can represent either the beginning or the end of a word. When would `\b` be required to represent the end?
`\b` is never *required* to represent the end, or beginning, of a word. To answer your bigger question, it's only useful during development -- when working with natural language, you'll ultimately need to replace `\b` with something else. Why?
The `\b` operator matches a word boundary, as you've discovered. But a key concept here is, "What is a *word*?" The answer is the very narrow set `[A-Za-z0-9_]` -- *word* is not a natural language word but a computer language identifier. The `\b` operator exists for a formal language's parser.
This means it doesn't handle common natural language situations like:
The word *let's* becomes two words, 'let' & 's', if `\b` represents the boundaries of a word. Also, titles like *Mr.* & *Mrs.* lose their period.
Similarly, if `\b` represents the start of a word, then the apostrophe in these cases will be lost: *'twas 'bout 'cause*
Hyphenated words suffer at the hands of `\b` as well, e.g. *mother-in-law* (unless you want her to suffer).
Unfortunately, you can't simply augment `\b` by including it in a character set as it doesn't represent a character. You may be able to combine it with other characters via alternation in a zero-width assertion.
When working with natural language, the `\b` operator is great for quickly prototyping an idea, but ultimately, probably not what you want. Ditto `\w`, but, since it represents a character, it's more easily augmented.
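A quick sketch of the point (the sample sentence is made up):
```
import re

sentence = "Let's ask Mr. Smith's mother-in-law, 'cause 'twas her idea."

# \b-delimited "words" are identifier-style tokens, so contractions,
# abbreviations and hyphenated words get split apart.
print(re.findall(r'\b\w+\b', sentence))
# ['Let', 's', 'ask', 'Mr', 'Smith', 's', 'mother', 'in', 'law', 'cause', 'twas', 'her', 'idea']
```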
|
36,534,313 |
In python,
I have a dictionary composed of the following:
```
[ ['FOXP2_MOUSE', 'AminoAcidSequence1'], ['FOXP2_RAT','AminoAcidSequence2'], ['FOXP2_DOG', 'AminoAcidSequence3'] ].
```
I'm trying to remove the entry keyed `FOXP2_DOG` from the dictionary and place it in a new dictionary by itself. I've tried different methods (`del` and `.remove`) to no avail.
|
2016/04/10
|
[
"https://Stackoverflow.com/questions/36534313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6184948/"
] |
Consider wanting to match the word "march":
```
>>> regex = re.compile(r'\bmarch\b')
```
It can come at the end of the sentence...
```
>>> regex.search('I love march')
<_sre.SRE_Match object at 0x10568e4a8>
```
Or the beginning ...
```
>>> regex.search('march is a great month')
<_sre.SRE_Match object at 0x10568e440>
```
But if I don't want to match things like `marching`, word boundaries are the most convenient:
```
>>> regex.search('my favorite pass-time is marching')
>>>
```
You might be thinking, "But I can get all of these things using `r'\s+march\s+'`", and you're kind of right... The difference is in what matches. With the `\s+`, you might also be including some whitespace in the match (since that's what `\s+` means). This can make things like searching for a word and replacing it more difficult, because you might have to keep the whitespace consistent with what it was before.
|
It's not because it's at the end of the word, it's because you know what comes *after* the word. In your example:
```
m = re.search(r'(\b\w+\b)\s+\1', 'Cherry tree blooming will begin in in later March')
```
...the first `\b` is necessary to prevent a match starting with the `in` in `begin`. The second one is redundant because you're explicitly matching the non-word characters (`\s+`) that follow the word. Word boundaries are for situations where you don't know what the character on the other side will be, or even if there will be a character there.
Where you *should* be using another one is at the end of the regex. For example:
```
m = re.search(r'(\b\w+)\s+\1\b', "Let's go to the theater")
```
Without the second `\b`, you would get a false positive for `the theater`.
|
36,534,313 |
In python,
I have a dictionary composed of the following:
```
[ ['FOXP2_MOUSE', 'AminoAcidSequence1'], ['FOXP2_RAT','AminoAcidSequence2'], ['FOXP2_DOG', 'AminoAcidSequence3'] ].
```
I'm trying to remove the entry keyed `FOXP2_DOG` from the dictionary and place it in a new dictionary by itself. I've tried different methods (`del` and `.remove`) to no avail.
|
2016/04/10
|
[
"https://Stackoverflow.com/questions/36534313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6184948/"
] |
Consider wanting to match the word "march":
```
>>> regex = re.compile(r'\bmarch\b')
```
It can come at the end of the sentence...
```
>>> regex.search('I love march')
<_sre.SRE_Match object at 0x10568e4a8>
```
Or the beginning ...
```
>>> regex.search('march is a great month')
<_sre.SRE_Match object at 0x10568e440>
```
But if I don't want to match things like `marching`, word boundaries are the most convenient:
```
>>> regex.search('my favorite pass-time is marching')
>>>
```
You might be thinking, "But I can get all of these things using `r'\s+march\s+'`", and you're kind of right... The difference is in what matches. With the `\s+`, you might also be including some whitespace in the match (since that's what `\s+` means). This can make things like searching for a word and replacing it more difficult, because you might have to keep the whitespace consistent with what it was before.
|
> I understand that `\b` can represent either the beginning or the end of a word. When would `\b` be required to represent the end?
`\b` is never *required* to represent the end, or beginning, of a word. To answer your bigger question, it's only useful during development -- when working with natural language, you'll ultimately need to replace `\b` with something else. Why?
The `\b` operator matches a word boundary, as you've discovered. But a key concept here is, "What is a *word*?" The answer is the very narrow set `[A-Za-z0-9_]` -- *word* is not a natural language word but a computer language identifier. The `\b` operator exists for a formal language's parser.
This means it doesn't handle common natural language situations like:
The word *let's* becomes two words, 'let' & 's', if `\b` represents the boundaries of a word. Also, titles like *Mr.* & *Mrs.* lose their period.
Similarly, if `\b` represents the start of a word, then the apostrophe in these cases will be lost: *'twas 'bout 'cause*
Hyphenated words suffer at the hands of `\b` as well, e.g. *mother-in-law* (unless you want her to suffer).
Unfortunately, you can't simply augment `\b` by including it in a character set as it doesn't represent a character. You may be able to combine it with other characters via alternation in a zero-width assertion.
When working with natural language, the `\b` operator is great for quickly prototyping an idea, but ultimately, probably not what you want. Ditto `\w`, but, since it represents a character, it's more easily augmented.
|