qid (int64) | question (string) | date (string) | metadata (sequence of strings) | response_j (string) | response_k (string) | __index_level_0__ (int64)
---|---|---|---|---|---|---
64,765,086 | I'm trying to run a server on my laptop; when I type 'python manage.py runserver' in the console, I receive some errors. Could it be that I need to install some modules? I tried 'pip install python-cron', but that didn't work.
The error says:
```
[2020-11-10 09:04:47,241] autoreload: INFO - Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
fn(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/utils/autoreload.py", line 76, in raise_last_exception
raise _exception[1]
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/core/management/__init__.py", line 357, in execute
autoreload.check_errors(django.setup)()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
fn(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/apps/config.py", line 90, in create
module = import_module(entry)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'django_cron'
```
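For reference, the missing `django_cron` module normally comes from the `django-cron` package on PyPI (an assumption I haven't confirmed), so here is a sketch of the check I would run with the same interpreter that runs manage.py:
```
# assumption: the PyPI package name is django-cron, not python-cron
import sys
print(sys.executable)   # which Python is actually running?
import django_cron      # raises ModuleNotFoundError if the install missed
```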
The cron.py file I have is:
```
from django.contrib.auth.models import User
import os
import datetime
from crontab import CronTab
#from django_common.helper import send_mail
from django_cron import CronJobBase, Schedule
from .models import Photo
from PIL import Image
class PhotoDeleteCronJob(CronJobBase):
    RUN_EVERY_MINS = 1
    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'cron.PhotoDeleteCronJob'

    def do(self):
        # delete every Photo object each time the job runs
        photos = Photo.objects.all()
        photos.delete()
```
Thanks in advance; if you need to see any other files, just ask. | 2020/11/10 | [
"https://Stackoverflow.com/questions/64765086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13740000/"
] | I have tried to re-create the same design with some minor changes in Flutter. I had to enable Flutter web support by following the instructions here:
[Flutter Web](https://flutter.dev/docs/get-started/web)
[](https://i.stack.imgur.com/oxX5K.gif)
Regarding the left menu, I used the `AnimatedSize` widget to give the sliding-drawer feel and placed it inside a `Row`.
Please find the code below:
```
import 'package:flutter/material.dart';
final Color darkBlue = Color.fromARGB(255, 18, 32, 47);
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
theme: ThemeData.dark().copyWith(scaffoldBackgroundColor: darkBlue),
debugShowCheckedModeBanner: false,
home: Scaffold(
body: Center(
child: MyWidget(),
),
),
);
}
}
class MyWidget extends StatefulWidget {
@override
_MyWidgetState createState() => _MyWidgetState();
}
class _MyWidgetState extends State<MyWidget>
with SingleTickerProviderStateMixin {
final colors = <Color>[Colors.indigo, Colors.blue, Colors.orange, Colors.red];
double _size = 250.0;
bool _large = true;
void _updateSize() {
setState(() {
_size = _large ? 250.0 : 0.0;
_large = !_large;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Row(
children: [
AnimatedSize(
curve: Curves.easeIn,
vsync: this,
duration: Duration(seconds: 1),
child: LeftDrawer(size: _size)),
Expanded(
flex: 4,
child: Container(
child: Column(
children: [
Container(
color: Colors.white,
padding: const EdgeInsets.all(8),
child: Row(
children: [
IconButton(
icon: Icon(Icons.menu, color: Colors.black87),
onPressed: () {
_updateSize();
},
),
FlatButton(
child: Text(
'Dashboard',
style: const TextStyle(color: Colors.black87),
),
onPressed: () {},
),
FlatButton(
child: Text(
'User',
style: const TextStyle(color: Colors.black87),
),
onPressed: () {},
),
FlatButton(
child: Text(
'Settings',
style: const TextStyle(color: Colors.black87),
),
onPressed: () {},
),
const Spacer(),
IconButton(
icon: Icon(Icons.brightness_3, color: Colors.black87),
onPressed: () {},
),
IconButton(
icon: Icon(Icons.notification_important,
color: Colors.black87),
onPressed: () {},
),
CircleAvatar(),
],
),
),
Container(
height: 1,
color: Colors.black12,
),
Card(
margin: EdgeInsets.zero,
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.circular(0),
),
child: Container(
color: Colors.white,
padding: const EdgeInsets.all(20),
child: Row(
children: [
Text(
'Home / Admin / Dashboard',
style: const TextStyle(color: Colors.black),
),
],
),
),
),
Expanded(
child: ListView(
children: [
Row(
children: [
_container(0),
_container(1),
_container(2),
_container(3),
],
),
Container(
height: 400,
color: Color(0xFFE7E7E7),
padding: const EdgeInsets.all(16),
child: Card(
color: Colors.white,
child: Container(
padding: const EdgeInsets.all(16),
child: Text(
'Traffic',
style: const TextStyle(color: Colors.black87),
),
),
),
),
],
),
),
],
),
),
),
],
),
);
}
Widget _container(int index) {
return Expanded(
child: Container(
padding: const EdgeInsets.all(20),
color: Color(0xFFE7E7E7),
child: Card(
color: Color(0xFFE7E7E7),
child: Container(
color: colors[index],
width: 250,
height: 140,
padding: const EdgeInsets.all(20),
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Row(
children: [
Expanded(
child: Text(
'9.823',
style: TextStyle(fontSize: 24),
)),
Icon(Icons.more_vert),
],
),
Text('Members online')
],
),
),
),
),
);
}
}
class LeftDrawer extends StatelessWidget {
const LeftDrawer({
Key key,
this.size,
}) : super(key: key);
final double size;
@override
Widget build(BuildContext context) {
return Expanded(
flex: 1,
child: Container(
width: size,
color: const Color(0xFF2C3C56),
child: ListView(
children: [
Container(
alignment: Alignment.center,
padding: const EdgeInsets.all(16),
color: Color(0xFF223047),
child: Text('CORE UI'),
),
_tile('Dashboard'),
Container(
padding: const EdgeInsets.only(left: 10),
margin: const EdgeInsets.only(top: 30),
child: Text('THEME',
style: TextStyle(
color: Colors.white54,
))),
_tile('Colors'),
_tile('Typography'),
_tile('Base'),
_tile('Buttons'),
],
),
),
);
}
Widget _tile(String label) {
return ListTile(
title: Text(label),
onTap: () {},
);
}
}
``` | You can use the `Drawer` widget inside a `Scaffold`. If you want the navigation drawer to be able to resize according to the browser height and width, you can use the [responsive\_scaffold](https://pub.dev/packages/responsive_scaffold) package. | 17,041
22,073,028 | I just started Python three days ago and I am already facing a problem. I couldn't find any information about it on the web. It looks like a bug, but I think I did something wrong.
However, I can't find the problem.
Here we go:
I have one list called "inputData".
All I do is take the first 10 entries of each array, fit them with polyfit, save the fit parameters in the variable "linFit", and afterwards subtract the fit from my "inputData" and save the result in a new list called "correctData". The print lines are only there to show you the "bug".
If you run the code below and compare the "inputData" print before and after the procedure, it is different. I have no idea why... :(
However, if you remove one of the two arrays in "inputData", it works fine.
Does anyone have any idea?
Thanks!
```
import matplotlib.pyplot as plt
import pylab as np
inputData = [np.array([[ 1.06999998e+01, 1.71811953e-01],
[ 2.94000015e+01, 2.08369687e-01],
[ 3.48000002e+01, 3.70725733e-01],
[ 4.28000021e+01, 4.96874842e-01],
[ 5.16000004e+01, 5.20280702e-01],
[ 6.34000015e+01, 6.79658073e-01],
[ 7.72000008e+01, 7.15826614e-01],
[ 8.08000031e+01, 8.38463318e-01],
[ 9.27000008e+01, 9.07969677e-01],
[ 10.65000000e+01, 10.76921320e-01],
[ 11.65000000e+01, 11.76921320e-01]]),
np.array([[ 0.25999999e+00, 1.21419430e-01],
[ 1.84000009e-01, 2.26843166e-01],
[ 2.41999998e+01, 3.69826150e-01],
[ 3.90000000e+01, 4.12130547e-01],
[ 4.20999985e+01, 5.92435598e-01],
[ 5.22999992e+01, 6.44819438e-01],
[ 6.62999992e+01, 7.23920727e-01],
[ 7.65000000e+01, 8.45791912e-01],
[ 8.22000008e+01, 9.97368264e-01],
[ 9.55000000e+01, 10.48223877e-01]])]
linFit = [['', '']]*15
linFitData = [['', '']]*15
correctData = np.copy(inputData)
print(inputData)
for i, entry in enumerate(inputData):
    CUT = np.split(entry, [10], axis=0)
    linFitData[i] = CUT[0]
    linFit[i] = np.polyfit(linFitData[i][:,0], linFitData[i][:,1], 1)
    for j, subentry in enumerate(entry):
        correctData[i][j][1] = subentry[1]-subentry[0]*(linFit[i][0]+linFit[i][1])
        #print (inputData[0][0][1])
print('----------')
print(inputData)
for i, entry in enumerate(inputData):
    plt.plot(entry[:,0], entry[:,1], '.')
    plt.plot(linFitData[i][:,0], (linFitData[i][:,0])*(linFit[i][0])+(linFit[i][1]))
    #plt.plot(correctData[i][:,0], correctData[i][:,1], '.')
``` | 2014/02/27 | [
"https://Stackoverflow.com/questions/22073028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3361064/"
] | Your `inputData` isn't a numpy array, it's a list of arrays. Those two arrays don't have the same length:
```
>>> [len(sl) for sl in inputData]
[11, 10]
```
numpy arrays can't handle varying lengths. If you try to make an array out of it, instead of having a 2-D array of float dtype, you get a 1-D array of object dtype, the members of which are lists:
```
>>> a = np.array(inputData)
>>> a.shape, a.dtype
((2,), dtype('O'))
```
and so your "copy" is actually only a shallow copy; the arrays inside are the same objects as in `inputData`:
```
>>> correctData = np.copy(inputData)
>>> inputData[0] is correctData[0]
True
>>> inputData[1] is correctData[1]
True
```
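A minimal fix, sketched here on the assumption that you want independent per-array copies rather than one big array:
```
# copy each array individually so the outer list no longer shares objects
correctData = [arr.copy() for arr in inputData]
assert correctData[0] is not inputData[0]
```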
---
BTW, you can't multiply lists like this: `linFit = [['', '']]*15`; that doesn't make a copy either (see [here](https://stackoverflow.com/questions/17702937/generating-sublists-using-multiplication-unexpected-behavior)). `linFit[0] is linFit[1]` is `True` -- try changing one of the sublists to see this. | Your code as you posted it is not runnable at all, as a bunch of definitions are missing or wrong. After fixing this and some code cleanup, I get the following, which basically shows that everything is working as intended:
```
import numpy as np
from copy import deepcopy
dataList = [np.array([[ 1.06999998e+01, 1.71811953e-01],
[ -3.94000015e+01, -7.08369687e-02],
[ 1.48000002e+01, 1.70725733e-02],
[ 6.28000021e+00, 1.96874842e-01],
[ 2.16000004e+01, -1.20280702e-02],
[ 4.34000015e+01, -3.79658073e-01],
[ 3.72000008e+01, -1.15826614e-01],
[ 8.08000031e+01, 6.38463318e-01],
[ 5.27000008e+01, 5.07969677e-01],
[ 6.65000000e+01, -4.76921320e-01]], dtype=np.float32),
np.array([[ -3.25999999e+00, 1.21419430e-01],
[ 2.84000009e-01, -4.26843166e-02],
[ -1.41999998e+01, -1.69826150e-01],
[ 1.90000000e+01, 2.12130547e-01],
[ 3.20999985e+01, -5.92435598e-02],
[ 3.22999992e+01, 1.44819438e-01],
[ 3.62999992e+01, -3.23920727e-01],
[ 4.65000000e+01, 2.45791912e-01],
[ 6.22000008e+01, 1.97368264e-02],
[ 6.55000000e+01, -1.48223877e-01]], dtype=np.float32)]
correctData = deepcopy(dataList)
for i, entry in enumerate(dataList):
    CUT = np.split(entry, 5, axis=0)[0]
    linFit = np.polyfit(CUT[:,0], CUT[:,1], 1)
    for j, subentry in enumerate(entry):
        correctData[i][j][1] = subentry[1] - subentry[0] * linFit[0] + linFit[1]
        print dataList[1][0][1]
    print('----------')
```
Outputs:
```
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
----------
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
0.121419
----------
```
The actual problem in your code above is that inputData is of type list. When you create correctData, if inputData were an array you would get a proper copy. But as it is a list, the copy creates an array of objects, which holds only references to the original arrays. So in fact, you're writing directly to inputData, not to copies. See:
```
correctData.dtype
>>> dtype('O')
```
So to fix the problem, either create a list of copies or switch to a 3-D array. To create a list with copies of all contained items, use this:
```
from copy import deepcopy
correctData = deepcopy(inputData)
``` | 17,042 |
73,749,184 | I'm following this [TensorFlow guide](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10) for object detection models and I've gotten to part 6, which is training your program. I've input this line of code:
```
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
```
But it keeps resulting in the syntax error here.
```
(tensorflow1) C:\tensorflow1\models\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
2022-09-16 14:38:10.767310: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2022-09-16 14:38:10.767443: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "train.py", line 53, in <module>
from object_detection.builders import model_builder
File "C:\tensorflow1\models\research\object_detection\builders\model_builder.py", line 34, in <module>
from object_detection.core import target_assigner
File "C:\tensorflow1\models\research\object_detection\core\target_assigner.py", line 1051
raise ValueError(f'Unknown heatmap type - {self._box_heatmap_type}')
^
SyntaxError: invalid syntax
```
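The caret points at a line using an f-string, which (from what I've read) only exists in Python 3.6+, so on 3.5 it is a syntax error. A 3.5-compatible rewrite of that line would presumably look like this (a sketch, not something I've tested):
```
# str.format() works on Python 3.5, where f-strings are a syntax error
raise ValueError('Unknown heatmap type - {}'.format(self._box_heatmap_type))
```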
It's happened before, but I managed to fix it by going back to the file and editing the changes it asks for. But this time, if I take away that quote it's on, the whole line has a red squiggly. I've never used Python or Anaconda before, and this is my first time touching it. Any help would be appreciated. I've read online this is due to my Python being an older version, and that line of code doesn't work with the old version. I think I'm using 3.5, but I'm not sure if updating the version will break everything, because I think TensorFlow only works with 3.5. | 2022/09/16 | [
"https://Stackoverflow.com/questions/73749184",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19228228/"
] | I would try something like this:
```
var paymentStatus = JObject.Parse(response.Content)["PaymentStatus"][0];
string statusDescription = paymentStatus["StatusDescription"].ToString();
string merchantTxnRefNo = paymentStatus["MerchantTxnRefNo"].ToString();
```
Or maybe you need C# classes:
```
List<PaymentStatus> paymentStatuses = JObject.Parse(json)["PaymentStatus"]
    .ToObject<List<PaymentStatus>>();
string statusDescription = paymentStatuses[0].StatusDescription;
long merchantTxnRefNo = paymentStatuses[0].MerchantTxnRefNo;

public class PaymentStatus
{
    public long MerchantTxnRefNo { get; set; }
    public long PaymentId { get; set; }
    public DateTime ProcessDate { get; set; }
    public string StatusDescription { get; set; }
    public long TrackId { get; set; }
    public long BankRefNo { get; set; }
    public string PaymentType { get; set; }
    public int ErrorCode { get; set; }
    public string ProductType { get; set; }
    public string finalStatus { get; set; }
}
``` | Since `PaymentStatus` resolves to an array, use the indexer to get the object, as below:
`var StatusDescription = (string)jObject["PaymentStatus"][0]["StatusDescription"];` | 17,043
52,747,655 | I am trying to use the TensorFlow CLI debugger in order to identify the operation which is causing a NaN during training of a network, but when I try to run the code I get an error:
`_curses.error: cbreak() returned ERR`
I'm running the code on an Ubuntu server, which I'm connecting to via SSH, and have tried to follow [this tutorial](https://www.tensorflow.org/guide/debugger).
I have tried using `tf.add_check_numerics_ops()`, but the layers in the network include while loops, so they are not compatible. This is the section of code where the error is being raised:
```
import tensorflow as tf
from tensorflow.python import debug as tf_debug
...
#Prepare data
train_data, val_data, test_data = dataset.prepare_datasets(model_config)
sess = tf.Session()
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
# Create iterators
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(handle, train_data.output_types, train_data.output_shapes)
mixed_spec, voice_spec, mixed_audio, voice_audio = iterator.get_next()
training_iterator = train_data.make_initializable_iterator()
validation_iterator = val_data.make_initializable_iterator()
testing_iterator = test_data.make_initializable_iterator()
training_handle = sess.run(training_iterator.string_handle())
...
```
and the full error is:
```
Traceback (most recent call last):
File "main.py", line 64, in <module>
@ex.automain
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/experiment.py", line 137, in automain
self.run_commandline()
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/experiment.py", line 260, in run_commandline
return self.run(cmd_name, config_updates, named_configs, {}, args)
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/experiment.py", line 209, in run
run()
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/run.py", line 221, in __call__
self.result = self.main_function(*args)
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/config/captured_function.py", line 46, in captured_function
result = wrapped(*args, **kwargs)
File "main.py", line 95, in do_experiment
training_handle = sess.run(training_iterator.string_handle())
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/wrappers/framework.py", line 455, in run
is_callable_runner=bool(callable_runner)))
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/wrappers/local_cli_wrapper.py", line 255, in on_run_start
self._run_start_response = self._launch_cli()
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/wrappers/local_cli_wrapper.py", line 431, in _launch_cli
title_color=self._title_color)
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/cli/curses_ui.py", line 492, in run_ui
self._screen_launch(enable_mouse_on_start=enable_mouse_on_start)
File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/cli/curses_ui.py", line 445, in _screen_launch
curses.cbreak()
_curses.error: cbreak() returned ERR
```
I'm pretty new to using Ubuntu (and TensorFlow), but as far as I can tell the server does have ncurses installed, which should allow the required curses-based interface:
```
acvn728@america:~/MScFinalProject$ dpkg -l '*ncurses*' | grep '^ii'
ii libncurses5:amd64 6.0+20160213-1ubuntu1 amd64 shared libraries for terminal handling
ii libncursesw5:amd64 6.0+20160213-1ubuntu1 amd64 shared libraries for terminal handling (wide character support)
ii ncurses-base 6.0+20160213-1ubuntu1 all basic terminal type definitions
ii ncurses-bin 6.0+20160213-1ubuntu1 amd64 terminal-related programs and man pages
ii ncurses-term 6.0+20160213-1ubuntu1 all additional terminal type definitions
``` | 2018/10/10 | [
"https://Stackoverflow.com/questions/52747655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9067015/"
] | Problem solved! The solution was to change
```
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
```
to
```
sess = tf_debug.LocalCLIDebugWrapperSession(sess, ui_type="readline")
```
This is similar to the solution to [this question](https://stackoverflow.com/questions/47833697/how-to-use-tensorflow-debugging-tool-tfdbg-on-tf-estimator-in-tensorflow), but I think it is important to note that they are different because a) it refers to a different function and a different API and b) I wasn't trying to run from an IDE, as mentioned in that solution. | `cbreak` would return **`ERR`** if you run a curses application that is not on a *real terminal* (i.e., something that works with [POSIX termios calls](http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap11.html#tag_11)).
From the description,
> but the layers in the network include while loops so are not compatible
it does not seem you are running in a terminal. | 17,044 |
12,397,182 | I am trying to remove all the HTML surrounding the data that I seek from a webpage, so that all that is left is the raw data that I will then be able to input into a database. So if I have something like:
```
<p class="location"> Atlanta, GA </p>
```
The following code should return
```
Atlanta, GA </p>
```
But what I expect is not what is returned. This is a more specific version of the basic problem I found [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python). Any help would be appreciated, thanks! The code is found below.
```
def delHTML(self, html):
    """
    html is a list made up of items with data surrounded by html
    this function should get rid of the html and return the data as a list
    """
    for n, i in enumerate(html):
        if i == re.match('<p class="location">', str(html[n])):
            html[n] = re.sub('<p class="location">', '', str(html[n]))
    return html
``` | 2012/09/12 | [
"https://Stackoverflow.com/questions/12397182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845888/"
] | As rightfully pointed out in the comments, you should be using a dedicated library to parse HTML and extract text; here are some examples:
* [html2text](http://www.aaronsw.com/2002/html2text/): Limited functionality, but exactly what you need.
* [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/): More complex, more powerful. | Assuming all you want is to extract the data contained in `<p class="location">` tags, you could use a quick & dirty (but correct) approach with the Python `HTMLParser` module (a simple HTML SAX parser), like this:
```
from HTMLParser import HTMLParser
class MyHTMLParser(HTMLParser):
    PLocationID = 0
    PCount = 0
    buf = ""
    out = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.PCount += 1
            if ("class", "location") in attrs and self.PLocationID == 0:
                self.PLocationID = self.PCount

    def handle_endtag(self, tag):
        if tag == "p":
            if self.PLocationID == self.PCount:
                self.out.append(self.buf)
                self.buf = ""
                self.PLocationID = 0
            self.PCount -= 1

    def handle_data(self, data):
        if self.PLocationID:
            self.buf += data

# instantiate the parser and feed it some HTML
parser = MyHTMLParser()
parser.feed("""
<html>
<body>
<p>This won't appear!</p>
<p class="location">This <b>will</b></p>
<div>
<p class="location">This <span class="someclass">too</span></p>
<p>Even if <p class="location">nested Ps <p class="location"><b>shouldn't</b> <p>be allowed</p></p> <p>this will work</p></p> (this last text is out!)</p>
</div>
</body>
</html>
""")
print parser.out
```
Output:
```
['This will', 'This too', "nested Ps shouldn't be allowed this will work"]
```
This will extract all the text contained inside any `<p class="location">` tag, stripping all the tags inside it. Separate tags (if not nested - which shouldn't be allowed anyhow for paragraphs) will have a separate entry in the `out` list.
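For comparison, a minimal sketch of the same extraction using BeautifulSoup (assuming the `bs4` package is installed):
```
from bs4 import BeautifulSoup

html = '<p>skip me</p><p class="location">This <b>will</b> appear</p>'
soup = BeautifulSoup(html, "html.parser")
# get_text() strips any nested tags, keeping only the character data
print([p.get_text() for p in soup.find_all("p", class_="location")])
# ['This will appear']
```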
Notice that for more complex requirements the hand-rolled approach can easily get out of hand; in those cases a DOM parser is way more appropriate. | 17,045
64,777,843 | Today I come with a two-in-one set of issues that's on the verge of making me smash my computer to pieces! So please, I would greatly appreciate any help, as I've been stuck on it for two days now.
I have a project where osmnx is required, so I follow the install instructions [provided](https://github.com/gboeing/osmnx#installation). This means that I created a dedicated (clean) environment for it. Within this project there is a notebook that I should run, which leads me to `ImportError: No module named dotmap`. So I say okay, and install dotmap in the environment through conda install.
Now here's the situation: even though I installed it successfully in the environment, I keep getting the same error when I run the notebook! So I think maybe I should reinstall dotmap using pip through the conda prompt. But when I use pip install in the dedicated ox environment, I get "failed to create process". So I say okay, let's install it again through conda install, but it still doesn't work.
I see that dotmap is being called from a main.py document. So I decide to check the main.py document by installing Spyder through Anaconda Navigator. Once it's installed I click launch and Spyder never launches! I try to launch from the conda prompt and I just get `Unable to create process using 'C:\Users\THESIS\.conda\envs\ox2\python.exe C:\Users\THESIS\.conda\envs\ox2\Scripts\spyder-script.py`.
At the moment I have run out of ideas, as I have tried to work around it numerous times. Before questions get asked (and I hope someone asks something), I'd like to say that I already:
* Reinstalled Anaconda Navigator
* Tried creating numerous environments with various different ways of installing osmnx and dotmap
* Tried reinstalling and launching Spyder in numerous different ways (the same thing is happening with JupyterLab)
* And probably some other things that I cannot remember now off the top of my head
I know I sound fed up and angry at the moment, but I just cannot understand how I did not find a solution to this (what should be) simple issue.
All in all, any help is greatly appreciated!! | 2020/11/10 | [
"https://Stackoverflow.com/questions/64777843",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9935756/"
] | Following droptop's recommendation, I actually just did another full clean reinstall of Anaconda, where I deleted almost everything that I could.
I installed it again and it's working now! Thanks for the help anyway!! | Your fresh anaconda install should have `jupyter`, `jupyterlab` and `spyder` in the `base` environment. Starting the anaconda prompt and typing in `jupyter notebook` should launch jupyter.
Try activating your `ox2` environment with another prompt, and follow through from step 3 of this post <https://medium.com/@nrk25693/how-to-add-your-conda-environment-to-your-jupyter-notebook-in-just-4-steps-abeab8b8d084> | 17,046 |
Error occurs upon `import numpy as np`; the command works fine when typed directly in the terminal, but fails when run via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce are below.
Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0).
For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183).
What is the problem, and how to fix?
---
**Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1
---
**Steps to reproduce:**
```
conda create --name vsc
conda activate vsc
conda install python==3.7.5
conda install numpy
# in VSCode: import numpy as np, etc
```
---
**Full traceback**:
```py
Traceback (most recent call last):
File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module>
from . import multiarray
File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module>
from . import overrides
File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module>
from numpy.core._multiarray_umath import (
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module>
import numpy as np
File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import core
File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module>
raise ImportError(msg)
ImportError:
```
---
**EDIT**: added the following to `settings.json` per James's suggestion:
```
"terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe",
"terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"],
"python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe"
``` | 2019/12/29 | [
"https://Stackoverflow.com/questions/59519338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10133797/"
] | Based on your comment, it looks like the conda environment is not being activated by VSCode. Selecting the Python interpreter points VSCode to the Python executable (python.exe), but sometimes environment variables that are set by Conda are used to tell packages with large backends where to look for the compiled binaries.
Conda does this to save on space. If you already have the DLLs in one environment, it will sometimes link to them when creating a new environment rather than installing them again. So the goal is to get VSCode to use Conda in the same way you would use it through the Start Menu: firing up the Anaconda Command prompt before starting Python.
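A quick way to test that hypothesis from inside VSCode is to run a couple of lines in the editor (a sketch; conda's activation scripts export `CONDA_PREFIX`, so it is only set when an environment has really been activated):
```
import os

# if this prints None, the environment was selected but never activated
print(os.environ.get("CONDA_PREFIX"))
print(os.environ.get("PATH", "").split(os.pathsep)[:3])
```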
In VSCode open your `settings.json` file for editing using the following operations:
```
(type) CTRL + SHIFT + P
(search for:) open settings
(click:) Preferences: Open Settings (JSON)
```
We are going to add 3 lines to the JSON file. The first tells VSCode to use a Windows integrated shell. The second adds additional arguments when firing up the Windows shell that run each time; this is where we will activate the base Conda environment. (This is just copy/pasted from the Anaconda Command Prompt shortcut properties.) The third line lets VSCode know where your Conda executable is so it can properly change environments.
My Anaconda base environment is located at `C:\Anaconda3\`. You will need to modify the paths to your installation.
**settings.json**
```json
{
... # any other settings you have already added (remove this line)
"terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\cmd.exe",
"terminal.integrated.shellArgs.windows": ["/K", "C:\\Anaconda3\\Scripts\\activate.bat C:\\Anaconda3"],
"python.condaPath": "C:\\Anaconda3\\Scripts\\conda.exe"
}
```
Save the file, change your interpreter to the `base` conda environment, restart VSCode, change your interpreter again to `vsc`. | If you deactivate the Code Runner extension and make sure you select the appropriate conda environment using the [Python extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python) you will get a green play button instead of a white one. That green play button will use the environment you selected and thus should have numpy installed. | 17,047 |
21,870,728 | Hi, I am trying to run the multiprocessing example in the docs: <http://docs.python.org/3.4/library/concurrent.futures.html>, the one using prime numbers, but with a small difference.
I want to be able to call a function with multiple arguments. What I am doing is matching small pieces of text (in a list around 30k long) to a much larger piece of text and returning where in the larger string the smaller strings start.
I can do this serially like this:
```
matchList = []
for pattern in patterns:
    # Approximate pattern matching
    patternStartingPositions = processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray)
    # Now add each starting position found onto our master list.
    for startPos in patternStartingPositions:
        matchList.append(startPos)
```
But I want to do this to speed things up:
```
matchList = []
with concurrent.futures.ProcessPoolExecutor() as executor:
    for pattern, res in zip(patterns, executor.map(processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray), patterns)):
        print('%d starts at: %s' % (pattern, res))
```
At this stage I've just got the print call there because I can't get the line above, the invocation of the processes to work.
The only real difference between what I want to do and the example code is that my function takes 7 arguments and I have no idea how to do it, spent half the day on it.
The call above generates this error:
> UnboundLocalError: local variable 'pattern' referenced before assignment.
Which makes sense.
But then if I leave out that first argument, which is the one that changes with each call, and leave out the first parameter to the `processPattern` function:
```
matchList = []
with concurrent.futures.ProcessPoolExecutor() as executor:
    for pattern, res in zip(patterns, executor.map(processPattern(numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray), patterns)):
        print('%d starts at: %s' % (pattern, res))
```
Then I get this error:
> TypeError: processPattern() missing 1 required positional argument: 'suffixArray'.
I don't know how to get the `pattern` argument in the call! | 2014/02/19 | [
"https://Stackoverflow.com/questions/21870728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3015449/"
] | To get the data into the right shape, simply use a generator expression (no need for `zip` at all) and use `submit` rather than `map`:
```
((pattern, executor.submit(processPattern, pattern, ...)) for pattern in patterns)
```
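An alternative sketch is to freeze the six constant arguments with `functools.partial`, which keeps `executor.map` usable inside the same `with` block (this assumes `processPattern` accepts those arguments under the keyword names used here):
```
from functools import partial

work = partial(processPattern,
               numMismatchesAllowed=numMismatchesAllowed,
               transformedText=transformedText,
               charToIndex=charToIndex,
               countMatrix=countMatrix,
               firstOccurrence=firstOccurrence,
               suffixArray=suffixArray)
# partial objects pickle cleanly, so they work with ProcessPoolExecutor
for pattern, res in zip(patterns, executor.map(work, patterns)):
    print('%s starts at: %s' % (pattern, res))
```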
To ensure that everything gets executed on the pool (instead of immediately), do not invoke the `processPattern` function as you are doing in your example, but instead pass it in as the first argument to `.submit`. The fixed version of your code would be:
```
with concurrent.futures.ProcessPoolExecutor() as executor:
    for pattern, res in ((pattern, executor.submit(processPattern, pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray)) for pattern in patterns):
        print('%d starts at: %s' % (pattern, res.result()))
``` | A Python *for* loop behaves functionally: reassigning the loop variable does not affect what is being iterated over.
```
with concurrent.futures.ProcessPoolExecutor() as executor:
    def work(pattern):
        return processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray)

    results = executor.map(work, patterns)
    for pattern, res in zip(patterns, results):
        print('%d starts at: %s' % (pattern, res))
```
In fact, a *for* loop that does not use *continue* or *break* instructions works just like a map function. That is:
```
for i in something:
    work(i)
```
Is equivalent to
```
map(work, something)
``` | 17,056 |
57,354,747 | I am trying to add a package to PyPI so I can install it with pip. I am trying to upload it using `twine upload dist/*`.
This causes me to get multiple SSL errors such as `raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='upload.pypi.org', port=443): Max retries exceeded with url: /legacy/ (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))`.
I am using a school laptop and I presume that this is something my administrator has done; however, I can install things with pip by using `pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org`.
I was wondering if there was another way to add my package to PyPI? | 2019/08/05 | [
"https://Stackoverflow.com/questions/57354747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9868018/"
] | My guess is your school has something in place where they are replacing the original cert with their own. You could maybe get around it using `--cert` and referencing the path to your school's cert, but I think an easier workaround is to copy the files to a non-school computer and upload from there. | This could be a number of things, including an out-of-date version of `twine`, or (more likely) an out-of-date version of OpenSSL. Some possible solutions are listed here: <https://github.com/pypa/twine/issues/273> | 17,059
48,313,388 | I am trying to get selenium working on my headless raspberry pi with firefox. I have it working fine on Windows with chrome. Here are my versions:
```
uname -a Linux megabyte.thompco.com 4.9.59-v7+ #1047 SMP Sun Oct 29
12:19:23 GMT 2017 armv7l GNU/Linux
which firefox
/usr/bin/firefox
firefox --version
Mozilla Firefox 52.5.2
./geckodriver_32 --version
geckodriver 0.19.1
The source code of this program is available from
testing/geckodriver in https://hg.mozilla.org/mozilla-central.
This program is subject to the terms of the Mozilla Public License 2.0.
You can obtain a copy of the license at https://mozilla.org/MPL/2.0/.
```
I think I have compatible versions of the driver and firefox (this seems to work):
```
./geckodriver_32 -b /usr/bin/firefox
1516245181824 geckodriver INFO geckodriver 0.19.1
1516245181881 geckodriver INFO Listening on 127.0.0.1:4444
```
When I run the following code:
```
def __init__(self, tag, user_name, password, driver_location, headless):
    logger = logging_utils.get_logger()
    logging_utils.start_function(logger, user_name=user_name)
    self.tag = tag
    self.user_name = user_name
    self.password = password
    self.cards = []
    driver_options = Options()
    driver = None
    try:
        if "chrome" in driver_location.lower():
            if headless:
                driver_options.add_argument("--headless")
            driver = webdriver.Chrome(executable_path=os.path.abspath("chromedriver.exe"),
                                      chrome_options=driver_options)
        elif "gecko" in driver_location.lower():
            binary = FirefoxBinary("/usr/bin/firefox")
            driver_options.binary = binary
            profile = webdriver.FirefoxProfile()
            driver_options.profile = profile
            driver_options.set_headless(headless)
            driver = webdriver.Firefox(firefox_binary=binary,
                                       firefox_profile=profile,
                                       executable_path=os.path.abspath(driver_location),
                                       firefox_options=driver_options)
```
I get this error:
```
Traceback (most recent call last):
File "/mnt/usbdrive/python/AmexOfferChecker/amexParser.py", line 105, in __init__
firefox_options=driver_options)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/firefox/webdriver.py", line 158, in __init__
keep_alive=True)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 154, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 243, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute
self.error_handler.check_response(response)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 237, in check_response
raise exception_class(message, screen, stacktrace)
SessionNotCreatedException: Message: Unable to find a matching set of capabilities
```
Any suggestions would be most welcome!
I have modified my "gecko" section to look like this:
```
options = Options()
options.add_argument('-headless')
print driver_location
print os.path.abspath(driver_location)
driver = Firefox(executable_path=os.path.abspath(driver_location),
                 firefox_options=options)
print "Driver has been loaded!"
```
Now I get this error:
```
geckodriver_32
/mnt/usbdrive/python/AmexOfferChecker/geckodriver_32
Traceback (most recent call last):
File "/mnt/usbdrive/python/AmexOfferChecker/amexParser.py", line 106, in __init__
firefox_options=options)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/firefox/webdriver.py", line 158, in __init__
keep_alive=True)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 154, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 243, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute
self.error_handler.check_response(response)
File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 237, in check_response
raise exception_class(message, screen, stacktrace)
WebDriverException: Message: Process unexpectedly closed with status: 1
```
I did see this bug:
On Linux, certain libraries are currently required on your system even though headless mode doesn't use them — because Firefox links against them. See [bug 1372998](https://bugzilla.mozilla.org/show_bug.cgi?id=1372998) for more details and progress towards a fix.
I added the suggested libraries:
```
sudo apt-get install libgtk-3-0 libdbus-glib-1-2 xvfb
```
but I am still getting the same exception.
I have tried this also with the same error (**NOTE** that the debug file is created, but empty):
```
options = Options()
options.add_argument('--headless')
profile = webdriver.FirefoxProfile()
profile.set_preference("logs/webdriver.log", "/tmp/firefox_console")
binary = FirefoxBinary(firefox_path="/usr/bin/firefox",
                       log_file=open("/tmp/firefox_output", "wb"))
driver = webdriver.Firefox(firefox_profile=profile,
                           firefox_binary=binary,
                           options=options,
                           executable_path=os.path.abspath(driver_location))
```
Finally I got logging turned on for selenium (does this mean anything to anyone?):
```
2018-01-24 22:51:00,078 - selenium.webdriver.remote.remote_connection 480 -DEBUG - POST http://127.0.0.1:45413/session {"capabilities": {"alwaysMatch": {"acceptInsecureCerts": true, "browserName": "firefox", "moz:firefoxOptions": {"args": ["headless"]}}, "firstMatch": [{}]}, "desiredCapabilities": {"acceptInsecureCerts": true, "browserName": "firefox", "moz:firefoxOptions": {"args": ["headless"]}}}
2018-01-24 22:51:00,944 - selenium.webdriver.remote.remote_connection 567 -DEBUG - Finished Request {"value":{"error":"unknown error","message":"Process unexpectedly closed with status: 1","stacktrace":"stack backtrace:\n 0: 0x55d797 - backtrace::backtrace::trace::hc4bd56a2f176de7e\n 1: 0x55d8ff - backtrace::capture::Backtrace::new::he3b2a15d39027c46\n 2: 0x4b7f4b - webdriver::error::WebDriverError::new::ha0fbd6d1a1131b43\n 3: 0x4bcb57 - geckodriver::marionette::MarionetteHandler::create_connection::hf0532ddb9e159684\n 4: 0x4a14cb - <webdriver::server::Dispatcher<T, U>>::run::h2119c674d7b88193\n 5: 0x47fcbf - std::sys_common::backtrace::__rust_begin_short_backtrace::h21d98a9ff86d4c25\n 6: 0x4871cf - std::panicking::try::do_call::h5cff0c9b18cfdbba\n 7: 0x606237 - panic_unwind::__rust_maybe_catch_panic\n at /checkout/src/libpanic_unwind/lib.rs:99\n 8: 0x4999e7 - <F as alloc::boxed::FnBox<A>>::call_box::h413eb1d9d9f1c473\n 9: 0x6000d3 - alloc::boxed::{{impl}}::call_once<(),()>\n at /checkout/src/liballoc/boxed.rs:692\n - std::sys_common::thread::start_thread\n at /checkout/src/libstd/sys_common/thread.rs:21\n - std::sys::imp::thread::{{impl}}::new::thread_start\n at /checkout/src/libstd/sys/unix/thread.rs:84"}}
2018-01-24 22:51:00,947 - main.main 38 -WARNING - Problem (Message: Process unexpectedly closed with status: 1
```
Here is the crux of the error (interesting that it is reported as a DEBUG). Does anyone have any suggestions?
```
2018-01-24 22:51:02,863 - selenium.webdriver.remote.remote_connection 567 -DEBUG - Finished Request
{"value":
{"error":"unknown error","message":"Process unexpectedly closed with status: 1","stacktrace":"stack backtrace:
0: 0x576797 - backtrace::backtrace::trace::hc4bd56a2f176de7e
1: 0x5768ff - backtrace::capture::Backtrace::new::he3b2a15d39027c46
2: 0x4d0f4b - webdriver::error::WebDriverError::new::ha0fbd6d1a1131b43
3: 0x4d5b57 - geckodriver::marionette::MarionetteHandler::create_connection::hf0532ddb9e159684
4: 0x4ba4cb - <webdriver::server::Dispatcher<T, U>>::run::h2119c674d7b88193
5: 0x498cbf - std::sys_common::backtrace::__rust_begin_short_backtrace::h21d98a9ff86d4c25
6: 0x4a01cf - std::panicking::try::do_call::h5cff0c9b18cfdbba
7: 0x61f237 - panic_unwind::__rust_maybe_catch_panic
at /checkout/src/libpanic_unwind/lib.rs:99
8: 0x4b29e7 - <F as alloc::boxed::FnBox<A>>::call_box::h413eb1d9d9f1c473
9: 0x6190d3 - alloc::boxed::{{impl}}::call_once<(),()>
at /checkout/src/liballoc/boxed.rs:692
- std::sys_common::thread::start_thread
at /checkout/src/libstd/sys_common/thread.rs:21
- std::sys::imp::thread::{{impl}}::new::thread_start
at /checkout/src/libstd/sys/unix/thread.rs:84"
}
}
``` | 2018/01/18 | [
"https://Stackoverflow.com/questions/48313388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1039860/"
] | Finally got this to work, but with chromedriver and chrome.
You will have to install chrome first:
```
sudo apt-get install chromium-browser
```
Next, download the Debian package from here:
<https://packages.debian.org/stretch/armhf/chromium-driver/download>
Unpack the file "chromedriver":
```
mkdir tmp
dpkg-deb -R chromium-driver_63.0.3239.84-1_deb9u1_armhf.deb tmp
cp /usr/local/bin/chromedriver .
mv chromedriver chromedriver_arm_64
```
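Putting it together, a minimal sketch of the driver setup (the path and the Selenium-3-era `chrome_options` keyword are assumptions carried over from the question's own code):
```
import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

driver_options = Options()
driver_options.add_argument("headless")  # no display attached on the Pi
driver = webdriver.Chrome(executable_path=os.path.abspath("chromedriver_arm_64"),
                          chrome_options=driver_options)
```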
The rest of the code is unchanged. Note that
```
driver_options.add_argument("headless")
```
is fine ("--headless" may work as well - I haven't tried it). I sure hope that someone finds this before spending as much time on this as I have! | You can also try declaring the DISPLAY variable; it works especially for remote connections.
Run this command on the terminal:
```
export DISPLAY=:0.0
``` | 17,060 |
36,426,547 | I am using Ubuntu 14.04
I wanted to install the package "requests" to use in Python 3.5, so I installed it using pip3. I could see it in /usr/lib/python3.4, but when trying to actually execute scripts with Python 3.5 I always got "ImportError: No module named 'requests'".
OK, so I figured perhaps that was because the package is not in python3.5 but in python3.4. Therefore, I tried to uninstall and install it again, but it just kept popping up where I didn't want it (not to mention, when I ran apt-get remove pip3-requests, it actually removed pip3 for me as well lol). Therefore, I tried physically removing python3.4 from usr/lib and usr/local/lib in order to see if maybe pip3 was confused and installed packages in the wrong directories.
I'm afraid it was not a good idea... when I now run e.g.
`sudo pip3 install requests`
I get the following error:
```
Could not find platform independent libraries <prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ImportError: No module named 'encodings'
```
Is there any way to fix this now? And to actually use the requests package?
When I use
```
sudo apt-get install python3-pip
```
It works and starts unpacking etc. but then I get a long error that starts with:
```
Setting up python3.4 (3.4.3-1ubuntu1~14.04.3)
Could not find platform independent libraries <prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ImportError: No module named 'encodings'
Aborted
dpkg: error processing package python3.4 (--configure):
subprocess installed post-installation script returned error exit status 134
dpkg: dependency problems prevent configuration of python3:
```
(...)
and ends with
```
python3 depends on python3.4 (>= 3.4.0-0~); however:
Package python3.4 is not configured yet.
dpkg: error processing package python3-wheel (--configure):
dependency problems - leaving unconfigured
E: Sub-process /usr/bin/dpkg returned an error code (1)
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36426547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4680896/"
] | **First of all, it is a very bad idea to remove your *system* Python 3 in Ubuntu (which 3.4 is in recent subrevisions of Trusty LTS).** That is because it is a **vital part of the system**. If you run the command `apt-cache rdepends python3`, you'd see that packages such as `ubuntu-minimal`, `ubuntu-release-upgrader-core`, `lsb-release`, `lsb-core`, `ubuntu-core-libs` and so on all depend on Ubuntu's version of Python 3 being installed (and this is the **python3.4** in Ubuntu 14.04.4). If you force-remove python3.4 by hand, you've ruined your system.
It might very well be that you now have to reinstall the whole operating system, unless you manage to reinstall all the system `.deb` packages that put data in `/usr/lib/python3.4`, and especially so if you did it with force. It can make your system even unbootable, so do not reboot that computer before you've successfully reinstalled Python 3... (actually, I am not sure how to do it safely, since it seems you've forcefully removed all system dependencies from /usr/lib).
---
You should try to reinstall python3.4
```
sudo apt-get install --reinstall python3.4
```
But now the bigger problem is that you're still missing all sorts of dependencies for your system programs.
Do note that `pip` should also be available as a *module*. Thus, to ensure that you install for Python 3.5, you can do
```
sudo python3.5 -mpip install requests
```
The `pip3` is a wrapper for a `pip` that installs to the *system* Python 3 version (3.4 in your case). | Ubuntu 14.04LTS uses the [*trusty* package list](http://packages.ubuntu.com/trusty/). That repository comes with [Python 3.4.0-0ubuntu2](http://packages.ubuntu.com/trusty/python3). So the `pip` contained in `python3-pip` belongs to *that* version: 3.4.
As such, when using Python 3.5, packages installed using Python 3.4 and that version’s `pip` will not be available.
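A quick way to see which interpreter a given install lands in is to ask each interpreter directly (a sketch; `site.getsitepackages()` is available on system Pythons, though not in every virtualenv):
```
# run this once as `python3.4 check.py` and once as `python3.5 check.py`
import site
import sys

print(sys.version.split()[0])  # which interpreter is running
print(site.getsitepackages())  # where pip for THIS interpreter installs
```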
I don’t know how you installed Python 3.5 on your system, but you should use that way to install `pip` for that version as well. If you compiled it from source yourself, you should see the [install instructions for pip](https://pip.pypa.io/en/stable/installing/) on how to get it installed for Python 3.5. | 17,061 |
6,397,344 | This is Python code implementing stacks as linked lists. Is implementing them this way efficient?
```
data = [] # data storage for stacks represented as linked lists
stack = [-1, -1, -1] # pointers to each of three stacks (-1 is the "null" pointer)
free = -1 # pointer to list of free stack nodes to be reused
def allocate(value):
    ''' allocate a new node and return a pointer to it '''
    global free
    global data
    if free == -1:
        # free list is empty, need to expand data list
        data += [value, -1]
        return len(data) - 2
    else:
        # pop a node off the free list and reuse it
        temp = free
        free = data[temp+1]
        data[temp] = value
        data[temp+1] = -1
        return temp

def release(ptr):
    ''' put node on the free list '''
    global free
    temp = free
    free = ptr
    data[free+1] = temp

def push(n, value):
    ''' push value onto stack n '''
    global free
    global data
    temp = stack[n]
    stack[n] = allocate(value)
    data[stack[n]+1] = temp

def pop(n):
    ''' pop a value off of stack n '''
    value = data[stack[n]]
    temp = stack[n]
    stack[n] = data[stack[n]+1]
    release(temp)
    return value

def list(ptr):
    ''' list contents of a stack '''
    while ptr != -1:
        print data[ptr],
        ptr = data[ptr+1]
    print

def list_all():
    ''' list contents of all the stacks and the free list '''
    print stack, free, data
    for i in range(3):
        print i, ":",
        list(stack[i])
    print "free:",
    list(free)
push(0,"hello")
push(1,"foo")
push(0,"goodbye")
push(1,"bar")
list_all()
pop(0)
pop(0)
push(2,"abc")
list_all()
pop(1)
pop(2)
pop(1)
list_all()
```
Is there any way to do this more efficiently than this? Would implementing it this way in C/C++ be efficient?
"https://Stackoverflow.com/questions/6397344",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/804287/"
] | In python, a list is a stack:
```
>>> l = [1, 2, 3, 4, 5]
>>> l.pop()
5
>>> l.pop()
4
>>> l.append(9)
>>> l
[1, 2, 3, 9]
>>> l.pop()
9
>>> l.pop()
3
>>> l.append(12)
>>> l
[1, 2, 12]
```
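Applied to the original three-stack setup, a minimal sketch looks like this (no free list or pointer bookkeeping needed):
```
# three independent stacks; list.append/list.pop are amortized O(1)
stacks = [[], [], []]

def push(n, value):
    stacks[n].append(value)

def pop(n):
    return stacks[n].pop()

push(0, "hello")
push(1, "foo")
print(pop(0))  # hello
```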
Although it may be an... entertaining exercise to implement a C-style linked list in Python, it is unnecessary, and likely to be very slow. Just use a list instead. | A far better solution would be to use a real linked list rather than the stack-of-nodes scheme above. The code given is a stack implementation built on top of a simulated linked list, which I believe is the norm in Python, but in C/C++ you can use an actual linked list for an efficient implementation.
Sample code in C would be as follows:
```
#include <stdio.h>
#include <stdlib.h>
struct node{
    int data;
    struct node *next;
};

struct node* add(struct node *head, int data){
    struct node *tmp;
    if(head == NULL){
        head = (struct node *)malloc(sizeof(struct node));
        if(head == NULL){
            printf("Error! memory is not available\n");
            exit(0);
        }
        head->data = data;
        head->next = head;
    }else{
        tmp = head;
        while (tmp->next != head)
            tmp = tmp->next;
        tmp->next = (struct node *)malloc(sizeof(struct node));
        if(tmp->next == NULL)
        {
            printf("Error! memory is not available\n");
            exit(0);
        }
        tmp = tmp->next;
        tmp->data = data;
        tmp->next = head;
    }
    return head;
}

void printlist(struct node *head)
{
    struct node *current;
    current = head;
    if(current != NULL)
    {
        do
        {
            printf("%d\t", current->data);
            current = current->next;
        } while (current != head);
        printf("\n");
    }
    else
        printf("The list is empty\n");
}

void destroy(struct node *head)
{
    struct node *current, *tmp;
    current = head->next;
    head->next = NULL;
    while(current != NULL) {
        tmp = current->next;
        free(current);
        current = tmp;
    }
}

int main(void)
{
    struct node *head = NULL;
    head = add(head, 1);  /* 1 */
    printlist(head);
    head = add(head, 20); /* 1 20 */
    printlist(head);
    head = add(head, 10); /* 1 20 10 */
    printlist(head);
    head = add(head, 5);  /* 1 20 10 5 */
    printlist(head);
    destroy(head);
    getchar();
    return 0;
}
```
In the above example, if you create an array of three pointers, each pointing to its own head, you can create three linked lists. This would handle the space with maximum efficiency, and there is no need to check for free nodes either. | 17,063
28,223,747 | I'm new to Python and want help cleaning up my code.
I had to write a function that takes a string and returns the first half in lowercase and the second half in uppercase.
This is my code, but I can't help thinking there's a cleaner way to write it.
```
def sillycase(string):
    x = len(string) / 2
    y = round(x)
    # return (rather than print) so the result matches the description above
    return string[:y].lower() + string[y:].upper()
``` | 2015/01/29 | [
"https://Stackoverflow.com/questions/28223747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4459432/"
] | You can use `find -exec` for this:
```
find /FolderA -type f -exec sed -i 's/wordA/wordB/g' {} +
``` | `find` would be the way to go.
A bash alternative:
```
shopt -s globstar
files=()
for file in FolderA/**; do # double asterisk is not a typo
    [[ -f "$file" ]] && files+=("$file")
done
sed -i 's/wordA/wordB/g' "${files[@]}"
``` | 17,068 |
55,454,514 | I'm using a Kubernetes inventory builder script found here: <https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py>
On `line 36`, the ruamel.yaml library is imported using the code `from ruamel.yaml import YAML`. This library can be found here: <https://pypi.org/project/ruamel.yaml/>
On my OSX device (`Mojave 10.14.3`), if I run `pip list`, I can clearly see the most up to date version of `ruamel.yaml`:
[](https://i.stack.imgur.com/iuu1g.png)
If I run `pip show ruamel.yaml`, I get the following output:
[](https://i.stack.imgur.com/WRdHY.png)
I'm running the script with this command: `CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py 10.0.0.1 10.0.0.2 10.0.0.4 10.0.0.5`
Bizarrely, it returns the following error:
```
Traceback (most recent call last):
File "contrib/inventory_builder/inventory.py", line 36, in <module>
from ruamel.yaml import YAML
ModuleNotFoundError: No module named 'ruamel'
```
I have very little experience with Python, so I don't understand how this could be failing. Have I installed the library incorrectly or something? From the documentation on the `ruamel.yaml` project page, it looks like the script is calling the library as it should be.
Thanks in advance | 2019/04/01 | [
"https://Stackoverflow.com/questions/55454514",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/864245/"
] | `pip` is set to point to the Python 2 installation. To install the library under Python 3, do `pip3 install ruamel.yaml`. | You're using Python 3, but the package is installed for Python 2. Go to the directory where your Python 3 is installed, navigate to Scripts, and use the pip in there to install the needed library. | 17,069
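A hedged way to confirm the interpreter/package mismatch both answers describe is to ask the failing interpreter where it lives and whether it can see the package (a sketch, nothing project-specific assumed):

```python
# Run this with the same interpreter that raises the error, e.g. python3
import sys
print(sys.executable)  # which Python binary is actually running

try:
    import ruamel.yaml
    print("ruamel.yaml found at", ruamel.yaml.__file__)
except ImportError:
    # install for *this* interpreter, bypassing whichever `pip` is on PATH:
    print("missing - try: {} -m pip install ruamel.yaml".format(sys.executable))
```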
466,321 | How can I convert from a unix timestamp (say 1232559922) to a fractional julian date (2454853.03150).
I found a website ( <http://aa.usno.navy.mil/data/docs/JulianDate.php> ) that performs a similar calculation, but I need to do it programmatically.
Solutions can be in C/C++, python, perl, bash, etc... | 2009/01/21 | [
"https://Stackoverflow.com/questions/466321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/280/"
] | The Unix epoch (zero-point) is January 1, 1970 GMT. That corresponds to the Julian day of 2440587.5
So, in pseudo-code:
```
function float getJulianFromUnix( int unixSecs )
{
return ( unixSecs / 86400.0 ) + 2440587.5;
}
``` | I know that this is an old post, but I'll just say ...
The answer given by Jason Cohen is a good approximation of the conversion.
There is a problem though that relates to the number of seconds in one day. A day is not -exactly- 86400 seconds long, and periodically seconds are added to days in order to keep time synchronized with the various observable standards. These are called Leap Seconds (<https://en.wikipedia.org/wiki/Leap_second>). Leap seconds are added to UTC in order to keep it within 1 second of UT1.
It stands to reason that as more and more time elapses since Jan 1, 1970, the simple conversion above will accrue more and more error from "actual observable time." Between 1972 and 2013, 25 leap seconds were added.
Part of the beauty and the simplicity of Julian Day numbers is that they don't represent date strings at all. They are just a count of elapsed time since the start of the Julian Epoch, much like POSIX time is a continuous count of seconds since the POSIX Epoch. The only problem that exists, then, is when you try to map a Julian Day number to a localized date string.
If you need a date string that is accurate to within a minute (in 2013), then you'll need an algorithm that can account for leap seconds. | 17,075 |
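A direct Python translation of the pseudo-code in the first answer (ignoring the leap-second caveat raised in the second answer) might look like:

```python
def julian_from_unix(unix_secs):
    """Convert a Unix timestamp (seconds since 1970-01-01 UTC) to a fractional Julian date."""
    return unix_secs / 86400.0 + 2440587.5  # 86400.0 keeps the division floating-point

print(julian_from_unix(1232559922))  # -> 2454853.2398...
```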
24,235,241 | I recently installed sublime text 2 to try it out before I decide to get sublime text 3 but I can't properly run any code from it. I've hit Ctrl + B and I get an output like this.
```
[Error 2] The system cannot find the file specified
[cmd: [u'python', u'-u', u'C:\\Users\\Jeff\\Desktop\\Personal codes\\print.py']]
[dir: C:\Users\Jeff\Desktop\Personal codes]
[path: C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Program Files(x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Windows Live\Shared]
[Finished]
```
I've looked in my roaming folder and found sublime text 2 because another post mentioned editing a file in the python folder there but no such folder exists in Roaming\Sublime Text 2 all I have is Installed Packages, Packages, Pristine Packages and Settings. Am I missing something or is it something obvious that I should know? | 2014/06/16 | [
"https://Stackoverflow.com/questions/24235241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3697905/"
] | Instead of adding python to the path, I prefer simply specifying the full path to python in the sublime build. Python.exe is probably installed in one of these (or something similar)
```
C:/Python
C:/Program Files/Python
C:/Program Files (x86)/Python
etc...
```
Once you have found it (let's say it's in C:\Program Files (x86)\Python27), edit the sublime_build for python. Here is the build I use:
```
{
"cmd": ["C:\\Program Files (x86)\\Python27\\python.exe","-u","$file"],
"selector": "source.python"
}
```
for me, this file is in
```
Sublime Text\Data\Packages\Python\Python.sublime-build
``` | Windows is unable to find your python installation. When you run a command like:
```
python <your_file.py>
```
the first `python` tells your system to find wherever your python binary is and try to run some command by that name. By looking over the path that was echoed, it doesn't look like you actually have your python binary on your system path.
If you're uncertain as to how to add python to your `path`, check out this superuser question: <https://superuser.com/questions/143119/how-to-add-python-to-the-windows-path> | 17,080 |
35,931,198 | I searched the forum and all answers are python or C+ related, this is for ruby.
I'm trying to figure out how to make the below program prompt the user for an item in the array by typing a number 1-4 (so the position wouldn't start from 0 in the user's eyes).
It's probably a simple fix, but I am new to this... I appreciate any time and help.
```
array = []
puts "please add to the array 4 times"
4.times do
array << gets.chomp
end
puts "#{array}"
puts "Select a position in the array by typing a singular number from 1-4"
``` | 2016/03/11 | [
"https://Stackoverflow.com/questions/35931198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5281054/"
] | You can try something like this:
```
array = []
puts "please add to the array 4 times"
4.times do
array << gets.chomp
end
puts "#{array}"
loop do
puts "Select a position in the array by typing a singular number from 1-4"
ans = gets.chomp.to_i
if ans > 0 && ans <= array.length
puts "The element at position #{ans} is " + array[ans-1]
break
else
puts "You have to pick a number between 1 & 4, try again."
end
end
``` | You can get the index by combining `gets.chomp` (reads a line of user input and removes the trailing newline character) and `to_i` (convert to integer).
Combine this with the ability to access an array's element at a specific index using the `array[index_integer]` method.
To piece it together:
```
array = ["first_item", "second_item", "third_item"]
puts "enter the array index: "
index = gets.chomp.to_i
adjusted_index = index - 1
value_at_index = array[adjusted_index]
puts "The element at that index is #{value_at_index}"
```
However be forewarned that the index will 'loop around' to -1 if a value of 0 is given.
For example, if the user enters 0, then `adjusted_index` will be -1 and the last element of the array will be displayed. | 17,081 |
17,806,673 | Is there a canonical location to put self-written packages? My own search only yielded a blog post about [where to put version-independent pure Python packages](http://pythonsimple.noucleus.net/python-install/python-site-packages-what-they-are-and-where-to-put-them) and a [SO question for the canonical location under Linux](https://stackoverflow.com/questions/16196268/where-should-i-put-my-own-python-module-so-that-it-can-be-imported), while I am working on Windows.
My use case is that I would like to be able to import my own packages during a IPython session just like any site-package, no matter in which working directory I started the session. In Matlab, the corresponding folder for example is simply `C:/Users/ojdo/Documents/MATLAB`.
```
import mypackage as mp
mp.awesomefunction()
...
``` | 2013/07/23 | [
"https://Stackoverflow.com/questions/17806673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2375855/"
] | Thanks to the [two](http://docs.python.org/2/install/#how-installation-works) [additional](http://docs.python.org/2/install/#alternate-installation-the-home-scheme) links, I found not only the intended answer to my question, but also a solution that I like even more and that - ironically - was also explained in my first search result, but obfuscated by all the version-(in)dependent site-package lingo.
Answer to original question: default folder
-------------------------------------------
I wanted to know if there was a canonical (as in "default") location for my self-written packages. And that exists:
```
>>> import site
>>> site.USER_SITE
'C:\\Users\\ojdo\\AppData\\Roaming\\Python\\Python27\\site-packages'
```
And for a Linux and Python 3 example:
```
ojdo@ubuntu:~$ python3
>>> import site
>>> site.USER_SITE
'/home/ojdo/.local/lib/python3.6/site-packages'
```
The docs on [user scheme package installation](http://docs.python.org/2/install/#alternate-installation-the-user-scheme) state that folder `USER_SITE` - if it exists - will be automatically added to your Python's `sys.path` upon interpreter startup, no manual steps needed.
---
Bonus: custom directory for own packages
----------------------------------------
1. Create a directory anywhere, e.g. `C:\Users\ojdo\Documents\Python\Libs`.
2. Add the file `sitecustomize.py` to the site-packages folder of the Python installation, i.e. in `C:\Python27\Lib\site-packages` (for all users) or `site.USER_SITE` (for a single user).
3. This file then is filled with the following code:
```
import site
site.addsitedir(r'C:\Users\ojdo\Documents\Python\Libs')
```
4. Voilà, the new directory now is automatically added to `sys.path` in every (I)Python session.
How it works: the package [site](http://docs.python.org/2/library/site.html), which is automatically imported during every start of Python, also tries to import the package `sitecustomize` for custom package path modifications. In this case, this dummy package consists of a script that adds the personal package folder to the Python path. | I'd use the home scheme for this:
<http://docs.python.org/2/install/#alternate-installation-the-home-scheme> | 17,082 |
25,403,110 | I am getting started with Django through [this](http://www.youtube.com/watch?v=3DccH9AMwFQ) beautiful video tutorial. On Tutorial 15 of the video series, there is a database migration using **south**. But when I do `python manage.py migrate signups`, I get a whole lot of errors. The first error was:
```
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 164, i
n _run_migration
for name, db in south.db.dbs.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'
```
I changed `iteritems()` to `items()` to fix that, but a whole lot of other errors keep popping up. My guess is that it has to do with the versions in action - `South==1.0 Django == 1.6.5 and Python 3.4.1`
Here is the content of my *models.py* and `for_you, timestamp, updated` are the attributes added after migration. The commented out attributes were there originally.
```
from django.db import models
class SignUp(models.Model):
for_you = models.BooleanField(default = True)
first_name = models.CharField(max_length = 120, null=True, blank=True)
last_name = models.CharField(max_length = 120, null=True, blank=True)
email = models.EmailField()
timestamp = models.DateTimeField(auto_now_add = True, auto_now = False)
updated = models.DateTimeField(auto_now_add = False, auto_now = True, default=True)
#timestamp = models.DateTimeField(auto_now_add = False, auto_now = True)
#timestamp = models.DateTimeField(auto_now_add = True, auto_now = False)
def __str__(self):
        return self.email
```
The autogenerated **migrations/0002_auto__add_field_signup_for_you__add_field_signup_updated.py** looks like
```
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'SignUp.for_you'
db.add_column('signups_signup', 'for_you',
self.gf('django.db.models.fields.BooleanField')(default=True),
keep_default=False)
# Adding field 'SignUp.updated'
db.add_column('signups_signup', 'updated',
self.gf('django.db.models.fields.DateTimeField')(blank=True, default=True, auto_now=True),
keep_default=False)
def backwards(self, orm):
# Deleting field 'SignUp.for_you'
db.delete_column('signups_signup', 'for_you')
# Deleting field 'SignUp.updated'
db.delete_column('signups_signup', 'updated')
models = {
'signups.signup': {
'Meta': {'object_name': 'SignUp'},
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'first_name': ('django.db.models.fields.CharField', [], {'blank': 'True', 'null': 'True', 'max_length': '120'}),
'for_you': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_name': ('django.db.models.fields.CharField', [], {'blank': 'True', 'null': 'True', 'max_length': '120'}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'blank': 'True', 'auto_now_add': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'blank': 'True', 'default': 'True', 'auto_now': 'True'})
}
}
complete_apps = ['signups']
```
And here is the complete error log:
```
Running migrations for signups:
- Migrating forwards to 0002_auto__add_field_signup_for_you__add_field_signup_u
pdated.
> signups:0002_auto__add_field_signup_for_you__add_field_signup_updated
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 175, i
n _run_migration
migration_function()
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 60, in
<lambda>
return (lambda: direction(orm))
File "D:\Projects\skillshare\src\signups\migrations\0002_auto__add_
field_signup_for_you__add_field_signup_updated.py", line 19, in forwards
keep_default=False)
File "C:\Python34\lib\site-packages\south\db\sqlite3.py", line 35, in add_colu
mn
field_default = "'%s'" % field.get_db_prep_save(default, connection=self._ge
t_connection())
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
350, in get_db_prep_save
prepared=False)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
911, in get_db_prep_value
value = self.get_prep_value(value)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
895, in get_prep_value
value = self.to_python(value)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
854, in to_python
parsed = parse_datetime(value)
File "C:\Python34\lib\site-packages\django\utils\dateparse.py", line 67, in pa
rse_datetime
match = datetime_re.match(value)
TypeError: expected string or buffer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line
399, in execute_from_command_line
utility.execute()
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line
392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python34\lib\site-packages\django\core\management\base.py", line 242,
in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python34\lib\site-packages\django\core\management\base.py", line 285,
in execute
output = self.handle(*args, **options)
File "C:\Python34\lib\site-packages\south\management\commands\migrate.py", lin
e 111, in handle
ignore_ghosts = ignore_ghosts,
File "C:\Python34\lib\site-packages\south\migration\__init__.py", line 220, in
migrate_app
success = migrator.migrate_many(target, workplan, database)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 256, i
n migrate_many
result = migrator.__class__.migrate_many(migrator, target, migrations, datab
ase)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 331, i
n migrate_many
result = self.migrate(migration, database)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 133, i
n migrate
result = self.run(migration, database)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 113, i
n run
dry_run.run_migration(migration, database)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 192, i
n run_migration
self._run_migration(migration)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 178, i
n _run_migration
raise exceptions.FailedDryRun(migration, sys.exc_info())
south.exceptions.FailedDryRun: ! Error found during dry run of '0002_auto__add_
field_signup_for_you__add_field_signup_updated'! Aborting.
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 175, i
n _run_migration
migration_function()
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 60, in
<lambda>
return (lambda: direction(orm))
File "D:\Projects\skillshare\src\signups\migrations\0002_auto__add_
field_signup_for_you__add_field_signup_updated.py", line 19, in forwards
keep_default=False)
File "C:\Python34\lib\site-packages\south\db\sqlite3.py", line 35, in add_colu
mn
field_default = "'%s'" % field.get_db_prep_save(default, connection=self._ge
t_connection())
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
350, in get_db_prep_save
prepared=False)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
911, in get_db_prep_value
value = self.get_prep_value(value)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
895, in get_prep_value
value = self.to_python(value)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
854, in to_python
parsed = parse_datetime(value)
File "C:\Python34\lib\site-packages\django\utils\dateparse.py", line 67, in pa
rse_datetime
match = datetime_re.match(value)
TypeError: expected string or buffer
``` | 2014/08/20 | [
"https://Stackoverflow.com/questions/25403110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2673433/"
] | There's the problem: you use a boolean as the default value for a `DateTime` column (see `default=True` on line 19 in your migration). That won't work. Just remove that `default=True` from your model and regenerate your migration.
You would probably need `null=True` on that column, or some time-based default value. | In your migration the `fields.DateTimeField` cannot have a boolean value (default=True).
You can edit your migrations set a datetime value
```
import datetime
...
default = datetime.datetime(2016, 2, 25, 16, 35, 0, 658000)  # year, month, day, hour, minute, second, microsecond
...
```
The default for `models.DateTimeField` should be `None` or a `datetime` object | 17,087
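A hedged sketch of the corrected model fields both answers point to (drop the boolean default; the `auto_now*` flags already supply the timestamps):

```python
from django.db import models

class SignUp(models.Model):
    # ... other fields unchanged ...
    timestamp = models.DateTimeField(auto_now_add=True)  # set once, on creation
    updated = models.DateTimeField(auto_now=True)        # refreshed on every save; no default=True
```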
28,779,395 | Here is a very simple piece of code to demonstrate the problem:
Main :
```python
import pdb
#pdb.set_trace()
import sys
import csv
sys.version_info
if sys.version_info[0] < 3:
from Tkinter import *
else:
from tkinter import *
from Untitled import *
main_window =Tk()
main_window.title("Welcome")
label = Label(main_window, text="Enter your current weight")
label.pack()
Current_Weight=StringVar()
Current_Weight.set("0.0")
entree1 = Entry(main_window,textvariable=Current_Weight,width=30)
entree1.pack()
bouton1 = Button(main_window, text="Enter", command= lambda evt,Current_Weight,entree1: get(evt,Current_Weight,entree1))
bouton1.pack()
```
and in another file, Untitled, I have the "get" function:
```python
def get (event,loot, entree):
loot=float(entree.get())
print(loot)
```
When i run the main i receive the following error :
>
> Exception in Tkinter callback
> Traceback (most recent call last):
> File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/idlelib/run.py", line 121, in main
> seq, request = rpc.request_queue.get(block=True, timeout=0.05)
> File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/queue.py", line 175, in get
> raise Empty
> queue.Empty
>
>
>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/tkinter/**init**.py", line 1533, in **call**
return self.func(\*args)
TypeError: () missing 3 required positional arguments: 'evt', 'Current\_Weight', and 'entree1'
How can I solve that?
I thought the lambda function allowed us to use some args in an event-dependent function.
"https://Stackoverflow.com/questions/28779395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4502651/"
] | The `command` lambda does not take any arguments at all; furthermore there is no `evt` that you can catch. A lambda can refer to variables outside it; this is called a closure. Thus your button code should be:
```
bouton1 = Button(main_window, text="Enter",
command = lambda: get(Current_Weight, entree1))
```
And your `get` should say:
```
def get(loot, entree):
loot = float(entree.get())
print(loot)
``` | Actually, you just need the Entry object entree1 as the lamda pass-in argument. Either statement below would work.
```
bouton1 = Button(main_window, text="Enter", command=lambda x = entree1: get(x))
bouton1 = Button(main_window, text="Enter", command=lambda : get(entree1))
```
with the function get defined as
```
def get(entree):
print(float(entree.get()))
``` | 17,088 |
17,349,928 | I understand that an RGB to HSV conversion should take RGB values 0-255 and convert to HSV values [0-360, 0-1, 0-1]. For example see this [converter in java](http://www.javascripter.net/faq/rgb2hsv.htm):
When I run matplotlib.colors.rgb_to_hsv on an image, it seems to output values [0-1, 0-1, 0-360] instead. However, I have used this function on an [image like this](http://www.tadpolewebworks.com/web/atomic/images/H02-HSV.jpg), and it seems to be working in the right order [H,S,V], just the V is too large.
Example:
```
In [1]: import matplotlib.pyplot as plt
In [2]: import matplotlib.colors as colors
In [3]: image = plt.imread("/path/to/rgb/jpg/image")
In [4]: print image
[[[126 91 111]
[123 85 106]
[123 85 106]
...,
In [5]: print colors.rgb_to_hsv(image)
[[[ 0 0 126]
[ 0 0 123]
[ 0 0 123]
...,
```
Those are not 0s, they're some number between 0 and 1.
Here is the definition from matplotlib.colors.rgb_to_hsv
```
def rgb_to_hsv(arr):
"""
convert rgb values in a numpy array to hsv values
input and output arrays should have shape (M,N,3)
"""
out = np.zeros(arr.shape, dtype=np.float)
arr_max = arr.max(-1)
ipos = arr_max > 0
delta = arr.ptp(-1)
s = np.zeros_like(delta)
s[ipos] = delta[ipos] / arr_max[ipos]
ipos = delta > 0
# red is max
idx = (arr[:, :, 0] == arr_max) & ipos
out[idx, 0] = (arr[idx, 1] - arr[idx, 2]) / delta[idx]
# green is max
idx = (arr[:, :, 1] == arr_max) & ipos
out[idx, 0] = 2. + (arr[idx, 2] - arr[idx, 0]) / delta[idx]
# blue is max
idx = (arr[:, :, 2] == arr_max) & ipos
out[idx, 0] = 4. + (arr[idx, 0] - arr[idx, 1]) / delta[idx]
out[:, :, 0] = (out[:, :, 0] / 6.0) % 1.0
out[:, :, 1] = s
out[:, :, 2] = arr_max
return out
```
I would use one of the other rgb_to_hsv conversions like colorsys, but this is the only vectorized Python one I have found. Can we figure this out? Do we need to report it on github?
Matplotlib 1.2.0, numpy 1.6.1, Python 2.7, Mac OS X 10.8 | 2013/06/27 | [
"https://Stackoverflow.com/questions/17349928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1276299/"
] | It works beautifully if, instead of unsigned int RGB values from 0 to 255, you feed it float RGB values from 0 to 1. It would be nice if the documentation specified this, or if the function tried to catch what seems to be a very likely human error. But you can get what you want simply by calling:
```
print colors.rgb_to_hsv(image / 255.0)  # divide by a float so integer arrays aren't floor-divided to zeros
``` | Take care, the source comment states input/output should be of dimension M,N,3, and the function fails for RGBA (M,N,4) images, e.g. imported png files. | 17,089 |
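Combining both answers, a hedged sketch that copes with PNGs read as RGBA and with integer-valued RGB arrays (the path is hypothetical):

```python
import matplotlib.pyplot as plt
import matplotlib.colors as colors

image = plt.imread("/path/to/image.png")  # hypothetical path
rgb = image[..., :3]                      # drop the alpha channel if present
if rgb.dtype.kind in "ui":                # integer array: scale to floats in [0, 1]
    rgb = rgb / 255.0
hsv = colors.rgb_to_hsv(rgb)
```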
65,408,099 | When I was practising lists & if statements in Python, I got stuck on a problem.
```
friends=["a","b","c"]
print("eklemek mi cikarmak mi istiyosunuz ?")
ans=(input())
if ans == 'add':
add=input("adding who ?")
friends.append(add)
if ans=='remove':
remove = input("removing who ?")
friends.remove(remove)
print(remove)
```
The code above works fine, but when I want to improve it to handle already-existing friends, I get stuck with this error on `if add in list`:
TypeError: argument of type 'type' is not iterable. The same goes for trying to remove a friend who isn't in the list.
```
friends=["a","b","c"]
print("add or remove ? ?")
ans=(input())
if ans == 'add':
add=input("adding who ? ?")
if add in friends:
print("you already added this person")
else :
friends.append(add)
if ans=='remove':
remove = input("removing who ?")
friends.remove(remove)
print(friends)
``` | 2020/12/22 | [
"https://Stackoverflow.com/questions/65408099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14871427/"
] | We had the same issue, and managed to fix it by searching for the exact minified function in the minified code (in this case `(0,o.useState`), then searching around that code to find some string or identifier that wasn't minified (we found a prop name that was a string) that we could use to locate the place in the source code.
There we saw that VS Code had auto-imported useState from the wrong place (`import { useEffect } from 'react/cjs/react.development'`).
(Also had to clear all react native cache to make this work) | I had a similar problem,
I realized that my Expo SDK version was an older one. I upgraded the Expo SDK and re-deployed my app, and the problem did not occur again. | 17,090
57,774,652 | This function:
```js
function print(){
console.log('num 1')
setTimeout(() => {
global.name = 'max'
console.log('num 2')
},9000);
console.log('num 3');
}
print();
console.log(global.name)
```
is printing this:
```
num 1
num 3
undefined
num 2
```
And I need to:
1. print `num 1`
2. wait until the 9 seconds have passed
3. set the `global.name` = `max`
4. print `num 2`
5. print `num 3`
6. `console.log(global.name)`
7. print `max` and not `undefined`
I wrote this code in Python and there it executes line by line,
because there is nothing called sync and async.
I need this code executed like Python (line by line). | 2019/09/03 | [
"https://Stackoverflow.com/questions/57774652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10489311/"
] | The error is caused by this line:
```
options['partitionKey'] = '/Structures'
```
You need to specify the specific value of the partition key here, not the column name. For example, my partition key is '/name', and the specific value in this document is 'A'.
[](https://i.stack.imgur.com/qvm0f.png)
Then your code looks like:
```
from azure.cosmos import cosmos_client
client = cosmos_client.CosmosClient("https://***.documents.azure.com:443/", {'masterKey': '***'})
options = {}
options['enableCrossPartitionQuery'] = True
options['maxItemCount'] = 5
options['partitionKey'] = 'A'
client.DeleteItem("dbs/db/colls/coll/docs/2", options)
``` | ```
import datetime
from azure.cosmos import CosmosClient  # this import was missing but CosmosClient is used below
import pandas as pd
import json
import os
URL = 'https://resouceName.documents.azure.com:443/'
KEY = 'YourKey'
DATABASE_NAME = 'resourceName'
CONTAINER_NAME = 'ContainerName'
client = CosmosClient(URL, credential=KEY)
database = client.get_database_client(DATABASE_NAME)
container = database.get_container_client(CONTAINER_NAME)
items = container.query_items(
query=f'SELECT * FROM {CONTAINER_NAME} c ',
enable_cross_partition_query=True)
documents = []
for i in items:
    delete = container.delete_item(i["id"], i["partitionKey"])
```
The two parameters passed to `delete_item` above are each document's id and partition-key value; since this runs in a loop over the query results, all the records will be deleted. | 17,091
39,875,273 | I have attempted to create an insertion sort in python, however the list returned is not sorted. What is the problem with my code?
Argument given: [3, 2, 1, 4, 5, 8, 7, 9, 6]
Result: 2
1
3
6
4
7
5
8
9
Python code:
```
def insertion_sort(mylist):
sorted_list = []
for i in mylist:
posfound = 0 #defaults to 0
for j in range(len(sorted_list)):
if sorted_list[j] > i:
sorted_list.insert(j-1, i) #put the number in before element 'j'
posfound = 1 #if you found the correct position in the list set to 1
break
if posfound == 0: #if you can't find a place in the list
sorted_list.insert(len(sorted_list), i) #put number at the end of the list
return sorted_list
``` | 2016/10/05 | [
"https://Stackoverflow.com/questions/39875273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4363434/"
] | You need to change `sorted_list.insert(j-1, i)` to be `sorted_list.insert(j, i)` to insert before position `j`.
`insert(j-1, ..)` will insert before the *previous* element, and in the case where `j=0` it'll wrap around and insert before the last element.
The [Python data structures tutorial](https://docs.python.org/3/tutorial/datastructures.html#data-structures) may be useful. | As so often, it was an off-by-one error; the code below is fixed. I also made some parts a bit prettier.
```
def insertion_sort(mylist):
sorted_list = []
for i in mylist:
for index, j in enumerate(sorted_list):
if j > i:
sorted_list.insert(index, i) #put the number in before element 'j'
break
else:
sorted_list.append(i) #put number at the end of the list
return sorted_list
``` | 17,092 |
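The wrap-around behaviour the first answer warns about is easy to see in isolation (a minimal sketch):

```python
lst = [10, 20, 30]
lst.insert(0, 99)   # front insert: [99, 10, 20, 30]

lst = [10, 20, 30]
lst.insert(-1, 99)  # j - 1 with j == 0 gives -1: inserts before the LAST element -> [10, 20, 99, 30]
```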
7,615,511 | I am writing a Python script and I just need the second line of a series of very small text files. I would like to extract this without saving the file to my hard drive, as I currently do.
I have found a few threads that reference the TempFile and StringIO modules but I was unable to make much sense of them.
Currently I download all of the files and name them sequentially like 1.txt, 2.txt, etc, then go through all of them and extract the second line. I would like to open the file grab the line then move on to finding and opening and reading the next file.
Here is what I do currently with writing it to my HDD:
```
while (count4 <= num_files):
file_p = [directory,str(count4),'.txt']
file_path = ''.join(file_p)
cand_summary = string.strip(linecache.getline(file_path, 2))
linkFile = open('Summary.txt', 'a')
linkFile.write(cand_summary)
linkFile.write("\n")
count4 = count4 + 1
linkFile.close()
``` | 2011/09/30 | [
"https://Stackoverflow.com/questions/7615511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/935684/"
] | There's a glitch in iText and iTextSharp but you can fix it pretty easily if you don't mind downloading the source and recompiling it. You need to make a change to two files. Any changes I've made are commented inline in the code. Line numbers are based on the 5.1.2.0 code rev 240
The first is in `iTextSharp.text.html.HtmlUtilities.cs`. Look for the function `EliminateWhiteSpace` at line 249 and change it to:
```
public static String EliminateWhiteSpace(String content) {
// multiple spaces are reduced to one,
// newlines are treated as spaces,
// tabs, carriage returns are ignored.
StringBuilder buf = new StringBuilder();
int len = content.Length;
char character;
bool newline = false;
bool space = false;//Detect whether we have written at least one space already
for (int i = 0; i < len; i++) {
switch (character = content[i]) {
case ' ':
if (!newline && !space) {//If we are not at a new line AND ALSO did not just append a space
buf.Append(character);
space = true; //flag that we just wrote a space
}
break;
case '\n':
if (i > 0) {
newline = true;
buf.Append(' ');
}
break;
case '\r':
break;
case '\t':
break;
default:
newline = false;
space = false; //reset flag
buf.Append(character);
break;
}
}
return buf.ToString();
}
```
The second change is in `iTextSharp.text.xml.simpleparser.SimpleXMLParser.cs`. In the function `Go` at line 185 change line 248 to:
```
if (html /*&& nowhite*/) {//removed the nowhite check from here because that should be handled by the HTML parser later, not the XML parser
``` | I would recommend using [wkhtmltopdf](http://code.google.com/p/wkhtmltopdf/) instead of iText. wkhtmltopdf will output the html exactly as rendered by webkit (Google Chrome, Safari) instead of iText's conversion. It is just a binary that you can call. That being said, I might check the html to ensure that there are paragraphs and/or line breaks in the user input. They might be stripped out before the conversion. | 17,095 |
65,470,264 | I am pretty new to python. Just been working through some online tutorials on udemy. I seem to have an issue with pip installing modules.
* I've tried reinstalling them.
* Upgrading my python version.
* In VS I always just get `module not found`.
If I do it in the cmd prompt this is what I get below.
[](https://i.stack.imgur.com/joizG.png) | 2020/12/27 | [
"https://Stackoverflow.com/questions/65470264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14898017/"
] | >
> there is no action called until my final write to parquet.
>
>
>
and
>
> Spark during that final write to parquet call will be able to see that this dataframe is being used in f1 and f2 and will cache the dataframe itself.
>
>
>
are correct. If you do `output_df.explain()`, you will see the query plan, which will show that what you said is correct.
Thus, there is no need to do `special_rows.cache()`. Generally, `cache` is only necessary if you intend to reuse the dataframe **after** forcing Spark to calculate something, e.g. after `write` or `show`. If you see yourself intentionally calling `count()`, you're probably doing something wrong. | You might want to repartition after running `special_rows = df.filter(col('special') > 0)`. There can be a large number of empty partitions after running a filtering operation, [as explained here](https://mungingdata.com/apache-spark/filter-where/).
The `new_df_1` will make cache `special_rows` which will be reused by `new_df_2` here `new_df_1.union(new_df_2)`. That's not necessarily a performance optimization. Caching is expensive. I've seen caching slow down a lot of computations, even when it's being used in a textbook manner (i.e. caching a DataFrame that gets reused several times downstream).
Counting does not necessarily make sure the data is cached. Counts avoid scanning rows whenever possible. They'll use the Parquet metadata when they can, which means they don't cache all the data like you might expect.
You can also "cache" data by writing it to disk. Something like this:
```py
df.filter(col('special') > 0).repartition(500).write.parquet("some_path")
special_rows = spark.read.parquet("some_path")
```
To summarize, yes, the DataFrame will be cached in this example, but it's not necessarily going to make your computation run any faster. It might be better to have no cache or to "cache" by writing data to disk. | 17,098 |
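A minimal sketch of both suggestions, using the hypothetical names from the answers (`output_df`, `special_rows`):

```python
# Inspect the physical plan instead of guessing what gets recomputed or cached:
output_df.explain(True)

# "Caching" by writing to disk rather than calling .cache():
special_rows.write.mode("overwrite").parquet("/tmp/special_rows")
special_rows = spark.read.parquet("/tmp/special_rows")
```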
74,081,960 | I got my program running fine as explained at: [How can you make a micropython program on a raspberry pi pico autorun?](https://stackoverflow.com/questions/66183596/how-can-you-make-a-micropython-program-on-a-raspberry-pi-pico-autorun/74078142#74078142)
I'm installing a `main.py` that does:
```
import machine
import time
led = machine.Pin('LED', machine.Pin.OUT)
# For Rpi Pico (non-W) it was like this instead apparently.
# led = Pin(25, Pin.OUT)
i = 0
while (True):
led.toggle()
print(i)
time.sleep(.5)
i += 1
```
When I power the device on by plugging the USB to my laptop, it seems to run fine, with the LED blinking.
Then, if I connect from my laptop to the UART with:
```
screen /dev/ttyACM0 115200
```
I can see the numbers coming out on my host terminal correctly, and the LED still blinks, all as expected.
However, when I disconnect from screen with Ctrl-A K, after a few seconds, the LED stops blinking! It takes something around 15 seconds for it to stop, but it does so every time I tested.
If I reconnect the UART again with:
```
screen /dev/ttyACM0 115200
```
it starts blinking again.
I also noticed that after I reconnect the UART and execution resumes, the count has increased much less than the actual time passed, so one possibility is that the Pico is going into some slow low-power mode?
If I remove the `print()` from the program, I noticed that it does not freeze anymore after disconnecting the UART (which of course shows no data in this case).
`screen -fn`, `screen -f` and `screen -fa` made no difference.
Micropython firmware: rp2-pico-w-20221014-unstable-v1.19.1-544-g89b320737.uf2, Ubuntu 22.04 host.
Some variants follow.
`picocom /dev/ttyACM0` instead of screen and disconnect with Ctrl-A Ctrl-Q: still freezes like with `screen`.
If I exit from `picocom` with Ctrl-A Ctrl-X instead however, then it works. The difference between both seems to be that Ctrl-Q logs:
```
Skipping tty reset...
```
while Ctrl-X doesn't, making this a good possible workaround.
The following C analog of the MicroPython hacked from:
* <https://github.com/raspberrypi/pico-examples/blob/a7ad17156bf60842ee55c8f86cd39e9cd7427c1d/pico_w/blink>
* <https://github.com/raspberrypi/pico-examples/blob/a7ad17156bf60842ee55c8f86cd39e9cd7427c1d/hello_world/usb>
did not show the same problem, tested on <https://github.com/raspberrypi/pico-sdk/tree/2e6142b15b8a75c1227dd3edbe839193b2bf9041>
```
#include <stdio.h>
#include "pico/stdlib.h"
#include "pico/cyw43_arch.h"
int main() {
stdio_init_all();
if (cyw43_arch_init()) {
printf("WiFi init failed");
return -1;
}
int i = 0;
while (true) {
printf("%i\n", i);
cyw43_arch_gpio_put(CYW43_WL_GPIO_LED_PIN, i % 2);
i++;
sleep_ms(500);
}
return 0;
}
```
Reproduction speed can be greatly increased from a few seconds to almost instant by printing more and faster as in:
```
import machine
import time
led = machine.Pin('LED', machine.Pin.OUT)
i = 0
while (True):
led.toggle()
print('asdf ' * 10 + str(i))
time.sleep(.1)
i += 1
```
This corroborates people's theories that the problem is linked to flow control: the sender appears to stop sending if the consumer stops being able to receive fast enough.
Also asked at:
* <https://github.com/orgs/micropython/discussions/9633>
Possibly related:
* <https://forums.raspberrypi.com/viewtopic.php?p=1833725&hilit=uart+freezes#p1833725> | 2022/10/15 | [
"https://Stackoverflow.com/questions/74081960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/895245/"
] | What appears to be happening here is that exiting `screen` (or exiting `picocom` without the tty reset) leaves the [`DTR`](https://en.wikipedia.org/wiki/Data_Terminal_Ready) line on the serial port high. We can verify this by writing some simple code to control the DTR line, like this:
```
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <termios.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <signal.h>
int main(int argc, char **argv)
{
int fd;
int dtrEnable;
int flags;
if (argc < 2) {
fprintf(stderr, "Usage: ioctl <device> <1 or 0 (DTR high or low)>\n");
exit(1);
}
if ((fd = open(argv[1], O_RDWR | O_NDELAY)) < 0) {
perror("open:");
exit(1);
}
sscanf(argv[2], "%d", &dtrEnable);
ioctl(fd, TIOCMGET, &flags);
if(dtrEnable!=0) {
flags |= TIOCM_DTR;
} else {
flags &= ~TIOCM_DTR;
}
ioctl(fd, TIOCMSET, &flags);
close(fd);
}
```
Compile this into a tool called `setdtr`:
```
gcc -o setdtr setdtr.c
```
Connect to your Pico using `screen`, start your code, and then disconnect. Wait for the LED to stop blinking. Now run:
```
./setdtr /dev/ttyACM0 0
```
You will find that your code starts running again. If you run:
```
./setdtr /dev/ttyACM0 1
```
You will find that your code gets stuck again.
---
The serial chip on the RP2040 interprets a high DTR line to mean that a device is still connected. If nothing is reading from the serial port, it eventually blocks. Setting the DTR pin to 0 -- either using this `setdtr` tool or by explicitly resetting the serial port state on close -- avoids this problem. | I don't know why it works, but based on advie from larsks:
```
sudo apt install picocom
picocom /dev/ttyACM0
```
and then quit with Ctrl-A Ctrl-X (not Ctrl-A Ctrl-Q) does do what I want. Not sure what `screen` is doing differently exactly.
When quitting, Ctrl-Q shows on terminal:
```
Skipping tty reset...
```
and Ctrl-X does not, which may be a major clue. | 17,099 |
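For reference, the DTR toggling done by the C `setdtr` tool can also be done from Python with pyserial (a hedged sketch; assumes `pip install pyserial`):

```python
import serial

ser = serial.Serial("/dev/ttyACM0")
ser.dtr = False  # same effect as ./setdtr /dev/ttyACM0 0 - lets the Pico run again
ser.close()
```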
20,795,230 | I have a blob representing a WebP image. I want to be able to create an image from the blob using Wand and then convert it to JPEG. Is that possible with Wand or any other Python library? | 2013/12/27 | [
"https://Stackoverflow.com/questions/20795230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2442744/"
] | Wand is a wrapper for imagemagick - in general, the file types that Wand supports are based on how imagemagick is configured on the system in question.
For example, if you're on a mac using homebrew, it would need to be installed with:
```
brew install imagemagick --with-webp
``` | Well I could not do it with Wand. I found another library [Pillow](https://pypi.python.org/pypi/Pillow/).
I have a java script code that capture video frame from canvas and convert the webp imge from based64 to binary image and send it using web socket to a server on the server I construct the image and convert it from webp to jpeg and then use OpenCV to process the jpeg image. Here is a sample
code
```
from PIL import Image
import StringIO
import numpy as np
import cv2
#webpimg is binary webp image received from the websocket
newImg = Image.open(StringIO.StringIO(webpimg)).convert("RGB")  # JPEG needs RGB mode
temp = StringIO.StringIO()
newImg.save(temp, "JPEG")
contents = temp.getvalue()
temp.close()
array = np.fromstring(contents, dtype=np.uint8)
jpegimg = cv2.imdecode(array, cv2.CV_LOAD_IMAGE_COLOR)
cv2.imwrite("imgCV.jpeg", jpegimg)
``` | 17,100 |
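If ImageMagick was built with WebP support as the first answer describes, the Wand side of the conversion is short (a hedged sketch; `webp_blob` stands in for the incoming blob):

```python
from wand.image import Image

with Image(blob=webp_blob, format='webp') as img:
    img.format = 'jpeg'          # convert in memory
    jpeg_blob = img.make_blob()  # JPEG bytes, never written to disk
```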
1,243,418 | I need a function that, given a relative URL and a base, returns an absolute URL. I've searched and found many functions that do it in different ways.
```
resolve("../abc.png", "http://example.com/path/thing?foo=bar")
# returns http://example.com/abc.png
```
Is there a canonical way?
On this site I see great examples for Python and C#; let's get a PHP solution. | 2009/08/07 | [
"https://Stackoverflow.com/questions/1243418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90025/"
] | Another solution in case you already use [GuzzleHttp](http://docs.guzzlephp.org/).
This solution is based on an internal method of `GuzzleHttp\Client`.
```php
use GuzzleHttp\Psr7\UriResolver;
use GuzzleHttp\Psr7\Utils;
function resolve(string $uri, ?string $base_uri): string
{
$uri = Utils::uriFor(trim($uri));
if (isset($base_uri)) {
$uri = UriResolver::resolve(Utils::uriFor(trim($base_uri)), $uri);
}
// optional: set default scheme if missing
$uri = $uri->getScheme() === '' && $uri->getHost() !== '' ? $uri->withScheme('http') : $uri;
return (string)$uri;
}
```
**EDIT:** the source code was updated as suggested by myriacl | If you have pecl-http, you can use <http://php.net/manual/en/function.http-build-url.php>
```
<?php
$url_parts = parse_url($relative_url);
$absolute = http_build_url($source_url, $url_parts, HTTP_URL_JOIN_PATH);
```
Ex:
```
<?php
function getAbsoluteURL($source_url, $relative_url)
{
$url_parts = parse_url($relative_url);
return http_build_url($source_url, $url_parts, HTTP_URL_JOIN_PATH);
}
echo getAbsoluteURL('http://foo.tw/a/b/c', '../pic.jpg') . "\n";
// http://foo.tw/a/pic.jpg
echo getAbsoluteURL('http://foo.tw/a/b/c/', '../pic.jpg') . "\n";
// http://foo.tw/a/b/pic.jpg
echo getAbsoluteURL('http://foo.tw/a/b/c/', 'http://bar.tw/a.js') . "\n";
// http://bar.tw/a.js
echo getAbsoluteURL('http://foo.tw/a/b/c/', '/robots.txt') . "\n";
// http://foo.tw/robots.txt
``` | 17,101 |
45,952,387 | I'm trying to follow along the [Audio Recognition Network](https://www.tensorflow.org/versions/master/tutorials/audio_recognition) tutorial.
I've created an Anaconda environment with Python 3.6 and followed the install instructions accordingly for installing the GPU whl.
I can run the 'hello world' TF example.
When I go to run 'train.py' in the Audio Recognition Network tutorial/example, I get:
```
Traceback (most recent call last):
File "train.py", line 79, in <module>
import input_data
File "/home/philglau/speech_commands/input_data.py", line 35, in <module>
from tensorflow.contrib.framework.python.ops import audio_ops as contrib_audio
ImportError: cannot import name 'audio_ops'
```
The [code](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/speech_commands/input_data.py) in the tutorial that fails is:
```
from tensorflow.contrib.framework.python.ops import audio_ops as contrib_audio
```
I then backed up that chain until I could import some part of it:
```
import tensorflow.contrib.framework as test ==> works
import tensorflow.contrib.framework.python as test --> fail:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'tensorflow.contrib.framework' has no attribute 'python'
```
Not sure where I'm going wrong on my install.
Details:
```
Ubuntu 16.04
Anaconda env with python 3.6
Followed the 'anaconda' instruction on the TF install page. (GPU version)
```
I also tried using a python 2.7 env for anaconda but got the same results. | 2017/08/30 | [
"https://Stackoverflow.com/questions/45952387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/194267/"
] | It looks like they're releasing the audio_ops modules in version 1.4 (<https://github.com/tensorflow/tensorflow/issues/11339#issuecomment-327879009>).
Until v1.4 is released, an easy way around this is to install the nightly tensorflow build
```
pip install tf-nightly
```
or with the docker image linked in the issue comment. | The short answer:
The framework is missing "audio_ops.py", and the example won't work until the file is released - or until you code the wrappers yourself.
More on this:
If you go to the local tensorflow.contrib.framework.python.ops folder, you can find other *_ops.py files but not "audio_ops.py".
If you get it from the Master at: <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/framework/python/ops>
You will find the file is almost empty and with import labels wrong: "audio\_ops" vs "gen\_audio\_ops".
With almost empty I mean that: decode\_wav, encode\_wav, audio\_spectrogram , mfcc are not implemented/wrapped.
So, no working example and no fun.
We need to check again when "audio\_ops.py" is released.
Here:
<https://github.com/tensorflow/tensorflow/issues/11339>
You can find a Developer saying: "we don't actually want to make them public / supported yet. I'm sorry this decision wasn't better documented" | 17,111 |
55,432,601 | I have a string: `5kg`.
I need to split it into the numerical and the textual parts. So, in this case, it should produce two parts: `5` and `kg`.
For that I wrote this code:
```
grocery_uom = '5kg'
unit_weight, uom = grocery_uom.split('[a-zA-Z]+', 1)
print(unit_weight)
```
Getting this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-66-23a4dd3345a6> in <module>()
1 grocery_uom = '5kg'
----> 2 unit_weight, uom = grocery_uom.split('[a-zA-Z]+', 1)
3 #print(unit_weight)
4
5
ValueError: not enough values to unpack (expected 2, got 1)
print(uom)
```
**Edit:**
I wrote this:
```
unit_weight, uom = re.split('[a-zA-Z]+', grocery_uom, 1)
print(unit_weight)
print('-----')
print(uom)
```
Now I am getting this output:
```
5
-----
```
How to store the 2nd part of the string to a var?
**Edit1:**
I wrote this which solved my purpose (Thanks to Peter Wood):
```
unit_weight = re.split('([a-zA-Z]+)', grocery_uom, 1)[0]
uom = re.split('([a-zA-Z]+)', grocery_uom, 1)[1]
``` | 2019/03/30 | [
"https://Stackoverflow.com/questions/55432601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6528055/"
] | You don't want to split on the "kg", because that means it's not part of the actual data. Although looking at the docs, I see you can include them <https://docs.python.org/3/howto/regex.html> But the split pattern is intended to be a separater.
Here's an example of just making a pattern for exactly what you want:
```
import re
pattern = re.compile(r'(?P<weight>[0-9]+)\W*(?P<measure>[a-zA-Z]+)')
text = '5kg'
match = pattern.search(text)
print (match.groups())
weight, measure = match.groups()
print (weight, measure)
print ('the weight is', match.group('weight'))
print ('the unit is', match.group('measure'))
print (match.groupdict())
```
output
>
> ('5', 'kg')
>
> 5 kg
>
> the weight is 5
>
> the unit is kg
>
> {'weight': '5', 'measure': 'kg'}
>
>
>
*Updated to allow for bigger numbers, such as "1,000".*
Try this.
```
import re
grocery_uom = '5kg'
split_str = re.split(r'([0-9,?]+)([a-zA-Z]+)', grocery_uom, 1)
unit_weight, uom = split_str[1:3]
## Output: 5 kg
``` | 17,112 |
7,047,133 | I wrote a test program that looked like this:
```
#!/usr/bin/python
def incrementc():
c = c + 1
def main():
c = 5
incrementc()
main()
print c
```
I'd think that since I called incrementc within the body of main, all variables from main would pass to incrementc. But when I run this program I get
```
Traceback (most recent call last):
File "test.py", line 10, in <module>
main()
File "test.py", line 8, in main
incrementc()
File "test.py", line 4, in incrementc
c = c + 1
UnboundLocalError: local variable 'c' referenced before assignment
```
Why isn't c passing through? And if I want a variable to be referenced by multiple functions, do I have to declare it globally? I read somewhere that global variables are bad.
Thanks! | 2011/08/12 | [
"https://Stackoverflow.com/questions/7047133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/892549/"
] | You're thinking of [dynamic scoping](http://en.wikipedia.org/wiki/Dynamic_scoping#Dynamic_scoping). The problem with dynamic scoping is that the behavior of `incrementc` would depend on previous function calls, which makes it very difficult to reason about the code. Instead most programming languages (also Python) use static scoping: `c` is visible only within `main`.
To accomplish what you want, you'd either use a global variable, or, better, pass `c` as a parameter. Now, because the primitives in Python are immutable, a passed integer can't be changed in place (it's effectively passed by value), so you'd have to pack it into a container, like a list. Like this:
```
def increment(l):
l[0] = l[0] + 1
def main():
c = [5]
increment(c)
print c[0]
main()
```
Or, even simpler:
```
def increment(l):
return l + 1
def main():
c = 5
print increment(c)
main()
```
Generally, global variables are bad because they make it very easy to write code that's hard to understand. If you only have these two functions, you can go ahead and make `c` global because it's still obvious what the code does. If you have more code, it's better to pass the variables as a parameter instead; this way you can more easily see who depends on the global variable. | Global variables are bad.
Just like friends and enemys. Keep your friends close but keep your enemys even closer.
The function main last a local variable c, assignment the value 5
You then call the function inc..C. The c from main is now out of scope so you are trying to use a value of c that is not in scope - hence the error. | 17,114 |
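For completeness, the `global`-variable version the question asks about would look like this (it works, but the parameter-passing versions above are generally preferred):

```python
c = 0  # module-level name

def incrementc():
    global c  # opt in to rebinding the module-level c
    c = c + 1

def main():
    global c
    c = 5
    incrementc()

main()
print(c)  # 6
```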
14,716,111 | I'd like to rename `%paste` to something like `%pp` so that it takes fewer keystrokes. I worked out a way to do that but it seems complicated. Is there a better way?
```
def foo(self, bar):
get_ipython().magic("paste")
get_ipython().define_magic('pp', foo)
``` | 2013/02/05 | [
"https://Stackoverflow.com/questions/14716111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/461389/"
] | From IPython 0.13, there's a new `%alias_magic` magic function, which you would use as:
```
%alias_magic pp paste
``` | use `%alias` magic to do it (if you want it to be permanent use `%store`):
```
In [8]: %alias??
"""Define an alias for a system command.
'%alias alias_name cmd' defines 'alias_name' as an alias for 'cmd'
...
``` | 17,119 |
52,601,350 | I'm trying to make a minesweeper game using lists in Python. I have this code so far:
```
import random as r
import sys
#dimension of board and number of bombs
width = int(sys.argv[1])
height = int(sys.argv[2])
b = int(sys.argv[3])
#creates the board
board = [[0.0] * width] * height
#places bombs
for i in range(b):
x = r.randint(0, width - 1)
y = r.randint(0, height - 1)
board.insert(x, y, 0.1)
#prints board
for i in range(len(board)):
    for j in range(len(board[i])):
print(board[i][j], end=" ")
```
I'm trying to get the bombs to be placed at random places on the board, but `insert()` only accepts 2 args. Is there any other way in which I can do this?
I have an idea to place a random bomb in row 1, then a random bomb in row 2, and so on, and once it hits row n it loops back to row 1 until enough bombs have been placed, but I'm not sure if it will work (and I have no way of testing it because I have no idea how to do that either). I feel like this solution is pretty inefficient, so I'm also wondering if there's a more efficient way to do it. | 2018/10/02 | [
"https://Stackoverflow.com/questions/52601350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10173748/"
] | You can just use `board[x][y] = 0.1` to access index `y` in row `x` of your board. Also, you don't want to build a board like that. The way you're doing it will only actually create 1 array with numbers. Here's your code with some modifications.
```
import random as r
# dimension of board and number of bombs
# (I'm using hard coded values as an example)
width = 5
height = 7
b = 10
#creates the board
board = []
for i in range(height): # create a new array for every row
board.append([0.0] * width)
#places bombs
for i in range(b):
x = r.randint(0, height - 1)
y = r.randint(0, width - 1)
board[x][y] = 0.1 # this is how you place a bomb at a random coordinate
#prints board
for row in board:
print(row)
```
The resulting board for me looks like:
```
[0.0, 0.0, 0.1, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.0]
[0.1, 0.0, 0.1, 0.0, 0.1]
[0.1, 0.0, 0.0, 0.0, 0.1]
[0.0, 0.1, 0.0, 0.0, 0.1]
[0.0, 0.1, 0.0, 0.0, 0.0]
```
Note that you can end up with less than `b` bombs in case the random x and y values repeat. That's a good problem to solve next (one approach is sketched below, after the second answer). | We are dealing with a list of lists. If we run your board initialization code and modify a board value as follows:
```
>>> width = 2; height = 3
>>> board = [[0.0] * width] * height
>>> print board
[[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
>>> x = 0; y = 1; board[y][x] = 1.1
>>> print board
[[1.1, 0.0], [1.1, 0.0], [1.1, 0.0]]
```
We see that our modification appears in three places. This is because we have put the same list ([0.0] * width) into the outer list *height* times. One way of doing it properly is board = [[0.0] * width for _ in range(height)]. Please refer to [Two dimensional array in python](https://stackoverflow.com/questions/8183146/two-dimensional-array-in-python).
Since we are using a list of lists, one way of inserting the element 0.1 at x and y using insert would be board[y].insert(x, 0.1). But I feel like what you wanted to do is board[y][x] = 0.1.
For placing the bombs, what you have described can be implemented like:
```
n = <the number of bombs>
for i in xrange(n):
    x = i % width  # x cycles over 0 to width - 1
    y = r.randint(0, height - 1)
    # insert bomb at board[y][x]
```
Cheers, | 17,120 |
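For the follow-up problem the first answer mentions (repeated random coordinates yielding fewer than b bombs), one hedged option is to sample distinct cells up front:

```python
import random as r

# Pick b distinct flat cell indices (requires b <= width * height),
# then map each flat index back to a (row, column) pair.
for cell in r.sample(range(width * height), b):
    board[cell // width][cell % width] = 0.1
```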
59,661,745 | I have pytest-django == 2.9.1 installed
I started setting up a test environment according to the instructions.
<https://pytest-django.readthedocs.io/en/latest/tutorial.html#step-2-point-pytest-to-your-django-settings>
In the second step, in the root of the project, I created a pytest.ini file and added DJANGO\_SETTINGS\_MODULE there (all according to the instructions)
But when you start the test, an error appears
```
django.core.exceptions.ImproperlyConfigured: Requested setting CACHES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
```
But DJANGO\_SETTINGS\_MODULE is defined in the pytest.ini!
I tried putting pytest.ini in different directories, but the result is always the same.
I tried running with a parameter --ds, but the result is still the same.
Can you tell me why this happens?
**UPD**
Hmm... I removed [pytest] from the file header and got the error "no section header defined". So the file does not seem to be ignored, but DJANGO_SETTINGS_MODULE is still not exported.
I tried adding it to the environment variables separately and received an error on startup
>
> export DJANGO\_SETTINGS\_MODULE=project.settings.test\_settings
>
>
> pytest
>
>
> ImportError: Could not import settings 'project.settings.test\_settings' (Is it on sys.path? Is there an import error in the settings file?): No module named project.settings.test\_settings
>
>
>
But when starting python manage.py runserver --settings=project.settings.test\_settings, everything works fine. | 2020/01/09 | [
"https://Stackoverflow.com/questions/59661745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7526559/"
] | Had the same issue; there were 2 problems:
1. settings.py had a bug.
2. pytest-django was installed in a different environment.
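A quick sanity check for both, as a sketch (run it with the same interpreter that runs pytest, and replace the settings path with your own):
```
# Both imports raise immediately if something is wrong in this environment.
import importlib

importlib.import_module("project.settings.test_settings")  # fails if settings.py has a bug
importlib.import_module("pytest_django")                   # fails if the plugin is missing here
```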
So ensure you can import settings.py as hoefling recommended, and ensure pytest-django is actually installed in your environment. | So the [docs](https://pytest-django.readthedocs.io/en/latest/configuring_django.html#order-of-choosing-settings) say that the order of precedence when choosing settings is:
1. command line
2. environment variable
3. pytest.ini file
Then it goes further to say you can override this precedence using `addopts`.
In my case, I specified my settings like below
```
DJANGO_SETTINGS_MODULE=bm.settings.ttest
addopts = --ds=bm.settings.ttest
```
And it works. | 17,122 |
53,686,556 | I'm trying to prepare a model that takes an input image of shape 56x56 pixels and 3 channels: (56, 56, 3). Output should be an array of 216 numbers. I reuse a code from a digit recognizer and modified it a little bit:
```
model = Sequential()
model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same',
activation ='relu', input_shape = (56,56,3)))
model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation = "relu"))
model.add(Dropout(0.5))
model.add(Dense(216, activation = "linear"))
from tensorflow.python.keras.losses import categorical_crossentropy
model.compile(loss = categorical_crossentropy,
optimizer = "adam",
metrics = ['accuracy'])
```
This is giving me an error:
```
ValueError: Error when checking target: expected dense_1 to have shape (216,) but got array with shape (72,)
```
I know how to code a classifier model but not how to obtain an array as output, so probably I'm not setting the right shape in the last Dense layer. I don't know if it should be 1 or 216.
I read in [this post](https://stackoverflow.com/questions/51617857/keras-dense-layer-shape-error) that the problem could be the loss function, but I'm not sure what other loss function I should use.
Thanks in advance! | 2018/12/08 | [
"https://Stackoverflow.com/questions/53686556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10754618/"
] | Please note that in almost all scenarios you just have to handle the `catch` and not bother with the validity of the `ObjectID`, since mongoose will `throw` if an invalid `ObjectId` is provided.
```
Model.findOne({ _id: 'abcd' }).exec().catch(error => console.error('error', error));
```
Other than that you could either use the [mongoose.Types.ObjectId.isValid](https://mongodb.github.io/node-mongodb-native/api-bson-generated/objectid.html#objectid-isvalid) or a regular expression: `/^[a-fA-F0-9]{24}$/` | ```
let mongoose = require('mongoose');
let ObjectId = mongoose.Types.ObjectId;
let recId1 = "621f1d71aec9313aa2b9074c";
let isValid1 = ObjectId.isValid(recId1); //true
console.log("isValid1 = ", isValid1); //true
let recId2 = "621f1d71aec9313aa2b9074cd";
let isValid2 = ObjectId.isValid(recId2); //false
console.log("isValid2 = ", isValid2); //false
``` | 17,123 |
48,836,596 | I stumbled upon the following syntax in [Python decorator to keep signature and user defined attribute](https://stackoverflow.com/questions/48746567/python-decorator-to-keep-signature-and-user-defined-attribute):
```
> def func():
... return "Hello World!"
...
> func?
Signature: func()
Docstring: <no docstring>
File: ~/<ipython-input-8-934f46134434>
Type: function
```
Attempting the same in my Python Shell, whether 2 or 3, simply raises a `SyntaxError`. So what is this exactly? Is it specific to some shells? | 2018/02/17 | [
"https://Stackoverflow.com/questions/48836596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5079316/"
] | Check if you have added the below detail in your settings file. If yes, then skip this part.
**settings.py**
```
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, "templates")], # Add this to your settings file
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
```

The project layout should look like this:

```
myproject/
|-- myproject/
| |-- myapp/
| |-- myproject/
| |-- templates/ <-- here!
| | |-- myapp/
| | | |-- index.html
| | |-- base.html
| | +-- home.html
| +-- manage.py
+-- venv/
```
**views.py:**
```
@login_required
def post_publish(request,pk):
post = get_object_or_404(Post,pk=pk)
return render(request, 'template_name.html', {'post': post})
```
Try using `return render` like this; it will work for sure.
Also, you have not shown anywhere in your view what your template\_name is. | From the Django docs:
>
> **Additional form template furniture**
>
> Don't forget that a form's output does not include the surrounding tags, or the form's submit control. You will have to provide these yourself.
>
>
<https://docs.djangoproject.com/en/2.0/topics/forms/>
You are missing the input with type submit:
```
<input type="submit" value="Submit">
```
After handling the submitted data in your views.py you can redirect to the desired URL; there is no need to use an `<a>` tag. | 17,133 |
71,215,277 | I have been working on writing a Wordle bot, and wanted to see how it performs with all 13,000 words. The problem is that I am running this through a for loop and it is very inefficient. After running it for 30 minutes, it only gets to around 5%. I could wait all that time, but it would end up being 10+ hours. There has got to be a more efficient way. I am new to Python, so any suggestions would be greatly appreciated.
The code here is what is used to narrow down the guesses each time. Would there be a way to search for a word that contains "a", "b", and "c", instead of running it 3 separate times? Right now contains, nocontains, and isletter will each run every time I need to search for a new letter. Searching for them all together would greatly reduce the time.
```py
#Find the words that only match the criteria
def contains(letter, place):
list.clear()
for x in words:
if x not in removed:
if letter in x:
if letter == x[place]:
removed.append(x)
else:
list.append(x)
else:
removed.append(x)
def nocontains(letter):
list.clear()
for x in words:
if x not in removed:
if letter not in x:
list.append(x)
else:
removed.append(x)
def isletter(letter, place):
list.clear()
for x in words:
if x not in removed:
if letter == x[place]:
list.append(x)
else:
removed.append(x)
``` | 2022/02/22 | [
"https://Stackoverflow.com/questions/71215277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18257273/"
] | The performance problems can be massively reduced by using [sets](https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset). Any time that you want to repeatedly test for membership (even only a few times), e.g. `if x not in removed`, you want to try to make a set. Lists require checking every element to find x, which is bad if the list has thousands of elements. In a Python set, `if x not in removed` should take as long to run whether `removed` has `100` elements or `100,000` - a small constant amount of time.
Besides this, you're running into problems by trying to use mutable global variables everywhere, like for `list` (which needs to be renamed) and `removed`. There's no benefit to doing that and several downsides, such as making it harder to reason about your code or optimize it. One benefit of Python is that you can pass large containers or objects to functions without any extra time or space cost: calling a function `f(huge_list)` is as fast and uses as much memory as `f(tiny_list)`, as if you were passing by reference in other languages, so don't hesitate to use containers as function parameters or return types.
In summary, here's how your code could be refactored if you take away 'list' and 'removed' and instead store this as a `set` of possible words:
```py
all_words = [] # Huge word list to read in from text file
current_possible_words = set(all_words)
def contains_only_elsewhere(possible_words, letter, place):
"""Given letter and place, remove from possible_words
all words containing letter but not at place"""
to_remove = {word for word in possible_words
if letter not in word or word[place] == letter}
return possible_words - to_remove
def must_not_contain(possible_words, letter):
"""Given a letter, remove from possible_words all words containing letter"""
to_remove = {word for word in possible_words
if letter in word}
return possible_words - to_remove
def exact_letter_match(possible_words, letter, place):
"""Given a letter and place, remove from possible_words
all words not containing letter at place"""
to_remove = {word for word in possible_words
if word[place] != letter}
return possible_words - to_remove
```
The outside code will be different: for example,
```py
current_possible_words = exact_letter_match(current_possible_words, 'a', 2)
```
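To the original question of checking several letters at once, here is a sketch (the names are my own, not from the question) that applies all current feedback in a single pass over the set:
```py
def apply_feedback(possible_words, greens, yellows, grays):
    """greens/yellows are lists of (place, letter); grays is a list of letters."""
    return {
        word for word in possible_words
        if all(word[p] == c for p, c in greens)
        and all(c in word and word[p] != c for p, c in yellows)
        and all(c not in word for c in grays)
    }

words = {"crane", "early", "yacht", "youth"}
print(apply_feedback(words, greens=[(0, "y")], yellows=[(1, "t")], grays=["p", "a"]))
# -> {'youth'}
```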
Further optimizations are possible (and much easier now): storing only indices to words rather than the strings; precomputing, for each letter, the set of all words containing that letter, etc. | I just wrote a wordle bot that runs in about a second including the web scraping to fetch a list of 5 letter words.
```
import urllib.request
from bs4 import BeautifulSoup
def getwords():
source = "https://www.thefreedictionary.com/5-letter-words.htm"
filehandle = urllib.request.urlopen(source)
soup = BeautifulSoup(filehandle.read(), "html.parser")
wordslis = soup.findAll("li", {"data-f": "15"})
words = []
for k in wordslis:
words.append(k.getText())
return words
words = getwords()
def hasLetterAtPosition(letter,position,word):
return letter==word[position]
def hasLetterNotAtPosition(letter,position,word):
return letter in word[:position]+word[position+1:]
def doesNotHaveLetter(letter,word):
return not letter in word
lettersPositioned = [(0,"y")]
lettersMispositioned = [(0,"h")]
lettersNotHad = ["p"]
idx = 0
while idx<len(words):
eliminated = False
for criteria in lettersPositioned:
if not hasLetterAtPosition(criteria[1],criteria[0],words[idx]):
del words[idx]
eliminated = True
break
if eliminated:
continue
for criteria in lettersMispositioned:
if not hasLetterNotAtPosition(criteria[1],criteria[0],words[idx]):
del words[idx]
eliminated = True
break
if eliminated:
continue
for letter in lettersNotHad:
if not doesNotHaveLetter(letter,words[idx]):
del words[idx]
eliminated = True
break
if eliminated:
continue
idx+=1
print(words) # ["youth"]
```
The reason yours is slow is that you have a lot of calls checking if word in removed, plus a number of superfluous logical conditions, and you go through all the words for each of your checks.
Edit: Here's a get words function that gets more words.
```
def getwords():
source = "https://wordfind-com.translate.goog/length/5-letter-words/?_x_tr_sl=es&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=wapp"
filehandle = urllib.request.urlopen(source)
soup = BeautifulSoup(filehandle.read(), "html.parser")
wordslis = soup.findAll("a", {"rel": "nofollow"})
words = []
for k in wordslis:
words.append(k.getText())
return words
``` | 17,134 |
27,935,800 | I have been on this for days now. Every time I attempt to install psycopg2 into a virtual environment on my RHEL VPS it fails with the following error. Anyone with a clue, please help out. Thanks.
```
(pyenv)[root@10 pyenv]# pip install psycopg2==2.5.4
Collecting psycopg2==2.5.4
Using cached psycopg2-2.5.4.tar.gz
/tmp/pip-build-Vn6ET9/psycopg2/setup.py:12: DeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
Installing collected packages: psycopg2
Running setup.py install for psycopg2
building 'psycopg2._psycopg' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE= 2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic - D_GNU_SOURCE -fPIC -fwrapv -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSI ON="2.5.4 (dt dec pq3 ext)" -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHA VE_PQFREEMEM=1 -DPG_VERSION_HEX=0x080414 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_B OOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.6 -I. -I/usr/include -I/usr/ include/pgsql/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-2.6/ psycopg/psycopgmodule.o -Wdeclaration-after-statement
unable to execute gcc: No such file or directory
error: command 'gcc' failed with exit status 1
Complete output from command /root/pyenv/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-Vn6ET9/psycopg2/setup.py';exec(compile(getatt r(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'ex ec'))" install --record /tmp/pip-9d8Iwo-record/install-record.txt --single-vers ion-externally-managed --compile --install-headers /root/pyenv/include/site/pyt hon2.6:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.6
creating build/lib.linux-x86_64-2.6/psycopg2
copying lib/psycopg1.py -> build/lib.linux-x86_64-2.6/psycopg2
copying lib/extras.py -> build/lib.linux-x86_64-2.6/psycopg2
copying lib/__init__.py -> build/lib.linux-x86_64-2.6/psycopg2
copying lib/_json.py -> build/lib.linux-x86_64-2.6/psycopg2
copying lib/pool.py -> build/lib.linux-x86_64-2.6/psycopg2
copying lib/errorcodes.py -> build/lib.linux-x86_64-2.6/psycopg2
copying lib/extensions.py -> build/lib.linux-x86_64-2.6/psycopg2
copying lib/_range.py -> build/lib.linux-x86_64-2.6/psycopg2
copying lib/tz.py -> build/lib.linux-x86_64-2.6/psycopg2
creating build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/dbapi20_tpc.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_copy.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_notify.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_quote.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/dbapi20.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_lobject.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_bug_gc.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_psycopg2_dbapi20.py -> build/lib.linux-x86_64-2.6/psycop g2/tests
copying tests/testutils.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_cursor.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/__init__.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_bugX000.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_types_basic.py -> build/lib.linux-x86_64-2.6/psycopg2/te sts
copying tests/test_connection.py -> build/lib.linux-x86_64-2.6/psycopg2/tes ts
copying tests/test_extras_dictcursor.py -> build/lib.linux-x86_64-2.6/psyco pg2/tests
copying tests/test_transaction.py -> build/lib.linux-x86_64-2.6/psycopg2/te sts
copying tests/test_module.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_dates.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/testconfig.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_cancel.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_async.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_with.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_green.py -> build/lib.linux-x86_64-2.6/psycopg2/tests
copying tests/test_types_extras.py -> build/lib.linux-x86_64-2.6/psycopg2/t ests
running build_ext
building 'psycopg2._psycopg' extension
creating build/temp.linux-x86_64-2.6
creating build/temp.linux-x86_64-2.6/psycopg
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE= 2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic - D_GNU_SOURCE -fPIC -fwrapv -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSI ON="2.5.4 (dt dec pq3 ext)" -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHA VE_PQFREEMEM=1 -DPG_VERSION_HEX=0x080414 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_B OOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.6 -I. -I/usr/include -I/usr/ include/pgsql/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-2.6/ psycopg/psycopgmodule.o -Wdeclaration-after-statement
unable to execute gcc: No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/root/pyenv/bin/python -c "import setuptools, tokenize;__file__='/ tmp/pip-build-Vn6ET9/psycopg2/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --rec ord /tmp/pip-9d8Iwo-record/install-record.txt --single-version-externally-manag ed --compile --install-headers /root/pyenv/include/site/python2.6" failed with error code 1 in /tmp/pip-build-Vn6ET9/psycopg2
``` | 2015/01/14 | [
"https://Stackoverflow.com/questions/27935800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2669337/"
] | I found my way around it. I noticed it installs successfully globally. So I installed psycopg2 globally and created a new virtual environment with the `--system-site-packages` option. Then I installed my other packages using the `-I` option.
Hope this helps someone else.
OK. I later found out that I had no `gcc` installed. So I had to install it first. And after that, I could `pip install psycopg2`. Thank you cel for the direction. | For me, I'm using Redhat 8 enterprise and my issue wasn't solved by installing gcc and gcc-c++.
I was able to solve the issue by installing **python3-devel** and **development tools**.
To install them on Red Hat using the yum manager, please follow this [link](https://linuxize.com/post/how-to-install-pip-on-centos-8/) | 17,135 |
30,445,136 | I am using z3py. I am trying to check the satisfiability of different problems of different sizes and verify the scalability of the proposed method. However, to do that I need to know the memory consumed by the solver for each problem. Is there a way to access the memory, or make z3py print it in the STATISTICS section? Thank you so much in advance.
**Update-27/5/2015:**
I tried with the Python memory profiler, but the reported memory is very large. I am not sure, but the reported memory seems to cover the whole Python application and not only Z3 (constructing the Z3 model, checking sat, and then generating the model). Moreover, I have used formal model checking tools for many years now. I expect Z3 to be more efficient and have better scalability; however, Z3 seems to use much less memory than what the Python profiler is reporting.
What I am thinking of doing is trying to measure the design size or the scalability using factors other than memory. In the z3py statistics many details are generated that describe the design size and complexity. However, I am not able to find any explanation of these parameters in the tutorial, the webpage, or the Z3 papers.
For example, can you help me understand the following parameters generated in the statistics for one of the basic models I have? Also, is there any parameter (or parameters) which can replace memory or be a good indication of the Z3 model size/complexity?
* :added-eqs 152
* :assert-lower 2
* :assert-upper 2
* :binary-propagations 59
* :conflicts 6
* :datatype-accessor-ax 12
* :datatype-constructor-ax 14
* :datatype-occurs-check 19
* :datatype-splits 12
* :decisions 35
* :del-clause 2
* :eq-adapter 2
* :final-checks 1
* :mk-clause 9
* :offset-eqs 2
* :propagations 61
Thanks again for your time. | 2015/05/25 | [
"https://Stackoverflow.com/questions/30445136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343141/"
] | You should use `getIntent().getIntExtra(name, defaultValue)` instead of `Integer.parseInt(intent.getStringExtra("page"));`
**Update:**
```
int defaultValue = -1;// take any default value of your choice
String name = intent.getStringExtra("name");
int page1 = intent.getIntExtra("page", defaultValue);
``` | Activity A
```
String textname = (String) dataItem.get("name");
Intent m = new Intent(list.this,main.class);
m.putExtra("name",textname);
m.putExtra("page",1);
startActivity(m);
```
Activity B
```
Intent intent = getIntent();
name = intent.getStringExtra("name");
int page = intent.getIntExtra("page", 0);
```
where 0 is the default value. | 17,136 |
66,472,929 | I am trying to learn dicts in Python better. I am using an API, "chess.com".
```
data = get_player_game_archives(username).json
url = data['archives'][-1]
games = requests.get(url).json()
game = games['games'][-1]
print(games)
```
That's my code; it runs with no problem, and the result is
```
{'games': [{'url': 'https://www.chess.com/live/game/8358870805', 'pgn': '[Event "Live Chess"]\n[Site "Chess.com"]\n
```
But I don't know how to get the "url".
I tried
```
game = games['games']['url'] or
game = games['games'][{url}]
```
Obviously I misunderstood something, but I don't know what.
Thanks for reading. | 2021/03/04 | [
"https://Stackoverflow.com/questions/66472929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13934941/"
] | According to the [SQL Server docs](https://learn.microsoft.com/en-us/sql/t-sql/functions/charindex-transact-sql?view=sql-server-ver15), `CHARINDEX` will find the index of the *first* occurrence of the first parameter substring. As for `LIKE` it is highly likely that it is smart enough to stop searching as soon as it finds a match. Therefore, I would expect the performance of both of your queries to be roughly the same. As for which version to use, the `LIKE` expression can handle more complexity than just substring matches, so you might choose that version if you expect the logic could get more complex later on. | I know this has been answered but it's worth noting that you can create a test harness and see for yourself. I created a 1,000,000 row test; first against a shorter string then against a longer one.
```
SELECT TOP(1000000) SomeCol = NEWID()
INTO #t
FROM sys.all_columns, sys.all_columns a;
DECLARE @x INT, @st DATETIME = GETDATE();
SELECT @x = COUNT(*) FROM #t AS t WHERE t.SomeCol LIKE '%-1234%'
PRINT DATEDIFF(MS,@st,GETDATE());
GO 5
DECLARE @x INT, @st DATETIME = GETDATE();
SELECT @x = COUNT(*) FROM #t AS t WHERE CHARINDEX('-1234',t.SomeCol)>0
PRINT DATEDIFF(MS,@st,GETDATE());
GO 5
```
The performance was relatively the same:
```
Beginning execution loop
586
510
530
530
597
Batch execution completed 5 times.
Beginning execution loop
537
606
547
580
607
Batch execution completed 5 times.
```
The execution plans are similar, but it would appear that the LIKE query is a smidgen better based on the 49/51% work estimation; note, though, that this is based on inaccurate cardinality estimates.
[](https://i.stack.imgur.com/XVnUi.png)
Now if I replace the values with longer strings...
```
IF OBJECT_ID('tempdb..#t') IS NOT NULL DROP TABLE #t;
SELECT TOP(1000000) SomeCol = CONCAT(REPLICATE('x',100),NEWID(),NEWID(),NEWID())
INTO #t
FROM sys.all_columns, sys.all_columns a;
```
... and run the same test, both queries actually speed up:
```
Beginning execution loop
350
363
364
370
363
Batch execution completed 5 times.
Beginning execution loop
370
370
376
380
380
Batch execution completed 5 times.
```
The improvement is due to both queries getting parallel execution plans (multiple CPU's doing the work.)
[](https://i.stack.imgur.com/VE2Fv.png)
It's interesting to note that the plans are slightly different - the optimizer chooses a Stream Aggregate operator whereas the CHARINDEX solution uses a Hash Match aggregate. Though the performance is the same, this still serves as an interesting example of how to manipulate the optimizer's behavior. | 17,139 |
56,337,696 | I have this abstract class
```
class Kuku(ABC):
def __init__(self):
self.a = 4
@property
@abstractmethod
def kaka(self):
pass
```
`kaka` is an abstract property, so I would expect Python to enforce it being a property in inheritors, but it allows me to create:
```
class KukuChild(Kuku):
def kaka(self):
return 3
```
and KukuChild().kaka() returns 3 as if it's not a property. Is this intentional? PyCharm doesn't enforce this either, so why even add the `property` decorator in the abstract class? | 2019/05/28 | [
"https://Stackoverflow.com/questions/56337696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2899096/"
] | I've come across this problem myself. After looking into the options for how to enforce such behaviour, I came up with the idea of implementing a class that does that type checking.
```
import abc
import inspect
from typing import Generic, Set, TypeVar, get_type_hints
T = TypeVar('T')
class AbstractClassVar(Generic[T]):
pass
class Abstract(abc.ABC):
"""Inherit this class to:
1. Enforce type checking for abstract properties.
2. Check that abstract class members (aka. `AbstractClassVar`) are implemented.
"""
def __init_subclass__(cls) -> None:
def get_abstract_properties(cls) -> Set[str]:
"""Gets a class's abstract properties"""
abstract_properties = set()
if cls is Abstract:
return abstract_properties
for base_cls in cls.__bases__:
abstract_properties.update(get_abstract_properties(base_cls))
abstract_properties.update(
{abstract_property[0] for abstract_property in
inspect.getmembers(cls, lambda a: getattr(a, "__isabstractmethod__", False) and type(a) == property)})
return abstract_properties
def get_non_property_members(cls) -> Set[str]:
"""Gets a class's non property members"""
return {member[0] for member in inspect.getmembers(cls, lambda a: type(a) != property)}
def get_abstract_members(cls) -> Set[str]:
"""Gets a class's abstract members"""
abstract_members = set()
if cls is Abstract:
return abstract_members
for base_cls in cls.__bases__:
abstract_members.update(get_abstract_members(base_cls))
for (member_name, annotation) in get_type_hints(cls).items():
if getattr(annotation, '__origin__', None) is AbstractClassVar:
abstract_members.add(member_name)
return abstract_members
cls_abstract_properties = get_abstract_properties(cls)
cls_non_property_members = get_non_property_members(cls)
# Type checking for abstract properties
if cls_abstract_properties:
for member in cls_non_property_members:
if member in cls_abstract_properties:
raise TypeError(f"Wrong class implementation {cls.__name__} " +
f"with abstract property {member}")
# Implementation checking for abstract class members
if Abstract not in cls.__bases__:
for cls_member in get_abstract_members(cls):
if not hasattr(cls, cls_member):
raise NotImplementedError(f"Wrong class implementation {cls.__name__} " +
f"with abstract class variable {cls_member}")
return super().__init_subclass__()
```
Usage:
------
```
class Foo(Abstract):
foo_member: AbstractClassVar[str]
@property
@abc.abstractmethod
def a(self):
...
class UpperFoo(Foo):
# Everything should be implemented as intended or else...
...
```
1. Not implementing the abstract property `a` or implementing it as anything but a `property` (type) will result in a TypeError
2. The trick with the class member is quite different and uses a more complex approach with annotations, but the result is almost the same.
Not implementing the abstract class member `foo_member` will result in a NotImplementedError. | You are overriding the `kaka` property in the child class. You must also use `@property` to decorate the overridden methods in the child:
```
from abc import ABC, abstractmethod
class Kuku(ABC):
def __init__(self):
self.a = 4
@property
@abstractmethod
def kaka(self):
pass
class KukuChild(Kuku):
@property
def kaka(self):
return 3
KukuChild().kaka # <-- use as an attribute, not a method call.
``` | 17,140 |
44,825,529 | I am looking for a piece of software (python preferred, but really anything for which a jupyter kernel exists) to fit a data sample to a mixture of t-distributions.
I searched quite a while already and it seems that this is a somewhat obscure endeavor, as most search results turn up for mixtures of Gaussians (which I am not interested in here).
The most promising candidates so far are the "AdMit" and "MitSEM" R packages. However, I do not know R and find the descriptions of these packages rather complex, and it seems their core objective is not the fitting of mixtures of t's; instead they use this as a step to accomplish something else.
This is in a nutshell what I want the software to accomplish:
Fitting a mixture of t-distributions to some data and estimating the "location", "scale" and "degrees of freedom" for each component.
I hope someone can point me to a simple package; I can't believe that this is such an obscure use case. | 2017/06/29 | [
"https://Stackoverflow.com/questions/44825529",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1639834/"
] | This seems to work (in R):
Simulate example:
```
set.seed(101)
x <- c(5+ 3*rt(1000,df=5),
10+1*rt(10000,df=20))
```
Fit:
```
library(teigen)
tt <- teigen(x,
Gs=2, # two components
scale=FALSE,dfupdate="numeric",
models=c("univUU") # univariate model, unconstrained scale and df
# (i.e. scale and df can vary between components)
)
```
The parameters are all reasonably close (except for the df for the second component, but this is a very tough thing to estimate ...)
```
tt$parameters[c("df","mean","sigma","pig")]
## $df ## degrees of freedom
## [1] 3.578491 47.059841
## $mean ## ("location")
## [,1]
## [1,] 4.939179
## [2,] 10.002038
## $sigma ## reporting variance rather than sd (I think?)
## , , 1
## [,1]
## [1,] 8.763076
## , , 2
## [,1]
## [1,] 1.041588
## $pig ## mixture probabilities
## [1] 0.09113273 0.90886727
``` | Late to this party but since you prefer something for Python, there appear to be several packages out there on pypi that fit finite Student's t mixtures, including:
<https://pypi.org/project/studenttmixture/>
<https://pypi.org/project/student-mixture/>
<https://pypi.org/project/smm/>
so all of these can be installed with pip.
Scikit-learn and the other usual suspects obviously don't have this functionality at this time. | 17,141 |
8,251,039 | I am currently writing a script where I want to take the data and write it to a spreadsheet. I've found a few modules for writing xls files, however those only seem to work up to python 2.x, and I'm using 3.2 (also on a mac if that's helpful). Anyone have any ideas on how to get a python3.2 script to output to a spreadsheet? | 2011/11/24 | [
"https://Stackoverflow.com/questions/8251039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/242191/"
] | Use the **[csv](http://docs.python.org/release/3.0.1/library/csv.html)** module. The intro from the docs:
>
> The csv module implements classes to read and write tabular data in
> CSV format. It allows programmers to say, “write this data in the
> format preferred by Excel,” or “read data from this file which was
> generated by Excel,” without knowing the precise details of the CSV
> format used by Excel. Programmers can also describe the CSV formats
> understood by other applications or define their own special-purpose
> CSV formats.
>
>
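For instance, a minimal sketch of writing rows that Excel can open (the file name and rows are made-up examples):
```
import csv

rows = [["name", "score"], ["alice", 10], ["bob", 7]]
with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(rows)  # Excel opens the resulting .csv directly
```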
| On Windows, you can use the COM interface: <http://users.rcn.com/python/download/quoter.pyw>
As @sdolan pointed out, CSV can be a good choice if your data is laid out in a tabular format.
Since Excel can save spreadsheets in an XML format, you can use XML tools to access the data. | 17,142 |
19,485,233 | I am a complete newb at python :P. How can I return just the third word of a string using string slicing? Am I close with:
```
splitString = myString.split()
print splitString[2]
``` | 2013/10/21 | [
"https://Stackoverflow.com/questions/19485233",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2517330/"
] | ```
for (int i = 0; i < 5; i++){
int asciiVal = rand()%26 + 97;
char asciiChar = asciiVal;
cout << asciiChar << " and ";
}
``` | To convert an `int` ASCII value to character you can also use:
```
int asciiValue = 65;
char character = char(asciiValue);
cout << character; // output: A
cout << char(90); // output: Z
``` | 17,143 |
11,296,768 | Ok, so I got Python to run in the command prompt; I just can't figure out the syntax to call scripts from it. My file is in c:\python\script, so I've been calling it like this:
```
"C:\Python\Script"
```
but it doesn't do anything and returns
```
""File<stdin>", line 1"
``` | 2012/07/02 | [
"https://Stackoverflow.com/questions/11296768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1469751/"
] | Is it possible that the connections in question are being intercepted by an enterprise proxy like [bluecoat](http://www.bluecoat.com/) or [websense](http://www.websense.com/) that's middling the SSL session? | Altering the certificate would break its signature, and as your validation shows that something alters the certificate, you should look at *what* changes the certificate, not "how" it's done.
The change is simple - as the certificate is self-signed, someone can just create another self-signed certificate with their own keypair and put a different Subject or Issuer in the certificate. Not a big deal. The goal is obviously to capture and decode the traffic by installing a man-in-the-middle proxy. | 17,146 |
61,335,488 | I'm using a Nodejs server for a WebApp and Mongoose is acting as the ORM.
I've got some hooks that fire when data is inserted into a certain collection.
I want those hooks to fire when a Python script inserts into the MongoDB instance. So if I have a pre-save hook, it would modify the Python script's insert according to that hook.
Is this possible? If so, How do I do it?
If not, please feel free to explain to me why this is impossible and/or why I'm stupid.
`EDIT:` I came back to this question some months later and cringed at how green I was when I asked it. All I really needed to do was create an API endpoint/flag on the NodeJS server specifically for automated tasks like the Python script to send data to, and let Mongoose structure the data on the NodeJS side. | 2020/04/21 | [
"https://Stackoverflow.com/questions/61335488",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11370450/"
] | It is impossible because Python and NodeJS are 2 different runtimes - separate, isolated processes which don't have access to each other's memory.
Mongoose is a nodejs ORM - a library that maps Javascript objects to Mongodb documents and handles queries to the database.
All mongoose hooks belong to JavaScript space. They are executed on JavaScript objects before Mongoose sends any request to mongo. 2 outcomes from there: no other process can mess with these hooks, not even another NodeJS process, and once the query reaches mongodb it's final - no more hooks, no more modifications.
As they say, a picture is worth 100 words:
[](https://i.stack.imgur.com/aXsp4.png)
Neither Python nor mongo is aware of mongoose hooks. All queries to mongo are initiated on the client side - a script sends a request to modify the state of the database or to query its state.
**The only way to trigger a javascript code execution from an update on mongodb side is to use [change streams](https://mongoosejs.com/docs/models.html#change-streams)**
Change streams are not mongoose hooks but can be used to hook into the updates on mongo side. It's a bit more advanced use of the database. It comes with additional requirements for mongo set up, size of the oplog, availability of the changestream clients, error handling etc.
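For example, a sketch of consuming a change stream from Python with pymongo (the connection string and database/collection names are placeholders; change streams require a replica set):
```
# React from Python to inserts made by any client (Node/Mongoose included).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI
collection = client["mydb"]["mycollection"]                        # placeholder names

# watch() yields one change document per matching event
with collection.watch([{"$match": {"operationType": "insert"}}]) as stream:
    for change in stream:
        print("new document:", change["fullDocument"])
```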
You can learn more about change streams here: <https://docs.mongodb.com/manual/changeStreams/>. I would strongly recommend seeking professional advice to architect such a setup, to avoid frustration and unexpected behaviour. | Mongo itself does not support hooks as a feature; `mongoose` gives you out-of-the-box hooks you can use, as you've mentioned. So what can you do to make it work in Python?
1. Use an existing framework like python's [eve](https://docs.python-eve.org/en/stable/features.html#insert-events), eve gives you database hooks, much like `mongoose` does. Now eve is a REST api framework which from your description doesn't sound like what you're looking for. Unfortunately I do not know of any package that's a perfect fit to your needs (if you do find one it would be great if you share a link in your question).
2. Build your own custom wrapper like [this](https://github.com/airflow-plugins/mongo_plugin/blob/master/hooks/mongo_hook.py) one. You can build a custom wrapper class real quick and implement your own logic very easily. | 17,147 |
57,396,394 | I have two dataframes, one bigger, with names and family names, defined as a multi-index (Family and name) dataframe:
```
Age Weight
Family Name
Marge
SIMPSON Bart
Lisa
Homer
Harry
POTTER Lilian
Lisa
James
```
And the other df is smaller, containing just some of the names of the first df:
```
Family Name
SIMPSON Lisa
SIMPSON Bart
POTTER Lisa
```
I want to filter the first df to show just the names that exist in the second df.
To explain better, as a reference, in Excel I would create an extra column and type (supposing that the second df is in Sheet2)
`=COUNTIFS(Sheet2!A:Sheet2!A,A1,Sheet2!B:Sheet2!B,B1)`
Then I would filter rows that are equal to 1 in the created column.
Ps: I am not asking how to replicate the Excel code exactly, because I know that there's probably a simpler way to do it in python. | 2019/08/07 | [
"https://Stackoverflow.com/questions/57396394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9975452/"
] | Your `df1` has a MultiIndex, so a normal filter will not work; we can try `reindex`:
```
df1 = df1.reindex(pd.MultiIndex.from_frame(df2))
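# note (sketch): the result keeps exactly the (Family, Name) pairs listed in df2;
# pairs absent from df1 show up as NaN rows and can be dropped with .dropna()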
``` | Let `df1` be the bigger dataframe with the `MultiIndex` and `df2` the smaller one with names.
Then you could do something like this:
```
names = set(df2.Name.astype(str).values)
df1 = df1.loc[df1.index.get_level_values('Name').isin(names)]
``` | 17,148 |
32,667,047 | I want to program the following (I've just started to learn python):
```
f[i]:=f[i-1]-(1/n)*(1-(1-f[i-1])^n)-(1/n)*(f[i-1])^n+(2*f[0]/n);
```
with `f[0]=x`, where `x` belongs to `[0,1]` and `n` is a constant integer.
My try:
```
import pylab as pl
import numpy as np
N=20
n=100
h=0.01
T=np.arange(0, 1+h, h)
def f(i):
if i == 0:
return T
else:
return f(i-1)-(1./n)*(1-(1-f(i-1))**n)-(1./n)*(f(i-1))**n+2.*T/n
pl.figure(figsize=(10, 6), dpi=80)
pl.plot(T,f(N), color="red",linestyle='--', linewidth=2.5)
pl.show()
```
For `N=10` (number of iterations) it returns the correct plot fast enough, but for `N=20` it keeps running and running (more than 30 minutes already). | 2015/09/19 | [
"https://Stackoverflow.com/questions/32667047",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5353182/"
] | You calculate `f(i-1)` three times in a single recursion layer - so after the first run you "know" the answer but still calculate it two more times. A naive approach:
```
fi_1 = f(i-1)
return fi_1-(1./n)*(1-(1-fi_1)**n)-(1./n)*(fi_1)**n+2.*T/n
```
But of course we can still do better and cache **every** evaluation of f:
```
cache = {}
def f_cached(i):
if not i in cache:
cache[i] = f(i)
return(cache[i])
```
Then replace every occurrence of `f` with `f_cached`.
There are also libraries out there that can do that for you automatically (with a decorator).
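For instance, a sketch with the standard library's memoization decorator (Python 3's `functools.lru_cache`; `T` and `n` as defined in the question):
```
from functools import lru_cache

@lru_cache(maxsize=None)  # remembers the result for every i it has seen
def f(i):
    if i == 0:
        return T
    prev = f(i - 1)  # computed once per i, then served from the cache
    return prev - (1./n)*(1 - (1 - prev)**n) - (1./n)*prev**n + 2.*T/n
```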
While recursion often yields nice and easy formulas, python is not that good at evaluating them (see tail recursion). You are probably better off rewriting it in an iterative way and computing it that way. | First of all, you are calculating f[i-1] three times when you can save its result in a variable and calculate it only once:
```
t = f(i-1)
return t-(1./n)*(1-(1-t)**n)-(1./n)*(t)**n+2.*T/n
```
It will increase the speed of the program, but I would also recommend calculating f without using recursion:
```
fs = T
for i in range(1,N+1):
tmp = fs
fs = (tmp-(1./n)*(1-(1-tmp)**n)-(1./n)*(tmp)**n+2.*T/n)
``` | 17,151 |
52,297,298 | I'm facing some issues while trying to fetch bulk mail via Python's win32com.client.
Basically, it seems like there's a limit on the number of items that can be opened in a single session, and that is a server-side flag or status...
The problem is that I didn't find any way to resume/close/reset it, and I can't ask the sysadmins to do it for me...
Here's my code snippet - I know it's ugly - but I'm using it in a Jupyter notebook just to experiment/play around before arranging it properly.
```
import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
def create_message_list(folder):
return [{
"folder": folder.Name,
"sender" : m.Sender,
"recipients" : m.Recipients,
"subject" : m.subject,
"body":m.body
} for m in folder.Items]
for folder in outlook.Folders:
if (folder.Name=="myfolder.name"):
message_list = create_message_list(folder)
```
and here's what i get:
>
> com\_error: (-2147352567, 'Exception occurred.', (4096, 'Microsoft
> Outlook', 'Your server administrator has limited the number of items
> you can open simultaneously. Try closing messages you have opened or
> removing attachments and images from unsent messages you are
> composing.', None, 0, -2147220731), None).
>
>
>
Notice that I *had my sketch code working for a while*. I could fetch something like 10k messages before it failed with that error.
Things I've been trying (with no result):
- try to close the folder / mapi session
- running a for loop on the same items and calling the message.Close(0)
- using the GetFirst() / GetLast() methods on message instead of list comprehension
Thanks, anyone, for ideas/suggestions/whatever. | 2018/09/12 | [
"https://Stackoverflow.com/questions/52297298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8979081/"
] | The problem was that storing the message data in a dictionary somehow held a reference to each item, preventing the resource from being released, even when an explicit `m.Close(0)` was invoked.
I've replaced them all with a `"dictkey" : str(m.<field>)` call, so only plain strings are stored, and the error does not show up anymore. | You keep *all* items in a folder open - that is a really bad idea. Store only the entry ids, and reopen the messages on demand using `Namespace.GetItemFromID`. As soon as you are done with the item, release it. | 17,153 |
11,632,905 | >
> **Possible Duplicate:**
>
> [Inverse dictionary lookup - Python](https://stackoverflow.com/questions/2568673/inverse-dictionary-lookup-python)
>
> [reverse mapping of dictionary with Python](https://stackoverflow.com/questions/3221475/reverse-mapping-of-dictionary-with-python)
>
>
>
How do I get the key of an index in a dictionary?
For example like:
```
i = {'a': 0, 'b': 1, 'c': 2}
```
so if I want to get the key of i[0], it will return 'a' | 2012/07/24 | [
"https://Stackoverflow.com/questions/11632905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1485698/"
] | Python dictionaries have keys and values; what you are asking for is which key(s) point to a given value.
You can only do this in a loop:
```
[k for (k, v) in i.iteritems() if v == 0]
```
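On Python 3, where `iteritems()` no longer exists, the same lookup is:
```
# dict.items() replaces the old iteritems()
[k for (k, v) in i.items() if v == 0]
```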
Note that there can be more than one key per value in a dict; `{'a': 0, 'b': 0}` is perfectly legal.
If you want ordering you either need to use a list or an [OrderedDict instance](http://docs.python.org/library/collections.html#collections.OrderedDict) instead:
```
items = ['a', 'b', 'c']
items.index('a') # gives 0
items[0] # gives 'a'
``` | By definition dictionaries are unordered, and therefore cannot be indexed. For that kind of functionality use an ordered dictionary. [Python Ordered Dictionary](http://docs.python.org/library/collections.html) | 17,156 |
56,191,147 | I'm trying to extract user identities from a smartcard, and I need to match this pattern: `CN=LAST.FIRST.MIDDLE.0000000000`
And have this result returned: `FIRST.LAST`
This would normally be easy if I were doing this in my own code:
```
# python example
import regex
string = 'CN=LAST.FIRST.MIDDLE.000000000'
pattern = r'CN=(\w+)\.(\w+)\.'
match = regex.search(pattern, string)
parsedResult = match.groups()[1] + '.' + match.groups()[0]
```
Unfortunately, I am matching a pattern using [Keycloaks X.509 certmap web form](https://www.keycloak.org/docs/latest/server_admin/index.html#adding-x-509-client-certificate-authentication-to-a-browser-flow).
I am limited to using only one regular expression, and the regular expression can only contain one capturing group. This is an HTML form so there is no actual code used here, just a single regular expression.
It seems as if I need to have sub-capturing groups, and return the second matched group first and then the first matched group, all within the main capturing group. Is it possible for something like this to be done?
Also, I assume we are limited to whatever features are supported by Java because that is what the app runs on. | 2019/05/17 | [
"https://Stackoverflow.com/questions/56191147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5357869/"
] | I don't think this is possible with just one capturing group. If I read the documentation of keycloak correctly, the capturing group is actually the result of the regular expression. So you can either match FIRST or LAST or both in the original order, but not reorder. | Yes, it is possible. This expression might help you to do so:
```
CN=([A-Z]+)\.(([A-Z]+)+)\.([A-Z]+)\.([0-9]+)
```
### [Demo](https://regex101.com/r/iosym4/1)
[](https://i.stack.imgur.com/69QLP.png)
### RegEx
If this wasn't your desired expression, you can modify/change your expressions at [regex101.com](https://regex101.com/r/iosym4/1). For example, you can reduce the boundaries of the expression and simplify it considerably, if you want. For example, this would also work:
```
CN=(\w+)\.(\w+)(.*)
```
### RegEx Circuit
You can also visualize your expressions in [jex.im](https://jex.im/regulex/#!flags=&re=%5E(a%7Cb)*%3F%24):
[](https://i.stack.imgur.com/D3VU2.png)
### Python Test
```
# coding=utf8
# the above tag defines encoding for this document and is for Python 2.x compatibility
import re
regex = r"CN=([A-Z]+)\.(([A-Z]+)+)\.([A-Z]+)\.([0-9]+)"
test_str = "CN=LAST.FIRST.MIDDLE.000000000"
subst = "\\2\\.\\1"
# You can manually specify the number of replacements by changing the 4th argument
result = re.sub(regex, subst, test_str, 0, re.MULTILINE)
if result:
print (result)
# Note: for Python 2.7 compatibility, use ur"" to prefix the regex and u"" to prefix the test string and substitution.
```
### JavaScript Demo
```js
const regex = /CN=([A-Z]+)\.(([A-Z]+)+)\.([A-Z]+)\.([0-9]+)/gm;
const str = `CN=LAST.FIRST.MIDDLE.000000000`;
const subst = `$2\.$1`;
// The substituted value will be contained in the result variable
const result = str.replace(regex, subst);
console.log('Substitution result: ', result);
``` | 17,159 |
54,604,608 | I have about 30 SEM (scanning-electron microscope) images like that:
[](https://i.stack.imgur.com/uFHNf.png)
What you see is photoresist pillars on a glass substrate.
What I would like to do is get the mean diameter in x- and y-direction as well as the mean period in x- and y-direction.
Now, instead of doing all the measurements manually, I was wondering if maybe there is a way to **automate it using python and opencv**?
EDIT:
I tried the following code; it seems to be *working to detect circles*, **but what I actually need are ellipses,** since I need the diameter in both the x- and y-direction.
... and I don't quite see how to get the scale yet?
```
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread("01.jpg",0)
output = img.copy()
edged = cv2.Canny(img, 10, 300)
edged = cv2.dilate(edged, None, iterations=1)
edged = cv2.erode(edged, None, iterations=1)
# detect circles in the image
circles = cv2.HoughCircles(edged, cv2.HOUGH_GRADIENT, 1.2, 100)
# ensure at least some circles were found
if circles is not None:
# convert the (x, y) coordinates and radius of the circles to integers
circles = np.round(circles).astype("int")
# loop over the (x, y) coordinates and radius of the circles
for (x, y, r) in circles[0]:
print(x,y,r)
# draw the circle in the output image, then draw a rectangle
# corresponding to the center of the circle
cv2.circle(output, (x, y), r, (0, 255, 0), 4)
cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
# show the output image
plt.imshow(output, cmap = 'gray', interpolation = 'bicubic')
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.figure()
plt.show()
```
[](https://i.stack.imgur.com/aaITo.png)
Source of inspiration: <https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/> | 2019/02/09 | [
"https://Stackoverflow.com/questions/54604608",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | I rarely find Hough useful for real-world applications, so I'd rather follow the path of denoising, segmentation, and ellipse fitting.
For the denoising, one selects non-local means (NLM). For the segmentation --- just looking at the image --- I came up with a Gaussian mixture model with three classes: one for the background and two for the object (diffuse and specular component). Here, the mixture model essentially models the shape of the graylevel image histogram by three Gaussian functions (as demonstrated in the [Wikipedia mixture-histogram gif](https://commons.wikimedia.org/wiki/File:Movie.gif)). The interested reader is referred to the [Wikipedia article](https://en.wikipedia.org/wiki/Mixture_model).
Ellipse fit at the end is just an elementary OpenCV-tool.
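In Python, that last step looks roughly like this (a sketch on a toy blob; the `findContours` return values assume OpenCV >= 4):
```
import cv2 as cv
import numpy as np

# toy binary mask standing in for one segmented pillar
mask = np.zeros((200, 200), np.uint8)
cv.circle(mask, (100, 100), 40, 255, -1)

contours, _ = cv.findContours(mask, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)
for c in contours:
    if len(c) >= 5:  # fitEllipse needs at least 5 points
        (cx, cy), (d1, d2), angle = cv.fitEllipse(c)
        print(cx, cy, d1, d2, angle)  # the two axis lengths are the diameters
```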
In C++, but analogous to OpenCV-Python:
```
#include "opencv2/ml.hpp"
#include "opencv2/photo.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
void gaussianMixture(const cv::Mat &src, cv::Mat &dst, int nClasses )
{
if ( src.type()!=CV_8UC1 )
CV_Error(CV_StsError,"src is not 8-bit grayscale");
// reshape
cv::Mat samples( src.rows * src.cols, 1, CV_32FC1 );
src.convertTo( cv::Mat( src.size(), CV_32FC1, samples.data ), CV_32F );
cv::Mat labels;
cv::Ptr<cv::ml::EM> em = cv::ml::EM::create();
em->setClustersNumber( nClasses );
em->setTermCriteria( cv::TermCriteria(CV_TERMCRIT_ITER, 4, 0.0 ) );
em->trainEM( samples );
if ( dst.type()!=CV_8UC1 || dst.size()!=src.size() )
dst = cv::Mat( src.size(),CV_8UC1 );
for(int y=0;y<src.rows;++y)
{
for(int x=0;x<src.cols;++x)
{
dst.at<unsigned char>(y,x) = em->predict( src.at<unsigned char>(y,x) );
}
}
}
void automate()
{
cv::Mat input = cv::imread( /* input image in color */,cv::IMREAD_COLOR);
cv::Mat inputDenoised;
cv::fastNlMeansDenoising( input, inputDenoised, 8.0, 5, 17 );
cv::Mat gray;
cv::cvtColor(inputDenoised,gray,cv::COLOR_BGR2GRAY );
gaussianMixture(gray,gray,3 );
typedef std::vector< std::vector< cv::Point > > VecOfVec;
VecOfVec contours;
cv::Mat objectPixels = gray>0;
cv::findContours( objectPixels, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE );
cv::Mat inputcopy; // for drawing of ellipses
input.copyTo( inputcopy );
for ( size_t i=0;i<contours.size();++i )
{
if ( contours[i].size() < 5 )
continue;
cv::drawContours( input, VecOfVec{contours[i]}, -1, cv::Scalar(0,0,255), 2 );
cv::RotatedRect rect = cv::fitEllipse( contours[i] );
cv::ellipse( inputcopy, rect, cv::Scalar(0,0,255), 2 );
}
}
```
I should have cleaned the very small contours (upper row, second image) - larger than the minimum 5 points but still tiny - before drawing ellipses.
**\* edit \***
Added a Python predictor without the denoising and find-contours parts. After learning the model, the time to predict is about 1.1 seconds.
```
import time
import numpy as np
import cv2 as cv

img = cv.imread('D:/tmp/8b3Lm.jpg', cv.IMREAD_GRAYSCALE )
class Predictor :
def train( self, img ):
self.em = cv.ml.EM_create()
self.em.setClustersNumber( 3 )
self.em.setTermCriteria( ( cv.TERM_CRITERIA_COUNT,4,0 ) )
samples = np.reshape( img, (img.shape[0]*img.shape[1], -1) ).astype('float')
self.em.trainEM( samples )
def predict( self, img ):
samples = np.reshape( img, (img.shape[0]*img.shape[1], -1) ).astype('float')
labels = np.zeros( samples.shape, 'uint8' )
for i in range ( samples.shape[0] ):
retval, probs = self.em.predict2( samples[i] )
labels[i] = retval[1] * (255/3) # make it [0,255] for imshow
return np.reshape( labels, img.shape )
predictor = Predictor()
predictor.train( img )
t = time.perf_counter()
predictor.train( img )
t = time.perf_counter() - t
print ( "train %s s" %t )
t = time.perf_counter()
labels = predictor.predict( img )
t = time.perf_counter() - t
print ( "predict %s s" %t )
cv.imshow( "prediction", labels )
cv.waitKey( 0 )
```
[](https://i.stack.imgur.com/8b3Lm.jpg)
[](https://i.stack.imgur.com/NFacS.jpg)
[](https://i.stack.imgur.com/cahH8.jpg)
[](https://i.stack.imgur.com/OXn51.png) | I would go with the `HoughCircles` method from OpenCV. It will give you all the circles in the image. Then it will be easy to compute the radius and the position of each circle.
Look at: <https://docs.opencv.org/3.4/d4/d70/tutorial_hough_circle.html> | 17,160 |
12,285,754 | >
> **Possible Duplicate:**
>
> [Python dictionaries - find second character in a 2-character string which yields minimum value](https://stackoverflow.com/questions/12284913/python-dictionaries-find-second-character-in-a-2-character-string-which-yields)
>
>
>
I would like to submit the first item of a tuple key and return the remaining item of that key which minimizes the tuple key value.
For example:
```
d = {('a','b'): 100,
('a','c'): 200,
('a','d'): 500}
```
If I were to pass in `'a'`, I would like to return `'b'`. | 2012/09/05 | [
"https://Stackoverflow.com/questions/12285754",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1096991/"
] | ```
def func(d,y):
lis=sorted((x for x in d.items() if x[0][0]==y),key=lambda x:x[1])
return lis[0][0][1]
d ={('a','b'):100,('a','c'):200,('a','d'):500,('b','c'):1000,('b','e'):100}
```
output:
```
>>> func(d,'a')
'b'
>>> func(d,'b')
'e'
``` | ```
def minval(my_dict,var_name):
return min(filter(lambda x: x[0][0] == var_name,my_dict.items()),key=lambda x:x[1])[0][1]
print minval(d,'a')
```
I think Ashwin's answer is probably better by pythonic "simple is better than complex" standards, and they probably perform similarly on a time scale ... his may even be faster ... | 17,162 |
17,499,757 | I have configured a keyboard shortcut using xbindkeys to run a python script.
Now, while editing any vim file, if the user presses that keyboard shortcut:
* I want my python script to run this command to put the path and line number into the system clipboard:
`:let @+=expand("%") . ':' . line(".")`
* Then I want my script to copy that path from the system clipboard and process it further
Can you please suggest a good solution for this?
Thanks in advance. | 2013/07/06 | [
"https://Stackoverflow.com/questions/17499757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1908544/"
] | Try adding:
```
position:absolute;
bottom: 0;
```
to your footer selector. | Well maybe it's because you have a min-height of 95%. If not, you can try:
```
#footer {
position: absolute;
bottom: 0;
margin: 0 auto;
}
``` | 17,164 |
12,080,786 | I am trying to execute a MySQL query which needs to contain % characters... While building the query, I run into a problem: Python sees the % and tries to treat it as a format placeholder:
```
statmt="select id from %s WHERE `email` LIKE %blah%" % (tbl)
self.cursor.execute(statmt)
```
This naturally barfs with:
```
statmt="select id from %s WHERE `email` LIKE %blah%" % (tbl)
TypeError: not enough arguments for format string
```
How should I fix this so Python stops reading this as a variable, and takes it in as part of the string?
Thanks! | 2012/08/22 | [
"https://Stackoverflow.com/questions/12080786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/650424/"
] | When needing a literal `%` inside a Python formatting expression, use `%%`:
```
statmt="select id from %s WHERE `email` LIKE '%%blah%%'" % (tbl)
```
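For instance (a sketch with a made-up table name), the `%%` collapses to a single `%` after formatting:
```
tbl = "users"  # made-up example table name
statmt = "select id from %s WHERE `email` LIKE '%%blah%%'" % (tbl)
print(statmt)  # select id from users WHERE `email` LIKE '%blah%'
```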
See the documentation [section 5.6.2. String Formatting Operations](http://docs.python.org/library/stdtypes.html#string-formatting-operations) for more information. | You don't need to use string interpolation. The execute method handles it for you, so you can do this instead:
```
statmt = "select id from " + tbl + " WHERE `email` LIKE %s"
self.cursor.execute(statmt, ('%blah%',))
``` | 17,173 |
32,239,094 | I have text files which look like this (much longer, this is just some lines from it):
```
ATOM 6 H2 ACD Z 1 47.434 34.593 -4.121 1.000
ATOM 7 C ACT Z 2 47.465 33.050 -2.458 1.000
ATOM 8 O ACT Z 2 48.004 33.835 -1.687 1.000
ATOM 9 CH1 ACT Z 2 47.586 33.197 -3.960 1.000
ATOM 79 H1 EOL Z 14 46.340 32.495 2.495 1.000
ATOM 80 C1 PHN Z 15 46.992 33.059 -2.874 1.000
ATOM 81 C2 PHN Z 15 46.876 32.765 -1.499 1.000
ATOM 82 C3 PHN Z 15 46.836 31.422 -1.079 1.000
```
In the 6th "column" I have to change all of the numbers to 1, without changing the other numbers. I tried to define a function (replace\_numbers) but it doesn't work. That is, the script runs and everything is OK, but the numbers don't change.
Here is an example what I want:
```
ATOM 6 H2 LIG Z 1 47.434 34.593 -4.121 1.000
ATOM 7 C LIG Z 1 47.465 33.050 -2.458 1.000
ATOM 8 O LIG Z 1 48.004 33.835 -1.687 1.000
ATOM 9 CH1 LIG Z 1 47.586 33.197 -3.960 1.000
ATOM 79 H1 LIG Z 1 46.340 32.495 2.495 1.000
ATOM 80 C1 LIG Z 1 46.992 33.059 -2.874 1.000
ATOM 81 C2 LIG Z 1 46.876 32.765 -1.499 1.000
ATOM 82 C3 LIG Z 1 46.836 31.422 -1.079 1.000
```
I copy my whole script. Comments are in Hungarian.
```
#!/usr/bin/python
import os
import sys
import re
# en kodom
molecules = {"ETH":"LIG", "EOL":"LIG", "THS":"LIG", "BUT":"LIG", "ACN":"LIG",
"AMN":"LIG", "DFO":"LIG", "DME":"LIG", "BDY":"LIG", "BEN":"LIG",
"CHX":"LIG", "PHN":"LIG", "ACD":"LIG", "ACT":"LIG", "ADY":"LIG",
"URE":"LIG"}
numbers = {x: '1' for x in range(1, 50)}
def replace_numbers(i):
i_list = i.split()
if i_list[0] == "ATOM":
i_list[5] = '1 '
i_list[0] = i_list[0] + ' '
i_list[1] = i_list[1] + ' '
i_list[2] = i_list[2] + ' '
i_list[3] = i_list[3] + ' '
i_list[4] = i_list[4] + ' '
i_list[6] = i_list[6] + ' '
i_list[7] = i_list[7] + ' '
i_list[8] = i_list[8] + ' '
i = ''.join(i_list)
return i
def replace_all(text, dic):
for z, zs in dic.iteritems():
text = text.replace(z, zs)
return text
# en kodom end
def split_file(f, n, dirname):
data = f.readlines() # az input fajl minden sorat olvassa es listat csinal a sorokbol
concat_data = "".join(data) # egy olyan szoveget ad vissza ami a data-bol all
split_data = concat_data.split("HEADER ") # felbontja a concat_data-t, a hatarolojel a HEADER - nincs benne
header = ""
result = []
for i in split_data:
if i.startswith("protein"):
header = i
if i.startswith("crosscluster"):
crs_cluster_num = int(re.findall(r"crosscluster\.\d*\.(\d*)\.pdb", i)[0])
# ez hogy csinalja amit csinal?
if crs_cluster_num > 16:
#en kodom
i = replace_all(i, molecules)
i = replace_numbers(i)
#en kodom end
result.append(i) # hozzaadja a result nevu listahoz
for output in result:
with open(str(dirname) + "_output"+str(n)+".pdb","w") as out_file: # az aoutput nevet es helyet adja meg
out_file.write("HEADER " + header) # hozzaadja a HEADER-t es beirja a proteint
out_file.write("HEADER " + output) # hozzaadja a HEADER szoveget illetve proteineket egyesevel
out_file.write("#PROTEINTAG\n#PROBETAG ZYXWVUTSR") # hozzaadja az utolso sorokat a vegehez
out_file.close()
n += 1 # ?
return n
if __name__ == "__main__": # ?
n = 1
for dirname, dirnames, filenames in os.walk("/home/georgina/proba"):
for filename in filenames:
file_path = str(os.path.join(dirname, filename))
print dirname
if filename.endswith(".pdb"):
file_to_split = open(file_path, "r") # megnyitja a szetbontando fajlt
n = split_file(file_to_split, n, dirname) # a split_file funkcio behivasa
file_to_split.close()
```
I tried this with regular expression, with same result.
```
def replace_numbers(text):
expr = re.compile(r'(LIG )([A-Z])\s*\d*(\s*)')
expr.sub(r'\1\2 1,\3', text)
return text
``` | 2015/08/27 | [
"https://Stackoverflow.com/questions/32239094",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5261433/"
] | I was looking for the same thing. I ended up with the following solution:
```
import numpy
import pandas
import matplotlib.pyplot as plt

figure = plt.figure(figsize=(6, 9), dpi=100)
graph = figure.add_subplot(111)
freq = pandas.value_counts(data)  # 'data' is the series being examined
bins = freq.index
x = graph.bar(bins, freq.values)  # gives the graph without NaN
graphmissing = figure.add_subplot(111)
y = graphmissing.bar([0], freq[numpy.NaN])  # gives a bar for the number of missing values at x=0
figure.show()
```
This gave me a histogram with a column at 0 showing the number of missing values in the data. | As pointed out by [Sreeram TP](https://stackoverflow.com/users/7896849/sreeram-tp), it is possible to use the argument dropna=False in the function value\_counts to include the counts of NaNs.
```
import numpy as np
import pandas as pd

df = pd.DataFrame({'feature1': [1, 2, 2, 4, 3, 2, 3, 4, np.NaN],
                   'feature2': [4, 4, 3, 4, 1, 4, 3, np.NaN, np.NaN]})
# Calculates the histogram for feature1
counts = df['feature1'].value_counts(dropna=False)
counts.plot.bar(title='feat1', grid=True)
```
I cannot insert images, so here is the result:
[image plot here](https://i.stack.imgur.com/fdXtl.png) | 17,178 |
8,774,032 | I'm trying to send a POST request to a web app. I'm using the mechanize module (itself a wrapper of urllib2). Anyway, when I try to send a POST request, I get `UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 0: ordinal not in range(128)`. I tried putting the `unicode(string)`, the `unicode(string, encoding="utf-8")`, `unicode(string).encode()` etc, nothing worked - either returned the error above, or the `TypeError: decoding Unicode is not supported`
I looked at the other SO answers to similar questions, but none helped.
Thanks in advance!
**EDIT**: Example that produces an error:
```
prda = "šđćč" #valid UTF-8 characters
prda # typing in python shell
'\xc5\xa1\xc4\x91\xc4\x87\xc4\x8d'
print prda # in shell
šđćč
prda.encode("utf-8") #in shell
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 0: ordinal not in range(128)
unicode(prda)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 0: ordinal not in range(128)
``` | 2012/01/07 | [
"https://Stackoverflow.com/questions/8774032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/647897/"
] | I assume you're using Python 2.x.
Given a unicode object:
```
myUnicode = u'\u4f60\u597d'
```
encode it using utf-8:
```
mystr = myUnicode.encode('utf-8')
```
Note that you need to specify the encoding explicitly. By default it'll (usually) use ascii. | You don't need to wrap your chars in `unicode` calls, because they're already encoded :) if anything, you need to **DE**-code it to get a unicode object:
```
>>> s = '\xc5\xa1\xc4\x91\xc4\x87\xc4\x8d' # your string
>>> s.decode('utf-8')
u'\u0161\u0111\u0107\u010d'
>>> type(s.decode('utf-8'))
<type 'unicode'>
```
I don't know `mechanize` so I don't know exactly whether it handles it correctly or not, I'm afraid.
What I'd do with a regular `urllib2` POST call, would be to use `urlencode` :
```
>>> from urllib import urlencode
>>> postData = urlencode({'test': s }) # note I'm NOT decoding it
>>> postData
'test=%C5%A1%C4%91%C4%87%C4%8D'
>>> urllib2.urlopen(url, postData) # etc etc etc
``` | 17,180 |
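A short Python 2 sketch of the decode/encode round trip described above:
```
# -*- coding: utf-8 -*-
s = '\xc5\xa1\xc4\x91\xc4\x87\xc4\x8d'  # UTF-8 encoded byte string (str in Python 2)
u = s.decode('utf-8')                   # decode bytes -> unicode object
print repr(u)                           # u'\u0161\u0111\u0107\u010d'
print u.encode('utf-8') == s            # True: encoding restores the original bytes
```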
60,230,124 | I am trying to read a stream from kafka using pyspark. I am using **spark version 3.0.0-preview2** and **spark-streaming-kafka-0-10\_2.12**
Before this I just start zookeeper and kafka and create a new topic:
```
/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties
/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
/usr/local/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic data_wm
```
This is my code:
```
import pandas as pd
import os
import findspark
findspark.init("/usr/local/spark")
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("TestApp").getOrCreate()
df = spark \
.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092") \
.option("subscribe", "data_wm") \
.load()
value = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
```
This is how I run my script:
>
> sudo --preserve-env=pyspark /usr/local/spark/bin/pyspark --packages
> org.apache.spark:spark-streaming-kafka-0-10\_2.12:3.0.0-preview
>
>
>
As a result of this command I get this:
```
: resolving dependencies :: org.apache.spark#spark-submit-parent-0d7b2a8d-a860-4766-a4c7-141a902d8365;1.0
confs: [default]
found org.apache.spark#spark-streaming-kafka-0-10_2.12;3.0.0-preview in central
found org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.0.0-preview in central
found org.apache.kafka#kafka-clients;2.3.1 in central
found com.github.luben#zstd-jni;1.4.3-1 in central
found org.lz4#lz4-java;1.6.0 in central
found org.xerial.snappy#snappy-java;1.1.7.3 in central
found org.slf4j#slf4j-api;1.7.16 in central
found org.spark-project.spark#unused;1.0.0 in central :: resolution report :: resolve 380ms :: artifacts dl 7ms
:: modules in use:
com.github.luben#zstd-jni;1.4.3-1 from central in [default]
org.apache.kafka#kafka-clients;2.3.1 from central in [default]
org.apache.spark#spark-streaming-kafka-0-10_2.12;3.0.0-preview from central in [default]
org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.0.0-preview from central in [default]
org.lz4#lz4-java;1.6.0 from central in [default]
org.slf4j#slf4j-api;1.7.16 from central in [default]
org.spark-project.spark#unused;1.0.0 from central in [default]
org.xerial.snappy#snappy-java;1.1.7.3 from central in [default]
```
But I always get this error:
> df = spark \ ... .readStream \ ... .format("kafka") \ ...
>
> .option("kafka.bootstrap.servers", "localhost:9092") \ ...
>
> .option("subscribe", "data\_wm") \ ... .load() Traceback (most
> recent call last): File "", line 5, in File
> "/usr/local/spark/python/pyspark/sql/streaming.py", line 406, in load
> return self.\_df(self.\_jreader.load()) File "/usr/local/spark/python/lib/py4j-0.10.8.1-src.zip/py4j/java\_gateway.py",
> line 1286, in **call** File
> "/usr/local/spark/python/pyspark/sql/utils.py", line 102, in deco
> **raise converted pyspark.sql.utils.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the
> deployment section of "Structured Streaming + Kafka Integration**
> Guide".;
>
>
>
I don't know the cause of this error, please help | 2020/02/14 | [
"https://Stackoverflow.com/questions/60230124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5674606/"
] | I have successfully resolved this error on Spark 3.0.1 (using PySpark).
I would keep things simple and provide the desired packages through the `--packages` argument:
```bash
spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 MyPythonScript.py
```
**Mind the order of arguments otherwise it will throw an error.**
Where `MyPythonScript.py` has:
```py
from pyspark.sql import SparkSession

KAFKA_TOPIC = "data_wm"
KAFKA_SERVER = "localhost:9092"
# creating an instance of SparkSession
spark_session = SparkSession \
.builder \
.appName("Python Spark create RDD") \
.getOrCreate()
# Subscribe to 1 topic
df = spark_session \
.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", KAFKA_SERVER) \
.option("subscribe", KAFKA_TOPIC) \
.load()
print(df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)"))
If you check the documentation mentioned in the error, it indicates that you should use a different package - `spark-sql-kafka`, **not** `spark-streaming-kafka`. You can see in your `resolving dependencies` log section that you do not have it.
You can also add the packages via findspark rather than at the CLI. | 17,183
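A minimal sketch of that findspark route. The `add_packages` helper is an assumption about your findspark version; it must run before `init()` so the package reaches `PYSPARK_SUBMIT_ARGS`:
```
import findspark

# Assumption: findspark.add_packages is available in your findspark version;
# call it before init() so the package lands in PYSPARK_SUBMIT_ARGS.
findspark.add_packages(["org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1"])
findspark.init("/usr/local/spark")

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("TestApp").getOrCreate()
```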
57,372,207 | ```
G:\Git\advsol\projects\autotune>conda env create -f env.yml -n auto-tune
Using Anaconda API: https://api.anaconda.org
Fetching package metadata .................
ResolvePackageNotFound:
- matplotlib 2.1.1 py35_0
G:\Git\advsol\projects\autotune>
```
I have tried "conda install matplotlib==2.1.1" but it doesn't work.
This is the env.yml file:
```
name: auto-tune
channels:
- menpo
- conda-forge
- defaults
dependencies:
- backports=1.0=py35_1
- backports.functools_lru_cache=1.4=py35_1
- boost-cpp=1.64.0=vc14_1
- ca-certificates=2017.7.27.1=0
- certifi=2017.7.27.1=py35_0
- cloudpickle=0.4.0=py35_0
- colorama=0.3.9=py35_0
- cycler=0.10.0=py35_0
- dask-core=0.16.0=py_0
- decorator=4.1.2=py35_0
- eigen=3.3.3=0
- expat=2.2.5=vc14_0
- freetype=2.7=vc14_2
- future=0.16.0=py35_0
- icu=58.1=vc14_1
- imageio=2.2.0=py35_0
- ipykernel=4.6.1=py35_0
- ipyparallel=6.0.2=py35_0
- ipython=6.2.1=py35_0
- ipython_genutils=0.2.0=py35_0
- jedi=0.10.2=py35_0
- jpeg=9b=vc14_2
- jupyter_client=5.2.2=py35_0
- jupyter_core=4.4.0=py_0
- libiconv=1.14=vc14_4
- libpng=1.6.28=vc14_2
- libsodium=1.0.15=vc14_1
- libtiff=4.0.7=vc14_1
- libxml2=2.9.5=vc14_1
- matplotlib=2.1.1=py35_0
- mayavi=4.5.0=np111py35_vc14_1
- networkx=1.11=py35_0
- nlopt=2.4.2=py35_vc14_2
- olefile=0.44=py35_0
- openssl=1.0.2l=vc14_0
- pagmo=2.4=vc14_1
- pickleshare=0.7.4=py35_0
- pillow=4.3.0=py35_1
- prompt_toolkit=1.0.15=py35_0
- pygments=2.2.0=py35_0
- pygmo=2.4=np111py35_0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_4
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.3=py_2
- pywavelets=0.5.2=py35_1
- pyzmq=16.0.2=py35_3
- qt=5.6.2=vc14_1
- scikit-image=0.13.0=py35_3
- setuptools=36.6.0=py35_1
- simplegeneric=0.8.1=py35_0
- sip=4.18=py35_1
- six=1.11.0=py35_1
- sqlite=3.20.1=vc14_2
- tbb=2018_20170919=vc14_0
- tk=8.5.19=vc14_1
- toolz=0.8.2=py_2
- tornado=4.5.2=py35_0
- traitlets=4.3.2=py35_0
- traits=4.6.0=py35_1
- vc=14=0
- vs2015_runtime=14.0.25420=0
- vtk=7.1.0=py35_vc14_4
- wcwidth=0.1.7=py35_0
- wheel=0.30.0=py35_2
- win_unicode_console=0.5=py35_0
- wincertstore=0.2=py35_0
- yaml=0.1.7=vc14_0
- zeromq=4.2.3=vc14_2
- zlib=1.2.11=vc14_0
- configobj=5.0.6=py35_0
- icc_rt=2017.0.4=h97af966_0
- intel-openmp=2018.0.0=hcd89f80_7
- mkl=2018.0.0=h36b65af_4
- numpy=1.11.3=py35h4fc39be_3
- pip=9.0.1=py35_0
- scikit-learn=0.19.1=py35h2037775_0
- scipy=1.0.0=py35h75710e8_0
- apptools=4.4.0=py35_0
- boost=1.63.0=py35_vc14_2
- envisage=4.5.1=py35_0
- opencv3=3.1.0=py35_0
- pyface=5.1.0=py35_0
- traitsui=5.1.0=py35_0
- pip:
- absl-py==0.1.7
- backports.functools-lru-cache==1.4
- bleach==1.5.0
- colour-demosaicing==0.1.2
- colour-science==0.3.10
- dask==0.16.0
- entrypoints==0.2.3
- exifread==2.1.2
- gast==0.2.0
- html5lib==0.9999999
- ipython-genutils==0.2.0
- ipywidgets==7.1.1
- jinja2==2.10
- jsonschema==2.6.0
- jupyter==1.0.0
- jupyter-client==5.2.2
- jupyter-console==5.2.0
- jupyter-core==4.4.0
- markdown==2.6.10
- markupsafe==1.0
- mistune==0.8.3
- nbconvert==5.3.1
- nbformat==4.4.0
- nose==1.3.7
- notebook==5.4.0
- pandocfilters==1.4.2
- prompt-toolkit==1.0.15
- protobuf==3.5.1
- pywinpty==0.5.1
- qtconsole==4.3.1
- rawpy==0.10.1
- send2trash==1.4.2
- tb-nightly==1.5.0a20180102
- terminado==0.8.1
- testpath==0.3.1
- tf-nightly==1.5.0.dev20180102
- werkzeug==0.14.1
- widgetsnbextension==3.1.3
- win-unicode-console==0.5
prefix: G:\ProgramData\Anaconda3_501\envs\auto-tune
```
**Update: I tried conda install matplotlib=2.1.1**
```
G:\Git\advsol\projects\autotune>conda install matplotlib=2.1.1
Fetching package metadata .............
Solving package specifications: .
Package plan for installation in environment C:\ProgramData\Anaconda3:
The following NEW packages will be INSTALLED:
blas: 1.0-mkl
conda-package-handling: 1.3.11-py36_0
libarchive: 3.3.3-h798a506_1
lz4-c: 1.8.1.2-h2fa13f4_0
python-libarchive-c: 2.8-py36_11
tqdm: 4.32.1-py_0
xz: 5.2.4-h2fa13f4_4
zstd: 1.3.3-hfe6a214_0
The following packages will be UPDATED:
conda: 4.3.30-py36h7e176b0_0 --> 4.7.10-py36_0
conda-env: 2.6.0-h36134e3_1 --> 2.6.0-1
libxml2: 2.9.4-vc14h8fd0f11_5 --> 2.9.8-hadb2253_1
matplotlib: 2.1.0-py36h11b4b9c_0 --> 2.1.1-py36h2062329_0
menuinst: 1.4.10-py36h42196fb_0 --> 1.4.14-py36hfa6e2cd_0
openssl: 1.0.2l-vc14hcac20b0_2 --> 1.0.2p-hfa6e2cd_0
pycosat: 0.6.2-py36hf17546d_1 --> 0.6.3-py36hfa6e2cd_0
Proceed ([y]/n)? y
menuinst-1.4.1 100% |###############################| Time: 0:00:00 11.74 MB/s
blas-1.0-mkl.t 100% |###############################| Time: 0:00:00 1.55 MB/s
conda-env-2.6. 100% |###############################| Time: 0:00:00 938.32 kB/s
lz4-c-1.8.1.2- 100% |###############################| Time: 0:00:00 18.51 MB/s
openssl-1.0.2p 100% |###############################| Time: 0:00:00 34.74 MB/s
xz-5.2.4-h2fa1 100% |###############################| Time: 0:00:00 16.77 MB/s
libxml2-2.9.8- 100% |###############################| Time: 0:00:00 28.75 MB/s
pycosat-0.6.3- 100% |###############################| Time: 0:00:00 16.66 MB/s
tqdm-4.32.1-py 100% |###############################| Time: 0:00:00 11.74 MB/s
zstd-1.3.3-hfe 100% |###############################| Time: 0:00:00 20.92 MB/s
libarchive-3.3 100% |###############################| Time: 0:00:00 22.73 MB/s
python-libarch 100% |###############################| Time: 0:00:00 7.77 MB/s
conda-package- 100% |###############################| Time: 0:00:00 28.76 MB/s
matplotlib-2.1 100% |###############################| Time: 0:00:00 29.00 MB/s
conda-4.7.10-p 100% |###############################| Time: 0:00:00 27.90 MB/s
ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'defaults::tqdm-4.32.1-py_0'.
CondaError: Cannot link a source that does not exist. C:\ProgramData\Anaconda3\Scripts\conda.exe
Running `conda clean --packages` may resolve your problem.
Attempting to roll back.
CondaError: Cannot link a source that does not exist. C:\ProgramData\Anaconda3\Scripts\conda.exe
Running `conda clean --packages` may resolve your problem.
``` | 2019/08/06 | [
"https://Stackoverflow.com/questions/57372207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1577580/"
] | Try
`conda install matplotlib=2.1.1` | Create a new environment and try the commands below:
```
conda install -c conda-forge matplotlib
```
or
```
conda install matplotlib
``` | 17,184 |
37,477,755 | [](https://i.stack.imgur.com/3PXvF.png)I am running a Python script in Linux, and I am encountering a problem when running the program multiple times. When I execute the program, it runs normally, and I give it a SIGTSTP signal (ctrl+z) to kill the program. However, when I run the program again, it does not execute and Linux prompts me for a new command. I tried killing PIDs and killing processes, but that did not resolve the problem. I have to restart the system to get the program working again. Kindly advise a solution so I can kill a program and run it again without having to restart my system. | 2016/05/27 | [
"https://Stackoverflow.com/questions/37477755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4444867/"
] | `SIGSTOP` does not terminate the program; it pauses it, so it is not killed. You should send `SIGCONT` to the program or type `fg` to continue it. | SIGTSTP or SIGSTOP suspends the program rather than killing it; use the SIGCONT signal to wake and continue it:
```
Signal Description Signal number on Linux x86[1]
SIGABRT Process aborted 6
SIGALRM Signal raised by alarm 14
SIGBUS Bus error: "access to undefined portion of memory object" 7
SIGCHLD Child process terminated, stopped (or continued*) 17
SIGCONT Continue if stopped 18
SIGFPE Floating point exception: "erroneous arithmetic operation" 8
SIGHUP Hangup 1
SIGILL Illegal instruction 4
SIGINT Interrupt 2
SIGKILL Kill (terminate immediately) 9
SIGPIPE Write to pipe with no one reading 13
SIGQUIT Quit and dump core 3
SIGSEGV Segmentation violation 11
SIGSTOP Stop executing temporarily 19
SIGTERM Termination (request to terminate) 15
SIGTSTP Terminal stop signal 20
SIGTTIN Background process attempting to read from tty ("in") 21
SIGTTOU Background process attempting to write to tty ("out") 22
SIGUSR1 User-defined 1 10
SIGUSR2 User-defined 2 12
SIGPOLL Pollable event 29
SIGPROF Profiling timer expired 27
SIGSYS Bad syscall 31
SIGTRAP Trace/breakpoint trap 5
SIGURG Urgent data available on socket 23
SIGVTALRM Signal raised by timer counting virtual time: "virtual timer expired" 26
SIGXCPU CPU time limit exceeded 24
SIGXFSZ File size limit exceeded 25
``` | 17,185 |
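A minimal Python sketch of resuming a stopped process, per both answers (the PID is a hypothetical placeholder):
```
import os
import signal

pid = 12345  # hypothetical PID of the suspended script (find it with `ps` or `jobs -l`)
os.kill(pid, signal.SIGCONT)  # same effect as `kill -CONT 12345`, or `fg` in the owning shell
```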
43,131,671 | Given two lists, I need to make a third list which contains the elements that occur exactly twice over all of list 1 and list 2.
How can I do it efficiently, with reasonable time and space complexity?
my solution:
using dictionary:
```
from collections import defaultdict
L=['a','b','c','d','a','d','e','e','g','h']
K=['a','g','i','g','g','i','r','r']
d=defaultdict(int)
for i in L:
d[i]+=1
for j in K:
d[j]+=1
print d
result=[]
for key,val in d.iteritems():
if val == 2:
result.append(key)
print result
```
My desired output would be:
```
['e', 'd', 'i', 'r']
```
Can I get a better pythonic solution?
Thanks. | 2017/03/31 | [
"https://Stackoverflow.com/questions/43131671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4516609/"
] | You can use python `Counter` for getting count of each word in the list. <https://docs.python.org/2/library/collections.html#counter-objects>
```
>>> L=['a','b','c','d','a','d','e','e','g','h']
>>> from collections import Counter
>>> c = Counter(L)
>>> c
Counter({'a': 2, 'd': 2, 'e': 2, 'b': 1, 'c': 1, 'g': 1, 'h': 1})
```
After doing this, iterate over the Counter object and add the elements whose count is 2 to the third list (see the sketch below). | This will work well with respect to space complexity, and it's also Pythonic, but I'm not too sure about the run time
```
set(x for x in L + K if (L + K).count(x) == 2)  # list.extend returns None, so concatenate with + instead
```
Notice that this returns a set and not a list! | 17,193 |
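A short sketch of the Counter-based step the first answer describes, counting across both lists:
```
from collections import Counter

L = ['a','b','c','d','a','d','e','e','g','h']
K = ['a','g','i','g','g','i','r','r']

counts = Counter(L) + Counter(K)  # combined counts across both lists
result = [item for item, count in counts.items() if count == 2]
print(result)  # ['e', 'd', 'i', 'r'] (order may vary)
```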
40,639,665 | I am not able to work out what this error is.
```
django.db.utils.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
```
I keep getting this traceback when I run either of the commands below:
1. python manage.py makemigrations
2. python manage.py runserver
```
Unhandled exception in thread started by <function wrapper at 0x0000000003DAC4A8>
Traceback (most recent call last): File "C:\Python27\lib\site-packages\django\utils\autoreload.py", line 226, in wrapper fn(*args, **kwargs)
File "C:\Python27\lib\site packages\django\core\management\commands\runserver.py", line 124, in inner_run
self.check_migrations()
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 437, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "C:\Python27\lib\site-packages\django\db\migrations\executor.py", line 20, in __init__
self.loader = MigrationLoader(self.connection)
File "C:\Python27\lib\site-packages\django\db\migrations\loader.py", line 52, in __init__
self.build_graph()
File "C:\Python27\lib\site-packages\django\db\migrations\loader.py", line 203, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "C:\Python27\lib\site-packages\django\db\migrations\recorder.py", line 65, in applied_migrations
self.ensure_schema()
File "C:\Python27\lib\site-packages\django\db\migrations\recorder.py", line 52, in ensure_schema
if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):
File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 231, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 204, in _cursor
self.ensure_connection()
File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 199, in ensure_connection
self.connect()
File "C:\Python27\lib\site-packages\django\db\utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 199, in ensure_connection
self.connect()
File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "C:\Python27\lib\site-packages\django\db\backends\postgresql\base.py", line 176, in get_new_connection
connection = Database.connect(**conn_params)
File "C:\Python27\lib\site-packages\psycopg2\__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
```
Thanks in advance. | 2016/11/16 | [
"https://Stackoverflow.com/questions/40639665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] | This usually means that your PostgreSQL server is not running properly. You may want to restart it as follows.
Linux
```
sudo /etc/init.d/postgresql restart
```
Windows
```
sc stop postgresql
sc start postgresql
```
Mac OS X
[How to start PostgreSQL server on Mac OS X?](https://stackoverflow.com/questions/7975556/how-to-start-postgresql-server-on-mac-os-x)
If a restart fixes it, note that the root cause of the previous server failure still needs investigation. | I solved this problem by uninstalling and reinstalling PostgreSQL.
**On Mac**
Uninstall:
```
brew uninstall --force postgres
```
Install:
```
brew install postgres
```
PS: Change commands for Linux or Windows.
After, run makemigrations and migrate. | 17,196 |
22,720,012 | I've been bashing my head on this problem for a while now.
I'm dealing with properties setting using the DBus-java bindings for DBus. When Set is called, the value to set is wrapped in a org.freedesktop.types.Variant object from which I have to extract it. Normally if the data is a primitive I can use generics in the Set parameters and the bindings does the type conversion before calling the Set method.
However I am trying to set using a org.freedesktop.types.DBusStructType which is a complex type and which needs to be manually unpacked. So far I've gotten to the point where I can extract the type from the variant but I can't cast the value wrapped in the Variant to a DBusStructType even though it is clearly identified as a DBusStructType
The following code throws a `ClassCastException: Cannot cast [Ljava.lang.Object; to org.freedesktop.dbus.types.DBusStructType` when called with a DBus.Struct from a python dbus test. I have checked the variant signature is packed right, but I can't find a way to cast the object returned by the Variant.getValue() to the type specified by Variant.getType() and access the structure fields.
```
public void Set(String interface_name, String property_name, Variant new_value) throws Exception {
Type t = new_value.getType();
Object s = new_value.getValue();
t.getClass().cast(s);
System.out.println("Object cast to "+s.getClass().getName());
}
```
Any pointers would be really appreciated, I have started digging more into reflection as I'm still new to it but there is probably something I am missing. | 2014/03/28 | [
"https://Stackoverflow.com/questions/22720012",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1444649/"
] | Found the root cause. Changing the SpringServlet's Url mappings to "Rest" resources specific path fixed it.
Earlier "/\*" was also interpreted by SpringServlet and was not able to render the index.html.
```
class Application extends SpringBootServletInitializer {
public static void main(String[] args) {
SpringApplication.run([Application, "classpath:/META-INF/com/my/package/mgmt/bootstrap.xml"] as Object[], args)
}
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(Application);
}
@Bean
ServletRegistrationBean jerseyServlet() {
ServletRegistrationBean registration = new ServletRegistrationBean(new SpringServlet(), "/rest/*");
Map<String, String> params = ["com.sun.jersey.config.property.packages": "com.my.package.mgmt.impl;com.wordnik.swagger.jersey.listing"]
registration.setInitParameters(params)
return registration;
}
@Bean
ServletRegistrationBean jerseyJaxrsConfig() {
ServletRegistrationBean registration = new ServletRegistrationBean(new DefaultJaxrsConfig(), "/api/*");
Map<String, String> params = ["swagger.api.basepath": "http://localhost:8080/api", "api.version": "0.1.0"]
registration.setInitParameters(params)
return registration;
    }
}
``` | ```
@Configuration
public class WebConfig implements WebMvcConfigurer {
/** do not interpret .123 extension as a lotus spreadsheet */
@Override
public void configureContentNegotiation(ContentNegotiationConfigurer configurer)
{
configurer.favorPathExtension(false);
}
/**
./resources/public is not working without this
*/
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
registry.addResourceHandler("/**")
.addResourceLocations("classpath:/public/");
}
}
``` | 17,198
30,958,835 | I would like to have a function as an optional argument
of another function in Python, but it is not clear to me how I can do that.
For example I define the following function:
```
import os, time, datetime
def f(t=datetime.datetime.now()):
return t.timetuple()
```
I have placed `t=datetime.datetime.now()`
in order for the argument to be optional, so as to be able to call `f()` with no arguments.
Now whenever in time I execute `f()` I get the same datetime A (which is wrong according to what I expect), but whenever in time I execute `f(datetime.datetime.now())` I get different datetimes (which is correct as expected).
For example
```
>>> f()
time.struct_time(tm_year=2015, tm_mon=6, tm_mday=20, tm_hour=15, tm_min=36, tm_sec=2, tm_wday=5, tm_yday=171, tm_isdst=-1)
>>> f()
time.struct_time(tm_year=2015, tm_mon=6, tm_mday=20, tm_hour=15, tm_min=36, tm_sec=2, tm_wday=5, tm_yday=171, tm_isdst=-1)
>>> f(datetime.datetime.now())
time.struct_time(tm_year=2015, tm_mon=6, tm_mday=20, tm_hour=15, tm_min=37, tm_sec=1, tm_wday=5, tm_yday=171, tm_isdst=-1)
>>> f()
time.struct_time(tm_year=2015, tm_mon=6, tm_mday=20, tm_hour=15, tm_min=36, tm_sec=2, tm_wday=5, tm_yday=171, tm_isdst=-1)
```
So why does the fourth call take me back to min 36 and sec 2, even though it was made after the call that showed min 37?
Why do the first two calls give the exact same time even if I leave plenty of time between them?
"https://Stackoverflow.com/questions/30958835",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/805417/"
] | As mentioned by flask, the default value is evaluated only once, when the function is defined, so it is set a single time.
The typical solution is to use a sentinel default and compute the real value inside the function. You can do the following:
```
def f(t=None):
    if t is None:  # sentinel check, so now() is evaluated on every call
t = datetime.datetime.now()
return t.timetuple()
```
BTW, for the readers' benefit, you should try to use meaningful method and variable names. | The default parameter value is evaluated only once when the function is defined. | 17,199 |
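A tiny sketch demonstrating the point of both answers - the default expression runs once, at definition time:
```
import time
import datetime

def f(t=datetime.datetime.now()):  # the default expression runs once, at definition time
    return t.timetuple()

a = f()
time.sleep(2)
b = f()
print(a == b)  # True: both calls reuse the timestamp captured when f was defined
```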
Python imports drive me crazy (my experience with Python imports sometimes doesn't correspond at all to the idiom 'Explicit is better than implicit' :( ):
```
[app]
start.py
from package1 import module1
[package1]
__init__.py
print('Init package1')
module1.py
print('Init package1.module1')
from . import module2
module2.py
print('Init package1.module2')
import sys, pprint
pprint.pprint(sys.modules)
from . import module1
```
I get:
```
vic@ubuntu:~/Desktop/app2$ python3 start.py
Init package1
Init package1.module1
Init package1.module2
{'__main__': <module '__main__' from 'start.py'>,
...
'package1': <module 'package1' from '/home/vic/Desktop/app2/package1/__init__.py'>,
'package1.module1': <module 'package1.module1' from '/home/vic/Desktop/app2/package1/module1.py'>,
'package1.module2': <module 'package1.module2' from '/home/vic/Desktop/app2/package1/module2.py'>,
...
Traceback (most recent call last):
File "start.py", line 3, in <module>
from package1 import module1
File "/home/vic/Desktop/app2/package1/module1.py", line 3, in <module>
from . import module2
File "/home/vic/Desktop/app2/package1/module2.py", line 5, in <module>
from . import module1
ImportError: cannot import name module1
vic@ubuntu:~/Desktop/app2$
```
`import package1.module1` works, but I want to use `from . import module1` because I want to make `package1` portable for my other applications; that's why I want to use relative paths.
I am using python 3.
I need circular imports. A function in module1 asserts that one of its parameter is instance of a class defined in module2 and viceversa.
**In other words:**
`sys.modules` contains `'package1.module1': <module 'package1.module1' from '/home/vic/Desktop/app2/package1/module1.py'>`. I want to get a reference to it in form `from . import module1`, but it tries to get a name, not a package like in case `import package1.module1` (which works fine). I tried `import .module1 as m1` - but that's a syntax error.
Also, `from . import module2` in `module1` works fine, but `from . import module1` in `module2` doesn't work...
**UPDATE:**
This hack works (but i am looking for the 'official' way):
```
print('Init package1.module2')
import sys, pprint
pprint.pprint(sys.modules)
#from . import module1
parent_module_name = __name__.rpartition('.')[0]
module1 = sys.modules[parent_module_name + '.module1']
``` | 2011/11/06 | [
"https://Stackoverflow.com/questions/8030264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/248296/"
] | Your update emulates what the absolute import does: `import package1.module1`, if you do it while `module1` is being imported. If you'd like to use a dynamic parent package name, then to import `module1` in `module2.py`:
```
import importlib
module1 = importlib.import_module('.module1', __package__)
```
---
>
> I need circular imports. A function in module1 asserts that one of its
> parameter is instance of a class defined in module2 and viceversa.
>
>
>
You could move one the classes to a separate module to resolve the circular dependency or make the import at a function level if you don't want to use absolute imports.
```
.
├── start.py
# from package1 import module1
└── package1
├── __init__.py
# print("Init package1")
# from . import module1, module2
├── c1.py
# print("Init package1.c1")
# class C1:
# pass
├── module1.py
# print("Init package1.module1")
# from .c1 import C1
# from .module2 import C2
└── module2.py
# print("Init package1.module2")
# from .c1 import C1
# class C2:
# pass
# def f():
# from .module1 import C1
```
### Output
```
Init package1
Init package1.module1
Init package1.c1
Init package1.module2
```
Another option that might be simpler than refactoring out `c1.py` is to merge `module{1,2}.py` into a single `common.py`. `module{1,2}.py` make the imports from `common` in this case. | I ran into this same issue today, and it seems this is indeed broken in python3.4, but works in python3.5.
The [changelog](https://docs.python.org/3/whatsnew/3.5.html) has an entry:
>
> Circular imports involving relative imports are now supported. (Contributed by Brett Cannon and Antoine Pitrou in [bpo-17636](https://bugs.python.org/issue17636)).
>
>
>
Looking through the bugreport, it seems that this not so much a buf fixed, as well as a new feature in the way imports work. Referring to [poke's answer above](https://stackoverflow.com/a/8030886/740048), he shows that `from . import foo` means to load `__init__.py` and get `foo` from it (possibly from the implicitly loaded list of submodules). Since python3.5, `from . import foo` will do the same, but if `foo` is not available as an attribute, it will fall back to looking through the lists of loaded modules (`sys.modules`) to see if it is already present there, which fixes this particular case. I'm not 100% sure I properly presented how this works, though. | 17,204 |
5,226,958 | I need to set up an environment with the path to a binary. In the shell, I can use `which` to find the path. Is there an equivalent in Python?
This is my code.
```py
cmd = ["which","abc"]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
res = p.stdout.readlines()
if len(res) == 0: return False
return True
``` | 2011/03/08 | [
"https://Stackoverflow.com/questions/5226958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] | There's no built-in command to do that, but you can iterate over `os.environ["PATH"]` and check whether the file exists, which is actually what `which` does.
```
import os
def which(file):
for path in os.environ["PATH"].split(os.pathsep):
if os.path.exists(os.path.join(path, file)):
return os.path.join(path, file)
return None
```
Good luck! | You could try something like the following:
```
import os
import os.path
def which(filename):
"""docstring for which"""
locations = os.environ.get("PATH").split(os.pathsep)
candidates = []
for location in locations:
candidate = os.path.join(location, filename)
if os.path.isfile(candidate):
candidates.append(candidate)
return candidates
``` | 17,214 |
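As a follow-up to both answers: on Python 3.3+ the standard library provides this lookup directly via `shutil.which` (it also checks the executable bit):
```
import shutil

path = shutil.which("abc")
print(path)  # full path to the binary, or None if it is not on PATH
```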
4,420,218 | I have a VPS running a fresh install of Ubuntu 10.04 LTS. I'm trying to set up a live application using the Flask microframework, but it's giving me trouble. I took notes while I tried to get it running and here's my play-by-play in an effort to pinpoint exactly where I went wrong.
INSTALLATION
============
<http://flask.pocoo.org/docs/installation/#installation>
```
$ adduser myapp
$ sudo apt-get install python-setuptools
$ sudo easy_install pip
$ sudo pip install virtualenv
/home/myapp/
-- www/
$ sudo pip install virtualenv
/home/myapp/
-- www/
-- env/
$ . env/bin/activate
$ easy_install Flask
```
MOD\_WSGI
=========
<http://flask.pocoo.org/docs/deploying/mod_wsgi/>
```
$ sudo apt-get install apache2
$ sudo apt-get install libapache2-mod-wsgi
```
Creating WSGI file
==================
```
$ nano /home/myapp/www/myapp.wsgi
--myapp.wsgi contents:--------------------------
activate_this = '/home/myapp/env/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
from myapp import app as application
/home/myapp/
-- www/
-- myapp.wsgi
-- env/
```
Configuring Apache
==================
```
$ nano /etc/apache2/sites-available/myapp.com
-----myapp.com file contents ---------------------
<VirtualHost *:80>
ServerName myapp.com
WSGIDaemonProcess myapp user=myapp group=myapp threads=5 python-path=/home/myapp/env/lib/python2.6/site-packages
WSGIScriptAlias / /home/myapp/www/myapp.wsgi
<Directory /home/myapp/www>
WSGIProcessGroup myapp
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
```
Enable the virtual host file I just created
===========================================
```
$ cd /etc/apache2/sites-enabled
$ ln -s ../sites-available/myapp.com
```
Restart Apache
==============
```
$ /etc/init.d/apache2 restart
```
Servers me a 500 server error page. Here's the latest error log:
```
mod_wsgi (pid=3514): Target WSGI script '/home/myapp/www/myapp.wsgi' cannot be loaded as Python module.
mod_wsgi (pid=3514): Exception occurred processing WSGI script '/home/myapp/www/myapp.wsgi'.
Traceback (most recent call last):
File "/home/myapp/www/myapp.wsgi", line 4, in <module>
from myapp import app as application
ImportError: No module named myapp
```
The error suggests that it's something strikingly obvious, but I'm quite lost.
"https://Stackoverflow.com/questions/4420218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/511200/"
] | Obviously, it cannot find your "`myapp`" package. You should add it to the path in your `myapp.wsgi` file like this:
```
import sys
sys.path.append(DIRECTORY_WHERE_YOUR_PACKAGE_IS_LOCATED)
from myapp import app
```
Also, if `myapp` module is a package, you should put and empty `__init__.py` file into its directory. | Edit line `sys.path.append`, it needs to be a string.
```
import sys
sys.path.append('directory/where/package/is/located')
```
**Notice** the single quotes. | 17,224 |
3,103,178 | I need to get the info under what environment the software is running.
Does python have a library for this purpose?
I want to know the following info.
* OS name/version
* Name of the CPU, clock speed
* Number of CPU core
* Size of memory | 2010/06/23 | [
"https://Stackoverflow.com/questions/3103178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] | ```
#Shamelessly combined from google and other stackoverflow like sites to form a single function
import platform,socket,re,uuid,json,psutil,logging
def getSystemInfo():
try:
info={}
info['platform']=platform.system()
info['platform-release']=platform.release()
info['platform-version']=platform.version()
info['architecture']=platform.machine()
info['hostname']=socket.gethostname()
info['ip-address']=socket.gethostbyname(socket.gethostname())
info['mac-address']=':'.join(re.findall('..', '%012x' % uuid.getnode()))
info['processor']=platform.processor()
info['ram']=str(round(psutil.virtual_memory().total / (1024.0 **3)))+" GB"
return json.dumps(info)
except Exception as e:
logging.exception(e)
json.loads(getSystemInfo())
```
---
Output Sample:
```
{
'platform': 'Linux',
'platform-release': '5.3.0-29-generic',
'platform-version': '#31-Ubuntu SMP Fri Jan 17 17:27:26 UTC 2020',
'architecture': 'x86_64',
'hostname': 'naret-vm',
'ip-address': '127.0.1.1',
'mac-address': 'bb:cc:dd:ee:bc:ff',
'processor': 'x86_64',
'ram': '4 GB'
}
``` | ```
import psutil
import platform
from datetime import datetime
import cpuinfo
import socket
import uuid
import re
def get_size(bytes, suffix="B"):
"""
Scale bytes to its proper format
e.g:
1253656 => '1.20MB'
1253656678 => '1.17GB'
"""
factor = 1024
for unit in ["", "K", "M", "G", "T", "P"]:
if bytes < factor:
return f"{bytes:.2f}{unit}{suffix}"
bytes /= factor
def System_information():
print("="*40, "System Information", "="*40)
uname = platform.uname()
print(f"System: {uname.system}")
print(f"Node Name: {uname.node}")
print(f"Release: {uname.release}")
print(f"Version: {uname.version}")
print(f"Machine: {uname.machine}")
print(f"Processor: {uname.processor}")
print(f"Processor: {cpuinfo.get_cpu_info()['brand_raw']}")
print(f"Ip-Address: {socket.gethostbyname(socket.gethostname())}")
print(f"Mac-Address: {':'.join(re.findall('..', '%012x' % uuid.getnode()))}")
# Boot Time
print("="*40, "Boot Time", "="*40)
boot_time_timestamp = psutil.boot_time()
bt = datetime.fromtimestamp(boot_time_timestamp)
print(f"Boot Time: {bt.year}/{bt.month}/{bt.day} {bt.hour}:{bt.minute}:{bt.second}")
# print CPU information
print("="*40, "CPU Info", "="*40)
# number of cores
print("Physical cores:", psutil.cpu_count(logical=False))
print("Total cores:", psutil.cpu_count(logical=True))
# CPU frequencies
cpufreq = psutil.cpu_freq()
print(f"Max Frequency: {cpufreq.max:.2f}Mhz")
print(f"Min Frequency: {cpufreq.min:.2f}Mhz")
print(f"Current Frequency: {cpufreq.current:.2f}Mhz")
# CPU usage
print("CPU Usage Per Core:")
for i, percentage in enumerate(psutil.cpu_percent(percpu=True, interval=1)):
print(f"Core {i}: {percentage}%")
print(f"Total CPU Usage: {psutil.cpu_percent()}%")
# Memory Information
print("="*40, "Memory Information", "="*40)
# get the memory details
svmem = psutil.virtual_memory()
print(f"Total: {get_size(svmem.total)}")
print(f"Available: {get_size(svmem.available)}")
print(f"Used: {get_size(svmem.used)}")
print(f"Percentage: {svmem.percent}%")
print("="*20, "SWAP", "="*20)
# get the swap memory details (if exists)
swap = psutil.swap_memory()
print(f"Total: {get_size(swap.total)}")
print(f"Free: {get_size(swap.free)}")
print(f"Used: {get_size(swap.used)}")
print(f"Percentage: {swap.percent}%")
# Disk Information
print("="*40, "Disk Information", "="*40)
print("Partitions and Usage:")
# get all disk partitions
partitions = psutil.disk_partitions()
for partition in partitions:
print(f"=== Device: {partition.device} ===")
print(f" Mountpoint: {partition.mountpoint}")
print(f" File system type: {partition.fstype}")
try:
partition_usage = psutil.disk_usage(partition.mountpoint)
except PermissionError:
# this can be catched due to the disk that
# isn't ready
continue
print(f" Total Size: {get_size(partition_usage.total)}")
print(f" Used: {get_size(partition_usage.used)}")
print(f" Free: {get_size(partition_usage.free)}")
print(f" Percentage: {partition_usage.percent}%")
# get IO statistics since boot
disk_io = psutil.disk_io_counters()
print(f"Total read: {get_size(disk_io.read_bytes)}")
print(f"Total write: {get_size(disk_io.write_bytes)}")
## Network information
print("="*40, "Network Information", "="*40)
## get all network interfaces (virtual and physical)
if_addrs = psutil.net_if_addrs()
for interface_name, interface_addresses in if_addrs.items():
for address in interface_addresses:
print(f"=== Interface: {interface_name} ===")
if str(address.family) == 'AddressFamily.AF_INET':
print(f" IP Address: {address.address}")
print(f" Netmask: {address.netmask}")
print(f" Broadcast IP: {address.broadcast}")
elif str(address.family) == 'AddressFamily.AF_PACKET':
print(f" MAC Address: {address.address}")
print(f" Netmask: {address.netmask}")
print(f" Broadcast MAC: {address.broadcast}")
##get IO statistics since boot
net_io = psutil.net_io_counters()
print(f"Total Bytes Sent: {get_size(net_io.bytes_sent)}")
print(f"Total Bytes Received: {get_size(net_io.bytes_recv)}")
if __name__ == "__main__":
System_information()
``` | 17,225 |
13,337,140 | Brand new to using python, need help figuring out why my command line is spitting out huge strings of numbers and not the fib sequence up to the var I pass in. Here is what I have so far:
```
import sys
def fib(n):
a, b = 0, 1
while a < n:
print a
a, b = b, a+b
if __name__ == "__main__":
fib(sys.argv[1])
```
Now, before I used sys.argv[1] (or [1:]) I was able to produce the sequence up to the number I wanted. I.e. if I entered n as 12 I would get 0,1,1,2,3,5,8, which is correct. However I cannot get this to work now. I added a print statement (print n) after the def fib(n): line; it printed the sys.argv value I passed in.
Where am I going wrong? Thanks for your time. | 2012/11/12 | [
"https://Stackoverflow.com/questions/13337140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1816962/"
] | Don't forget to convert the input argument (a string) into an integer type:
```
fib(int(sys.argv[1]))
``` | Try `fib(int(sys.argv[1]))`, that might be the problem, but I didn't try it. | 17,235 |
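Putting both answers together, a minimal corrected version of the script:
```
import sys

def fib(n):
    a, b = 0, 1
    while a < n:
        print a
        a, b = b, a + b

if __name__ == "__main__":
    fib(int(sys.argv[1]))  # sys.argv values are strings; convert before the numeric comparison
```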
52,557,158 | I am new to python. I got this pre written code that downloads data in to report. But I am getting the error
>
> "write() argument must be str, not bytes".
>
>
>
See below code
```
def _download_report(service, response, ostream):
logger.info('Downloading keyword report')
written_header = False
for fragment in range(len(response.files)):
file_request = service.reports().getFile(
reportId=response.id_, reportFragment=fragment)
istream = io.BytesIO(file_request.execute())
if written_header:
istream.readline()
else:
written_header = True
ostream.write(istream.read())
``` | 2018/09/28 | [
"https://Stackoverflow.com/questions/52557158",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10429839/"
] | you'll need to change the last line to
```
ostream.write(istream.read().decode('utf-8'))
```
PS. you may need to replace `'utf-8`` with whatever encoding the data is in | To elaborate more on @sgDysregulation's answer:
One peculiarity with python 3 is that strings (`'hello, world'`) and binary strings (`b'hello, world'`) are basically incompatible. As an example, if you're familiar with basic file I/O, there are two types of modes to read a file in - you could use `open('file.txt', 'r')`, which returns unicode strings when you read from the file, or `open('file,txt', 'rb')`, which returns binary strings. The same applies for writing - you can't write strings correctly in mode `'wb'`, and can't write binary strings in mode `'w'`.
In this case, your `istream` returns binary strings when read from, whereas your `ostream` expects to write a unicode string. The solution is to change encoding from one to the other, and do what sgDysregulation recommends:
```
ostream.write(istream.read().decode('utf-8'))
```
this assumes that the binary string is encoded in utf-8 format, which it probably is. You might have to use a different format otherwise. | 17,236 |
21,322,568 | This is my first time asking a question. I am just starting to get into programming, so I am beginning with Python. I've basically got a random number generator inside of a while loop, and that's inside of my "r()" function. What I want to do is take all of the numbers (basically an infinite amount, until I shut down IDLE) and put them into a text file. Now, I have looked for this on the web and have found solutions, but for a Windows computer. I have a Mac with Python 2.7. ANY HELP IS VERY MUCH APPRECIATED! My current code is below:
```
from random import randrange
def r():
while True:
print randrange(1,10)
``` | 2014/01/24 | [
"https://Stackoverflow.com/questions/21322568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2701400/"
] | You can use
```
numpy.stack(arrays, axis=0)
```
if you have an array of arrays. You can specify the axis in case you want to stack columns and not rows. | You can just call `np.array` on the list of 1D arrays.
```
>>> import numpy as np
>>> arrs = [np.array([1,2,3]), np.array([4,5,6]), np.array([7,8,9])]
>>> arrs
[array([1, 2, 3]), array([4, 5, 6]), array([7, 8, 9])]
>>> arr2d = np.array(arrs)
>>> arr2d.shape
(3, 3)
>>> arr2d
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
``` | 17,238 |
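A short sketch of the `numpy.stack` route from the first answer, showing the axis choice:
```
import numpy as np

arrs = [np.array([1, 2, 3]), np.array([4, 5, 6])]
rows = np.stack(arrs, axis=0)  # shape (2, 3): each input becomes a row
cols = np.stack(arrs, axis=1)  # shape (3, 2): each input becomes a column
print(rows.shape, cols.shape)
```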
48,683,238 | I have this error
```
onecheck(sys.argv[1],sys.argv[2],sys.argv[3])
IndexError: list index out of range
```
I am trying to run a python script in a loop.
This is the code:
```
with open(file) as k:
for line in k:
aa, bb, cc = line.split(':')
time.sleep(5)
os.system("python checkfile.py " + cc + " " + aa + " " + bb)
```
The last line from the file works | 2018/02/08 | [
"https://Stackoverflow.com/questions/48683238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8868451/"
] | A fairly simple way of finding groups as you described would be to convert the data to a boolean array, with ones for data inside groups and zeros for data outside the groups, and compute the difference of consecutive values; this way you'll have 1 for the start of a group and -1 for the end.
Here's an example of that:
```
import numpy as np
mydata = [0.0, 0.0, 0.0, 0.0, 0.0, 0.143, 0.0, 0.22, 0.135, 0.44, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.33, 0.65, 0.22, 0.0, 0.0, 0.0, 0.0, 0.0]
arr = np.array(mydata)
mask = (arr != 0).astype(int)  # array that contains 1 for every non-zero value, zero otherwise (np.int is deprecated in recent NumPy)
padded_mask = np.pad(mask,(1,),"constant") #add a zero at the start and at the end to handle edge cases
edge_mask = padded_mask[1:] - padded_mask[:-1] #diff between a value and the following one
#if there's a 1 in edge mask it's a group start
#if there's a -1 it's a group stop
#where gives us the index of those starts and stops
starts = np.where(edge_mask == 1)[0]
stops = np.where(edge_mask == -1)[0]
print(starts,stops)
#we format groups and drop groups that are too small
groups = [group for group in zip(starts,stops) if (group[0]+2 < group[1])]
for group in groups:
print("start,stop : {} middle : {}".format(group,(group[0]+group[1])/2) )
```
And the output :
```
[ 5 7 19] [ 6 11 22]
start,stop : (7, 11) middle : 9.0
start,stop : (19, 22) middle : 20.5
``` | Your smoothed data has no zeros left:
```
import numpy as np
def smooth(y, box_pts):
box = np.ones(box_pts)/box_pts
print(box)
y_smooth = np.convolve(y, box, mode='same')
return y_smooth
mydata = [0.0, 0.0, 0.0, 0.0,-0.2, 0.143,
0.0, 0.22, 0.135, 0.44, 0.1, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.33, 0.65, 0.22, 0.0, 0.0, 0.0,
0.0, 0.0]
y_smooth = smooth(mydata, 27)
print(y_smooth)
```
Output:
```
[ 0.0469 0.0519 0.0519 0.0519 0.0519 0.0519
0.0519 0.0519 0.0519 0.0519 0.0684 0.1009
0.1119 0.1119 0.1119 0.1119 0.10475 0.10475
0.09375 0.087 0.065 0.06 0.06 0.06
0.06 0.06 0.06 ]
```
A way to find it in your original data would be:
```
def findGroups(data, minGrpSize=1):
startpos = -1
endpos = -1
pospos = []
    for idx, v in enumerate(data):  # iterate over the parameter, not the global mydata
        if v > 0 and startpos == -1:
            startpos = idx
        elif v == 0.0:
            if startpos > -1:
                if idx < (len(data)-1) and data[idx+1] != 0.0:
                    pass # ignore one 0.0 in a run
                else:
                    endpos = idx
        if startpos > -1:
            if endpos > -1 or idx == len(data)-1: # both set or last one
if (endpos - startpos) >= minGrpSize:
pospos.append((startpos,endpos))
startpos = -1
endpos = -1
return pospos
pos = findGroups(mydata,1)
print(*map(lambda x: sum(x) // len(x), pos))
pos = findGroups(mydata,3)
print(*map(lambda x: sum(x) // len(x), pos))
pos = findGroups(mydata,5)
print(*map(lambda x: sum(x) // len(x), pos))
```
Output:
```
8 20
8 20
8
``` | 17,240 |
37,646,174 | I need to read an analog signal on a Raspberry Pi, and for this purpose I bought an MCP3002 circuit. I plugged it in with the correct connections and I have found sample code on the internet, but it doesn't work.
Do I need to have an interface, or can I do the job without it? Do you have any idea what could go wrong?
Do you have a simple code to read the analog input?
The code I used is the following:
```
#!/usr/bin/env python
# Written by Limor "Ladyada" Fried for Adafruit Industries, (c) 2015
# This code is released into the public domain
import time
import os
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
DEBUG = 1
# read SPI data from MCP3008 chip, 8 possible adc's (0 thru 7)
def readadc(adcnum, clockpin, mosipin, misopin, cspin):
if ((adcnum > 7) or (adcnum < 0)):
return -1
GPIO.output(cspin, True)
GPIO.output(clockpin, False) # start clock low
GPIO.output(cspin, False) # bring CS low
commandout = adcnum
commandout |= 0x18 # start bit + single-ended bit
commandout <<= 3 # we only need to send 5 bits here
for i in range(5):
if (commandout & 0x80):
GPIO.output(mosipin, True)
else:
GPIO.output(mosipin, False)
commandout <<= 1
GPIO.output(clockpin, True)
GPIO.output(clockpin, False)
adcout = 0
# read in one empty bit, one null bit and 10 ADC bits
for i in range(12):
GPIO.output(clockpin, True)
GPIO.output(clockpin, False)
adcout <<= 1
if (GPIO.input(misopin)):
adcout |= 0x1
GPIO.output(cspin, True)
adcout >>= 1 # first bit is 'null' so drop it
return adcout
# change these as desired - they're the pins connected from the
# SPI port on the ADC to the Cobbler
SPICLK = 18
SPIMISO = 23
SPIMOSI = 24
SPICS = 25
# set up the SPI interface pins
GPIO.setup(SPIMOSI, GPIO.OUT)
GPIO.setup(SPIMISO, GPIO.IN)
GPIO.setup(SPICLK, GPIO.OUT)
GPIO.setup(SPICS, GPIO.OUT)
# 10k trim pot connected to adc #0
potentiometer_adc = 0;
last_read = 0 # this keeps track of the last potentiometer value
tolerance = 5 # to keep from being jittery we'll only change
# volume when the pot has moved more than 5 'counts'
while True:
# we'll assume that the pot didn't move
trim_pot_changed = False
# read the analog pin
trim_pot = readadc(potentiometer_adc, SPICLK, SPIMOSI, SPIMISO, SPICS)
# how much has it changed since the last read?
pot_adjust = abs(trim_pot - last_read)
if DEBUG:
print "trim_pot:", trim_pot
print "pot_adjust:", pot_adjust
print "last_read", last_read
if ( pot_adjust > tolerance ):
trim_pot_changed = True
if DEBUG:
print "trim_pot_changed", trim_pot_changed
if ( trim_pot_changed ):
set_volume = trim_pot / 10.24 # convert 10bit adc0 (0-1024) trim pot read into 0-100 volume level
set_volume = round(set_volume) # round out decimal value
set_volume = int(set_volume) # cast volume as integer
print 'Volume = {volume}%' .format(volume = set_volume)
set_vol_cmd = 'sudo amixer cset numid=1 -- {volume}% > /dev/null' .format(volume = set_volume)
os.system(set_vol_cmd) # set volume
if DEBUG:
print "set_volume", set_volume
print "tri_pot_changed", set_volume
# save the potentiometer reading for the next loop
last_read = trim_pot
# hang out and do nothing for a half second
time.sleep(0.5)
``` | 2016/06/05 | [
"https://Stackoverflow.com/questions/37646174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3254786/"
] | My `pagesize` was set to `0`. I don't know why this would cause the column headers to disappear, but it did. If someone can explain it better than me, I'll gladly accept their answer in lieu of mine.
I set `pagesize` to `14`, and my column headers appeared. | SQL\*Plus has changed the default behavior in ORACLE 12c.
With
```
SQL> set head on
```
you get back to the previous behavior.
With
```
SQL> set pagesize n
```
every *n* rows the header will be repeated. | 17,245 |
10,368,678 | I'm attempting to install the DrEdit sample app for python onto GAE. The app runs, but saving or opening a file results in an **HTTP 403 "Access Not Configured Error"**.
**client.json** has **client\_id** and **client\_secret** set per the **API Access>Client ID for Drive SDK values**. I have also attempted to use the values for **API Access>Client ID for web applications**.
The **Google Drive SDK> OAuth Client ID** has also been set variously to the Drive SDK and web app Client IDs.
What am I doing wrong? | 2012/04/29 | [
"https://Stackoverflow.com/questions/10368678",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1363559/"
] | In the services section of the Google API console there are two services relating to drive development, SDK and API. When you create a new Drive SDK entry, Drive API service is not automatically enabled (which doesn't make sense, I don't see when you'd create a drive enabled application without using the drive API). Switch the Drive API service on for the project and try again.
@lurking\_googlers I think a lot of people will fall for this; doesn't it make sense to enable the API when the SDK is enabled? | And you must also specify the following scopes in your code:
```
DriveService.Scope.DriveFile, DriveService.Scope.Drive
```
good luck | 17,246 |
25,190,026 | The link shows a graphic visualization taken from the census website; the link is shared below. I want to create a graphic visualization of the same kind in my Python program.
Link for the graphic visualization:
<http://www.census.gov/dataviz/visualizations/stem/stem-html/>
Which kind of visualization is this? Is it a pie chart or some other kind of graphic visualization?
Are there any APIs available to create such a visualization, or do I need to use a web service? | 2014/08/07 | [
"https://Stackoverflow.com/questions/25190026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3888977/"
] | I don't see a graph that is exactly like the one listed, but matplotlib provides a huge number of options.
<http://matplotlib.org/gallery.html>
It supports Sankey graphs as well:
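For illustration, a minimal sketch (not from the original answer; the flow values are made up):
```
import matplotlib.pyplot as plt
from matplotlib.sankey import Sankey

# One node with two inflows (positive) and two outflows (negative);
# the flows must balance to zero.
Sankey(flows=[0.6, 0.4, -0.3, -0.7],
       labels=['STEM degree', 'other degree', 'STEM job', 'non-STEM job'],
       orientations=[0, 1, 1, 0]).finish()
plt.show()
```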
<http://matplotlib.org/api/sankey_api.html?highlight=sankey#module-matplotlib.sankey> | It's essentially a [weighted graph](http://en.wikipedia.org/wiki/Weighted_graph#Weighted_graphs_and_networks). It looks a lot like a [Sankey diagram](http://en.wikipedia.org/wiki/Sankey_diagram).
There is specialized software for visualizing graphs, e.g. [graphviz](http://www.graphviz.org/). There are several Python bindings for it.
You would have to look at the documentation if it can create this style of graph. | 17,247 |
9,548,139 | Disclaimer: I am new to python and django but have programmed in Drupal
I am developing a web-based Wizard (like on Microsoft Windows installation screens) with explanatory text followed by Previous and Next buttons (which are big green left and right arrows). So far, so good.
However, my current Wizard page (in project.html, loaded by my django apps views.py) now uses a form (instance of ModelForm) which asks the user to type in a "project" name, such as My Project. Normally, such an HTML form would use a Submit button, but because this is a Wizard, I need the Next button to act as the Submit button, hiding the Submit button entirely. Also, the arrow icons appear after the form ends.
How would you do this? Sure, I could use jquery, but is there a better pythonic or django way?
Some code:
```
#project.html
{% extends "base.html" %}
{% load i18n %}
<h3><span>{% trans 'Project details' %}</span></h3>
<p>{% trans 'What is the name of this project?' %}
<form method="post" action="">
{{ form.as_table }}
<input type="submit" value="Submit"/>
</form>
</p>
{% endblock %}
{% block buttonbar %}
<a href="/"><img src="/static/img/Button-Previous-icon-50.png" width="50" height="50" alt="Previous"><span>{% trans 'Previous' %}</span></a>
<a href="/profile"><img src="/static/img/Button-Next-icon-50.png" width="50" height="50" alt="Next button"><span>{% trans 'Next' %}</span></a>
{% endblock %}
```
Thanks! | 2012/03/03 | [
"https://Stackoverflow.com/questions/9548139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231693/"
] | `<input type="submit" value="Next"/>`
This gives you a button with the value 'Next' which acts as a submit button. If this is not what you wanted, rephrase your question and/or give an example of what action should take place after pressing Next. | You might want to use the Django Form wizard in this case:
<https://docs.djangoproject.com/en/dev/ref/contrib/formtools/form-wizard/> | 17,252 |
15,187,184 | I am trying to extend FFT code that works fine for 1D arrays in Python to images. Actually, I know the problem is in the logic of the extension. I don't know much about FFTs, and I have to submit assignments for Image Processing. I will be thankful for any hints or solutions.
Here is the code. Actually, I'm trying to create a module for FFT in Python, and it already worked fine for 1D with help from the Rosetta Code site.
```
from cmath import exp, pi
from math import log, ceil
def fft(f):
N = len(f)
if N <= 1: return f
even = fft(f[0::2])
odd = fft(f[1::2])
return [even[k] + exp(-2j*pi*k/N)*odd[k] for k in xrange(N/2)] + \
[even[k] - exp(-2j*pi*k/N)*odd[k] for k in xrange(N/2)]
def pad(f):
n = len(f)
N = 2 ** int(ceil(log(n, 2)))
F = f + [0] * (N - n)
return F, n
def unpad(F, n):
return F[0 : n]
def pad2(f):
m, n = len(f), len(f[0])
M, N = 2 ** int(ceil(log(m, 2))), 2 ** int(ceil(log(n, 2)))
F = [ [0]*N for _ in xrange(M) ]
for i in range(0, m):
for j in range(0, n):
F[i][j] = f[i][j]
return F, m, n
def fft1D(f):
Fu, n = pad(f)
return fft(Fu), n
def fft2D(f):
F, m, n = pad2(f)
M, N = len(F), len(F[0])
Fuv = [ [0]*N for _ in xrange(M) ]
for i in range(0, M):
Fxv = fft(F[i])
for j in range(0, N):
Fuv[i][j] = (fft(Fxv))[j]
return Fuv, [m, n]
```
I called this module with this code:
```
from FFT import *
f= [0, 2, 3, 4]
F = fft1D(f)
print f, F
X, s = fft2D([[1,2,1,1],[2,1,2,2],[0,1,1,0], [0,1,1,1]])
for i in range(0, len(X)):
print X[i]
```
Its output is:
```
[0, 2, 3, 4] ([(9+0j), (-3+2j), (-3+0j), (-3-2j)], 4)
[(4+0j), (4-2.4492935982947064e-16j), (4+0j), (8+2.4492935982947064e-16j)]
[(8+0j), (8+2.4492935982947064e-16j), (8+0j), (4-2.4492935982947064e-16j)]
[0j, -2.33486982377251e-16j, (4+0j), (4+2.33486982377251e-16j)]
[0j, (4+0j), (4+0j), (4+0j)]
```
The first one, for 1D, is fine, as I verified the result against Matlab's output, but for the 2nd one Matlab's output is:
```
>> fft([1,2,1,1;2,1,2,2;0,1,1,0;0,1,1,1])
ans =
3.0000 5.0000 5.0000 4.0000
1.0000 - 2.0000i 1.0000 0 - 1.0000i 1.0000 - 1.0000i
-1.0000 1.0000 -1.0000 -2.0000
1.0000 + 2.0000i 1.0000 0 + 1.0000i 1.0000 + 1.0000i
```
The output is different, which means I'm doing something wrong in the code's logic. Please bear with me; I have not studied FFT formally yet, so I'm not able to understand the mathematics completely. Maybe after I've studied it, I may figure the problem out. | 2013/03/03 | [
"https://Stackoverflow.com/questions/15187184",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1442667/"
] | Your code is a little hard to follow, but it looks like you are taking the FFT along the same direction both times. Look up the integral form of the FT; you will see that the `x` and `y` integrations are independent. That is (sorry, this notation is awful; `'` indicates a function in Fourier space)
```
FT(f(x, y), x) -> f'(k, y)
FT(f'(k, y), y) -> f''(k, w)
```
So what you want to do is take the FFT of each *row* (that is, do N 1D FFTs) and shove the results into a new array (which takes you from `f(x, y) -> f'(k, y)`). Then take the FFT of each *column* of that result array (doing M 1D FFTs) and shove those results into another new array (which takes you from `f'(k, y) -> f''(k, w)`). | I agree with isedev that you should use numpy. It already has a great fft package that can do transforms in n dimensions.
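For example, a minimal sketch (not part of the original answer) showing that the row-then-column 1D FFTs match numpy's built-in 2D transform:
```
import numpy as np

f = np.array([[1, 2, 1, 1],
              [2, 1, 2, 2],
              [0, 1, 1, 0],
              [0, 1, 1, 1]], dtype=float)

rows = np.fft.fft(f, axis=1)     # 1D FFT of every row
full = np.fft.fft(rows, axis=0)  # then 1D FFT of every column

assert np.allclose(full, np.fft.fft2(f))  # same as the built-in 2D FFT
```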
<http://docs.scipy.org/doc/numpy/reference/routines.fft.html>
<http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.fft.fft.html> | 17,253 |
7,233,991 | I have created a `FileManager` for my personal files. The launcher for this manager is launched by the following script.
```
#!/usr/bin/python
from ui.MovieManager import MovieManager
MovieManager().showView()
```
The movie manager and other modules are situated in the `ui` and `core` packages, but when executing the file as a script, I get the following error.
```
vsd@homeworks:~/homework/ws-python/movie-database$ sh Launcher.py
from: can't read /var/mail/ui.MovieManager
```
I am not able to identify why this script is not picking up the `MovieManager` module under the current folder. However, when I execute the command `python Launcher.py`, it works well. | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/275097/"
] | It's not Python that generates the error.
Check this out:
```
blubb@nemo:~$ from ui.MovieManager import MovieManager
from: can't read /var/mail/ui.MovieManager
```
Mind you, this is the console, which is a logical consequence of you calling the script with `sh Launcher.py`. Instead, use `./Launcher.py`. For this to work your file needs to be marked as executable, though. | Have you tried going to the folder where Launcher.py is and running
```
./Launcher.py
``` | 17,254 |
39,075,309 | What I ran into is the coding question below:
<https://www.patest.cn/contests/pat-a-practise/1001>
>
> Calculate a + b and output the sum in standard format -- that is, the digits must be separated into groups of three by commas (unless there are less than four digits).
>
>
> Input
>
>
> Each input file contains one test case. Each case contains a pair of integers a and b where -1000000 <= a, b <= 1000000. The numbers are separated by a space.
>
>
> Output
>
>
> For each test case, you should output the sum of a and b in one line. The sum must be written in the standard format.
>
>
> Sample Input
>
>
> -1000000 9
>
>
> Sample Output
>
>
> -999,991
>
>
>
This is my code below:
```
if __name__ == "__main__":
aline = input()
astr,bstr = aline.strip().split()
a,b = int(astr),int(bstr)
sum = a + b
sumstr= str(sum)
result = ''
while sumstr:
sumstr, aslice = sumstr[:-3], sumstr[-3:]
if sumstr:
result = ',' + aslice + result
else:
result = aslice + result
print(result)
```
And the test result turns out to be (translated from Chinese):
>
> Time / Result / Score / Problem
>
> Language / Run time (ms) / Memory (kB) / User
>
> Aug 22, 15:46 / **Partially Correct** (Why?!!!) / 11 / 1001
>
> Python (python3 3.4.2) / 25 / 3184 / polar9527
>
> Test case / Result / Run time (ms) / Memory (kB) / Score / Full credit
>
> 0 / Wrong answer / 25 / 3056 / 0/9
>
> 1 / Correct / 19 / 3056 / 1/1
>
> 10 / Correct / 18 / 3184 / 1/1
>
> 11 / Correct / 19 / 3176 / 1/1
>
> 2 / Correct / 17 / 3180 / 1/1
>
> 3 / Correct / 16 / 3056 / 1/1
>
> 4 / Correct / 14 / 3184 / 1/1
>
> 5 / Correct / 17 / 3056 / 1/1
>
> 6 / Correct / 19 / 3168 / 1/1
>
> 7 / Correct / 22 / 3184 / 1/1
>
> 8 / Correct / 21 / 3164 / 1/1
>
> 9 / Correct / 15 / 3184 / 1/1
>
> | 2016/08/22 | [
"https://Stackoverflow.com/questions/39075309",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6419115/"
] | I can give you a simple case where the answers don't match: when you enter -1000000 and 9 as a and b in your input, you'll get -,999,991, which is wrong.
To get the right answer, you really should get to know string formatting in Python.
To solve this question, you can just write your code like this.
```
if __name__ == "__main__":
    aline = input()
    astr, bstr = aline.strip().split()
    a, b = int(astr), int(bstr)
    sum = a + b
    print('{:,}'.format(sum))
``` | Notice the behavior of your code when you input -1000 and 1. You need to handle the minus sign, because it is not a digit. | 17,255 |
21,346,725 | I am using Python with PyQt4 to build an app on Ubuntu, and it seems I have trouble with the menu bar, which doesn't show up; thanks for any help. Here is the code:
```
import sys
from PyQt4 import QtGui
class Example(QtGui.QMainWindow):
def __init__(self):
super(Example, self).__init__()
self.initUI()
def initUI(self):
exitAction = QtGui.QAction(QtGui.QIcon('exit.png'), '&Exit', self)
exitAction.setShortcut('Ctrl+Q')
exitAction.setStatusTip('Exit application')
exitAction.triggered.connect(QtGui.qApp.quit)
self.statusBar()
menubar = self.menuBar()
fileMenu = menubar.addMenu('&File')
fileMenu.addAction(exitAction)
self.setGeometry(300, 300, 300, 200)
self.setWindowTitle('Menubar')
self.show()
def main():
app = QtGui.QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
```
Excuse me if the indentation isn't correct, but I think it is. | 2014/01/25 | [
"https://Stackoverflow.com/questions/21346725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1233240/"
] | In Ubuntu, the menu bar is outside the application window; you can find it in the global menu. | There is nothing wrong with your code. First, run your code and maximize your GUI (graphical user interface); you will see that your code runs fine, and you can understand what actually happens in Ubuntu. Ubuntu always shows the menu bar (for your GUI as well) at the top of the screen, no matter what the size of your application is. | 17,256 |
13,903,467 | I am using Win 8, Eclipse and PyDev. I installed PyDev and it can run a simple Python script.
Unfortunately, when I want to use the math module, I get an error sign next to the math command.

Undefined variable.
I would be very thankful if you could help me get rid of the error sign.
Best regards,
Peter | 2012/12/16 | [
"https://Stackoverflow.com/questions/13903467",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/619324/"
] | 'math' should be marked as a 'forced builtin' in window > preferences > pydev > interpreter - python (if it's not, that's your problem).
If it's properly configured, it probably means that PyDev wasn't able to spawn a shell to inspect the math module, which usually means that there's some firewall blocking that communication (if so, there are usually entries in your error log -- see: <http://pydev.org/faq.html#when_i_do_a_code_completion_pydev_hangs_what_can> for more details). | I cannot see the screenshot very well, but I see that on the first line you are doing:
```
from math import *
```
and then
```
print math.whatever
```
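A star import never binds the module name itself; a minimal corrected sketch:
```
import math  # binds the name 'math' so attribute access works

print(math.sqrt(2))
```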
In your original code, `math` itself is an undefined variable; you should have used `import math` instead of `from math import *`. | 17,262 |
43,407,522 | I usually connect to a local server using PuTTY. But now I need to create a file using a Python script; this file is huge, so I must put it on the local server.
Using PuTTY, I must enter my host address, password, username and port.
How do I do that?
This is my script:
```
import numpy as np
import glob
import os
P_Result_File_Path ="Path_To_the_Result_File"
Folder_path =r'Path_To_my_numpy_files'
os.chdir(Folder_path)
npfiles= glob.glob("*.npy")
npfiles.sort(key=os.path.getmtime)
print (npfiles)
loadedFiles = [np.load(npf) for npf in npfiles]
PArray=np.concatenate(loadedFiles, axis=0 )
np.save(P_Result_File_Path, PArray)
```
The `P_Result_File_Path` file has a huge size, so I need to save it on a local server. The problem in this case is that `Path_To_the_Result_File = /home/user/result.npy`, so this path is unknown locally; I need to connect to this server in order to create and put the resulting file there. | 2017/04/14 | [
"https://Stackoverflow.com/questions/43407522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6690199/"
] | In the end, I figured out myself how to get Laravel Echo working with Pusher, but without Vue.js:
1. Follow all the instructions found [here](https://laravel.com/docs/5.4/broadcasting).
2. Assuming you have Pusher installed and configured and Laravel Echo installed via npm, go to `your-project-folder/node_modules/laravel-echo/dist` and copy `echo.js` into your Laravel public folder (e.g. `your-project-folder/public/lib/js`). I use Grunt, so I automated this process; this is just for the sake of simplicity.
3. Add the reference in your Blade template:
`<script type="text/javascript" src="{{asset('lib/js/echo.js')}}"></script>`
4. At the beginning of your Blade Template, in the point marked below, insert this line of code (it's just to avoid a JS error using echo.js directly):
```
<script>
window.Laravel = <?php echo json_encode([
'csrfToken' => csrf_token(),
]); ?>;
var module = { }; /* <-----THIS LINE */
</script>
```
5. In your footer, after the inclusion of all the JS files, call Laravel Echo this way:
```
<script>
window.Echo = new Echo({
broadcaster: 'pusher',
key: '{{env("PUSHER_KEY")}}',
cluster: 'eu',
encrypted: true,
authEndpoint: '{{env("APP_URL")}}/broadcasting/auth'
});
</script>
```
6. If you want to listen for a channel, e.g. the notifications one, you can do it like this:
```
<script>
window.Echo.private('App.User.{{Auth::user()->id}}')
.notification((notification) => {
doSomeAmazingStuff();
});
</script>
``` | First create an event for broadcasting data as per the Laravel documentation, and check in the console/debugger whether your data is being broadcast or not. If your data is broadcasting, then use JavaScript to listen for the data as shown in the Pusher documentation.
Here you can check an example: <https://pusher.com/docs/javascript_quick_start> | 17,265 |
69,658,798 | I have the following Python code to convert a CSV file into a JSON file.
```
def make_json_from_csv(csv_file_path, json_file_path, unique_column_name):
import csv
import json
# create a dictionary
data = {}
# Open a csv reader called DictReader
with open(csv_file_path, encoding='utf-8') as csvf:
csv_reader = csv.DictReader(csvf)
primary_key_column_name = unique_column_name.lstrip() # remove leading space in string
# Convert each row into a dictionary
# and add it to data
for rows in csv_reader:
key = rows[primary_key_column_name]
data[key] = rows
# Open a json writer, and use the json.dumps()
# function to dump data
with open(json_file_path, 'w', encoding='utf-8') as jsonf:
jsonf.write(json.dumps(data, indent=4))
return None
```
The code above will convert ALL the rows in the CSV file into the JSON file. I want to convert only the last X rows to JSON.
I am using Python 3. | 2021/10/21 | [
"https://Stackoverflow.com/questions/69658798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] | In Python 3.6+ dicts keep insertion order, so to fetch the last rows of a dictionary, just do:
```
from itertools import islice
x = 5
d = {}
for i, v in enumerate("abcdedfghi"):
d[i] = v
d = dict(islice(d.items(), len(d) - x, len(d)))
print(d)
```
**Output**
```
{5: 'd', 6: 'f', 7: 'g', 8: 'h', 9: 'i'}
```
Basically add (change) these lines into your code:
```
from itertools import islice
x = 5
data = dict(islice(data.items(), len(data) - x, len(data)))
# Open a json writer, and use the json.dumps()
# function to dump data
with open(json_file_path, 'w', encoding='utf-8') as jsonf:
jsonf.write(json.dumps(data, indent=4))
``` | I would like to answer my own question by building on Dani Mesejo's answer. The credit goes entirely to him.
```
def make_json(csv_file_path, json_file_path,
unique_column_name, no_of_rows_to_extract):
import csv
import json
from itertools import islice
# create a dictionary
data = {}
# Open a csv reader called DictReader
with open(csv_file_path, encoding='utf-8') as csvf:
csv_reader = csv.DictReader(csvf)
primary_key_column_name = unique_column_name.lstrip() # remove leading space in string
# Convert each row into a dictionary
# and add it to data
for rows in csv_reader:
key = rows[primary_key_column_name]
data[key] = rows
data = dict(islice(data.items(), len(data) - no_of_rows_to_extract, len(data)))
# Open a json writer, and use the json.dumps()
# function to dump data
with open(json_file_path, 'w', encoding='utf-8') as jsonf:
jsonf.write(json.dumps(data, indent=4))
return None
``` | 17,268 |
23,322,025 | I am currently using Python `pandas` and want to know if there is a way to output data from pandas into Julia `DataFrames` and vice versa. (I think you can call Python from Julia with `PyCall`, but I am not sure if it works with dataframes.) Is there a way to call Julia from Python and have it take in `pandas` dataframes (without saving to another file format like CSV)?
When would it be advantageous to use Julia DataFrames rather than pandas, other than for extremely large datasets and for running things with many loops (like neural networks)? | 2014/04/27 | [
"https://Stackoverflow.com/questions/23322025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3159981/"
] | So there is a library developed for this:
`PyJulia` is a library used to interface with Julia from Python 2 and 3:
<https://github.com/JuliaLang/pyjulia>
It is experimental, but it somewhat works.
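A rough sketch of what calling into Julia looks like (assuming a working Julia install and the modern `pyjulia` API, which may differ from the version current when this was written):
```
from julia import Main  # pyjulia's gateway to the Julia Main module

Main.eval('using DataFrames')
Main.df = Main.eval('DataFrame(a = 1:3, b = ["x", "y", "z"])')
print(Main.eval('nrow(df)'))  # -> 3
```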
Secondly, Julia also has a front end for `pandas`, which is `Pandas.jl`:
<https://github.com/malmaud/Pandas.jl>
It looks to be just a wrapper for pandas, but you might be able to execute multiple functions using Julia's parallel features.
As for which is better: so far `pandas` has faster I/O, according to this: [reading csv in Julia is slow compared to Python](https://stackoverflow.com/questions/21890893/reading-csv-in-julia-is-slow-compared-to-python) | I'm a novice at this sort of thing but have definitely been using both as of late. Truth be told, they seem quite comparable, but there is far more documentation, Stack Overflow questions, etc. pertaining to pandas, so I would give it a slight edge. Do not let that fact discourage you, however, because Julia has some amazing functionality that I'm only beginning to understand. With large datasets, say over a couple of gigs, both packages are pretty slow, but again pandas seems to have a slight edge (by no means would I consider my benchmarking to be definitive). Without a more nuanced understanding of what you are trying to achieve, it's difficult for me to envision a circumstance where you would even want to call a pandas function while working with a Julia DataFrame or vice versa. Unless you are doing something pretty cerebral or working with really large datasets, I can't see going too wrong with either. When you say "output the data", what do you mean? Couldn't you write the pandas data object to a file and then open/manipulate that file in a Julia DataFrame (as you mention)? Again, unless you have a really good machine, reading gigs of data into either pandas or a Julia DataFrame is tedious and can be prohibitively slow. | 17,269 |
40,452,603 | I have written a simple Python3 program like below:
```
import sys
input = sys.stdin.read()
tokens = input.split()
print (tokens)
a = int(tokens[0])
b = int(tokens[1])
if ((a + b)> 18):
print ("Input numbers should be between 0 and 9")
else:
print(a + b)
```
but while running this like below:
```
C:\Python_Class>python APlusB.py
3 5<- pressed enter after this
```
but the output does not come until I hit Ctrl+C (on Windows):
```
C:\Python_Class>python APlusB.py
3 5
['3', '5']
8
Traceback (most recent call last):
File "APlusB.py", line 20, in <module>
print(a + b)
KeyboardInterrupt
``` | 2016/11/06 | [
"https://Stackoverflow.com/questions/40452603",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7123148/"
] | `sys.stdin.read()` will read until an EOF (end of file) is encountered. That's why "pressing enter" doesn't seem to do anything. You can send an EOF on Windows by typing `Ctrl`+`Z`, or on \*nix systems with `Ctrl`+`D`.
(Note that you probably still need to hit `Enter` before hitting `Ctrl`+`Z`. I don't think the terminal treats the EOF correctly if it's not at the start of a line.)
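For example, the same program can be rewritten to read one line with `input()` instead (a minimal sketch of the idea):
```
tokens = input().split()  # returns as soon as you press Enter
a, b = int(tokens[0]), int(tokens[1])

if a + b > 18:
    print("Input numbers should be between 0 and 9")
else:
    print(a + b)
```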
In short: if you just want to read input until a newline, use [`input()`](https://docs.python.org/3/library/functions.html#input) instead of `sys.stdin.read()`. | This happens because `sys.stdin.read` attempts to read *all the data* that the standard input can provide, including newlines, spaces, tabs, *whatever*. It will stop reading only if the interpreter is interrupted or it hits an end-of-file (Ctrl+D on UNIX-like systems and Ctrl+Z on Windows).
The standard function that asks for input is simply `input()`. | 17,270 |
69,425,666 | I'm currently working on an AppleScript math library, which mimics the Python `math` module. The Python `math` module has some constants, such as [Euler's number](https://en.wikipedia.org/wiki/E_%28mathematical_constant%29) and others. Currently, you can do something like this:
```applescript
set math to script "Math"
log math's E -- logs (*2.718281828459*)
set math's E to 10
log math's E -- logs (*10*)
```
So I tried searching for AppleScript constants and came across [the official documentation](https://developer.apple.com/library/archive/documentation/AppleScript/Conceptual/AppleScriptLangGuide/reference/ASLR_classes.html#//apple_ref/doc/uid/TP40000983-CH1g-BBCECDHC), where it is stated that `You cannot define constants in scripts; constants can be defined only by applications and by AppleScript.`
Is there a clever workaround for this or would I have to write a .sdef file for this sort of thing?
### EDIT:
I have now also tried this:
```applescript
log pi -- logs (*3.14159265359*)
set pi to 10
log pi -- logs (*10*)
```
`pi` is an Applescript constant. If you run the script a second time without compiling again, it looks something like this:
```applescript
log pi -- logs (*10*)
set pi to 10
log pi -- logs (*10*)
```
I don't want to mimic this behavior, but more so the behavior of other constants like `ask`, `yes`, `no`, etc. which complain, even if you try to set them to themselves. | 2021/10/03 | [
"https://Stackoverflow.com/questions/69425666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15209993/"
] | There is no way to explicitly define a constant in AppleScript. There are three approaches that might suffice, depending on what you're trying to achieve.
---
If you're using a Scripting Definition (sdef) in your library, you can add an enumeration to define terms you want to reserve, then handle them by cases in code. For instance, if you want to assign constant values to the terms 'tau', 'gamma', and 'lambda', you define an enumeration like so:
```
<enumeration name="Constants" code="CVal" description="defined constant values.">
<enumerator name="tau" code="tau&" description="tau constant."/>
<enumerator name="gamma" code="gam&" description="gamma constant."/>
<enumerator name="lambda" code="lmd!" description="lambda constant."/>
</enumeration>
```
Then in code have a handler to resolve them, and call it when needed:
```
to resolveConstant(cnst)
if cnst is tau then
return pi/2
else if cnst is gamma then
return 17.4683
else if cnst is lambda then
return "wallabies forever"
else
return missing value
end
end resolveConstant
```
---
Create handlers for each of your constants, and call them as functions:
```
on tau()
return pi/2
end tau
set x to 2 * tau() * 3^2 -- x = 28.2743
```
---
If you want *true* constants, you're going to have to shift away from a script library and code a faceless background app (like System Events or Image Events). From the perspective of an end-user it won't make much difference, save that they'll have to authorize having the application run, but it might mean a serious increase in labor on your end. | Every handler name with parameters is a constant in AppleScript. You can use this fact. Here, you can't change the name of the handler, so you can consider it to be like your constant pi identifier. It is a true constant because you can't set it, but you can get it whenever you want:
```
on constantPi()
3.14159265359
end constantPi
get constantPi() --> 3.14159265359
set constantPi() to 10 --> Access not allowed
```
Note: do not remove the parentheses in the last code line; otherwise you create an additional variable constantPi instead of your "constant". | 17,272 |
47,724,709 | I am trying to insert into a postgresql database in python 3.6 and currently am trying to execute this line
```
cur.execute("INSERT INTO "+table_name+"(price, buy, sell, timestamp) VALUES (%s, %s, %s, %s)",(exchange_rate, buy_rate, sell_rate, date))
```
but every time it tries to run, the inserted value has ' ' around it, so it turns out like `INSERT INTO table\_name('12', ..., ..., ...)` instead of
`INSERT INTO table\_name(12, ..., ..., ...)`. How can I make the string formatter leave the quotes out, or remove them somehow? It is causing a syntax error around the 12 because it doesn't need the single quotes. | 2017/12/09 | [
"https://Stackoverflow.com/questions/47724709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1771791/"
] | Use it with triple quotes. You may also pass table\_name as an element of the second parameter.
```
cur.execute("""INSERT INTO %s (price, buy, sell, timestamp) VALUES (%s, %s, %s, %s)""",(table_name, exchange_rate, buy_rate, sell_rate, date))
```
More detailed approach;
* Triple quotes give developers a chance to write an SQL query across multiple lines.
* They also allow you to use single and double quotes without escaping them. (This is beneficial for complex SQL queries, but you don't need that in your case.) | Use the new string formatting to get a clean representation. `%s` explicitly converts to a string, which you don't want; `format` chooses the most fitting type for you.
```
table_name = "myTable"
exchange_rate = 1
buy_rate = 2
sell_rate = 3
date = 123
x = "INSERT INTO {0} (price, buy, sell, timestamp) VALUES ({1}, {2}, {2}, {4})".format(
table_name, exchange_rate, buy_rate, sell_rate, date)
print x
>INSERT INTO myTable (price, buy, sell, timestamp) VALUES (1, 2, 2, 123)
``` | 17,273 |
19,612,822 | I know that there are different ways to do this, but I just want to know why my regex isn't working. This isn't actually something that I need to do, I just wanted to see if I could do this with a regex, and I have no idea why my code isn't working.
Given a string S, I want to find all non-overlapping substrings that contain a subsequence Q that obeys certain rules. Now, let's suppose that I am searching for the subsequence `"abc"`. I want to match a substring of S that contains `'a'` followed at some point by `'b'` followed at some point by `'c'` with the restriction that no `'a'` follows `'a'`, and no `'a'` or `'b'` follows `'b'`. The regex I am using is as follows (in python):
```
regex = re.compile(r'a[^a]*?b[^ab]*?c')
match = re.finditer(regex, string)
for m in match:
print m.group(0)
```
To me this breaks down and reads as follows:
`a[^a]*?b`: `'a'` followed the smallest # of characters not including `'a'` and terminating with a `'b'`
`[^ab]*?c`: the smallest # of characters not including `'a'` or `'b'` and terminating with a `'c'`
So putting this all together, I assumed that I would match non-overlapping substrings of S that contain the subsequence "abc" that obeys my rules of exclusion.
This **works fine** for something like:
`S = "aqwertybwertcaabcc"`, which gives me `"aqwertybwertc"` and `"abc"`,
but it **fails** to work for `S = "abbc"`, as in it matches to `"abbc"`. | 2013/10/26 | [
"https://Stackoverflow.com/questions/19612822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2073001/"
] | Assuming what you actually want is for the subsequence Q to contain no `a`s between the first `a` and the first `b` and no `a`s or `b`s between the first `b` and the first `c` after the first `b`, the correct regex to use is:
```
r'a[^ab]*b[^abc]*c'
```
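A quick check of the corrected pattern (a sketch, not part of the original answer):
```
import re

pattern = re.compile(r'a[^ab]*b[^abc]*c')

print(pattern.findall("aqwertybwertcaabcc"))  # ['aqwertybwertc', 'abc']
print(pattern.findall("abbc"))                # [] -- the second 'b' now blocks a match
```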
The regex that you're currently using will do everything that it can to succeed on a string, including matching the literal `b` to a `b` after the first `b`, which is why `"abbc"` is matched. Only by specifically excluding `b` in the first character class can this be avoided and the `b` be made to match only the first `b` after the `a`. | It could help if you look at the inverse class.
In all cases `abc` is the trivial solution.
And, in this case non-greedy probably doesn't apply because
there are fixed sets of characters used in the example inverse classes.
```
# Type 1 :
# ( b or c can be between A,B )
# ( a or b can be between B,C )
# ------------------------------
a # 'a'
[b-z]*? # [^a]
b # 'b'
[abd-z]*? # [^c]
c # 'c'
# Type 2, yours :
# ( b or c can be between A,B )
# ( c can be between B,C )
# ------------------------------
a # 'a'
[b-z]*? # [^a]
b # 'b'
[c-z]*? # [^ab]
c # 'c'
# Type 3 :
# ( c can be between A,B )
# ------------------------------
a # 'a'
[c-z]*? # [^ab]
b # 'b'
[d-z]*? # [^abc]
c # 'c'
# Type 4 :
# ( distinct A,B,C ) :
# ------------------------------
a # 'a'
[d-z]*? # [^abc]
b # 'b'
[d-z]*? # [^abc]
c # 'c'
``` | 17,274 |
64,146,892 | I'm trying to create a word counter in Python that prints the longest word, then sorts all words over 5 letters by frequency. The longest word works, and the counter works; I just can't figure out how to make it check only words over 5 letters. If I run it, it works, but the words under 5 letters are still there.
Here's the code that I have:
```
print(max(declarationWords,key=len))
for word in declarationWords:
if len(word) >= 5:
declarationWords.remove(word)
print(Counter(declarationWords).most_common())
``` | 2020/09/30 | [
"https://Stackoverflow.com/questions/64146892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14370546/"
] | Okay, so I took a different approach and changed my code. The following code is now functional, although I still have no idea what was causing the original issue.
```
public CharacterController controller;
private float speed;
public float walkSpeed = 5f;
public float runSpeed = 10f;
public float turnSpeed = 90f;
public float jumpSpeed = 8f;
public float gravity = 9.8f;
private float vSpeed = 0f;
void Update()
{
if (Input.GetButton("Fire3"))
{
transform.Rotate(0, Input.GetAxis("Horizontal") * turnSpeed * Time.deltaTime, 0);
Vector3 vel = transform.forward * Input.GetAxis("Vertical") * runSpeed;
if (controller.isGrounded)
{
vSpeed = 0;
if (Input.GetButtonDown("Jump"))
{
vSpeed = jumpSpeed;
}
}
vSpeed -= gravity * Time.deltaTime;
vel.y = vSpeed;
controller.Move(vel * Time.deltaTime);
}
else
{
speed = walkSpeed;
transform.Rotate(0, Input.GetAxis("Horizontal") * turnSpeed * Time.deltaTime, 0);
Vector3 vel = transform.forward * Input.GetAxis("Vertical") * walkSpeed;
if (controller.isGrounded)
{
vSpeed = 0;
if (Input.GetButtonDown("Jump"))
{
vSpeed = jumpSpeed;
}
}
vSpeed -= gravity * Time.deltaTime;
vel.y = vSpeed;
controller.Move(vel * Time.deltaTime);
}
}
``` | This happens because the character controller has gravity, so when you enable it, gravity is applied to the player and drags the player down. To fix this, you will need to set the player's position upwards in the script.
```
public float walkSpeed = 3f;
public float runSpeed = 6f;
public float gravity = -9.81f;
public float jumpHeight = 4f;
if ((Input.GetButtonDown("Jump")))
{
Vector3 antigravity = new Vector3(0, Mathf.Sqrt(jumpHeight * -2f * gravity), 0);
controller.Move(antigravity);
}
if ((Input.GetAxis("Horizontal") != 0) || (Input.GetAxis("Vertical") != 0))
{
float xInput = Input.GetAxis("Horizontal");
float zInput = Input.GetAxis("Vertical");
float running = Input.GetAxis("Fire3");
Vector3 move = transform.right * xInput + transform.forward * zInput;
if (running > 0)
{
controller.Move((move * runSpeed * Time.deltaTime));
}
else
{
controller.Move(move * walkSpeed * Time.deltaTime);
}
void Update()
{
transform.position = new Vector3(0, 15, 0); //Set your player position
}
} // If this doesn't help, tell me what's the problem.
``` | 17,275 |
32,162,757 | I am using MongoDB with Python. I want the user to enter a document in JSON format so that I can insert it into some collection in my DB. How can this be done? | 2015/08/23 | [
"https://Stackoverflow.com/questions/32162757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4784437/"
] | Just use conditional aggregation:
```
select Id,
(sum(case when Value > 3.0 then 1 else 0 end) -
sum(case when Value < 3.0 then 1 else 0 end) -- or maybe 2.9
) as TotalVotes
from [Ratings]
group by Id
order by Id desc;
```
Alternatively, you could write:
```
select id, sum(case when Value > 3.0 then 1 else -1 end)
``` | SQL Server allows you to specify conditions in aggregate functions. In your case, you need to use SUM with conditions.
So, this is how your final query looks:
```
select Id, Value,SUM(CASE WHEN Value>3.0 THEN 1 ELSE -1 END) AS VoteCount
from [Ratings]
group by Id
order by Id desc
``` | 17,278 |
50,616,254 | I need to do the following operation in python:
I have a list of tuples
```
data = [("John", 14, 12132.213, "Y", 34), ("Andrew", 23, 2121.21, "N", 66)]
```
I have a list of fields:
```
fields = ["name", "age", "vol", "status", "limit"]
```
Each tuple of the data is for each of the fields in order.
I have a dict
```
desc = { "name" : "string", "age" : "int", "vol" : "double", "status" : "byte", "limit" : "int" }
```
I need to generate a message to be sent over in the following format :
```
[{"column": "name", "value": {"String": "John"}}, {"column": "age", "value": {"Int": 14}}, {"column": "vol", "value": {"Double": 12132.213}}, {"column": "status", "value": {"Byte": 89}}, {"column": "limit", "value": {"Int": 34}},
{"column": "name", "value": {"String": "Andrew"}}, {"column": "age", "value": {"Int": 23}}, {"column": "vol", "value": {"Double":2121.21}}, {"column": "status", "value": {"Byte": 78}}, {"column": "limit", "value": {"Int": 66}}]
```
I have two functions that generates this :
```
def get_value(data_type, res):
if data_type == 'string':
return {'String' : res.strip()}
elif data_type == 'byte' :
return {'Byte' : ord(res[0])}
elif data_type == 'int':
return {'Int' : int(res)}
elif data_type == 'double':
return {'Double' : float(res)}
def generate_message(data, fields, desc):
result = []
for row in data:
for field, res in zip(fields, row):
data_type = desc[field]
val = {'column' : field,
'value' : get_value(data_type, res)}
result.append(val)
return result
```
However, the data is really large, with a huge number of tuples (~200,000). It takes a lot of time to generate the above message format for each of them. Is there an efficient way of doing this?
P.S. I need such a message because I am sending it on a queue and the consumer is a C++ client that needs the type information. | 2018/05/31 | [
"https://Stackoverflow.com/questions/50616254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3434649/"
] | I don't remember having this problem but in at least one case I did something that will work around the issue.
I put an index.js in the root folder that runs the actual dependency in dist. Then the bin that npm looks for is a file that's present, and it shouldn't freak out.
It won't work until tsc is run, of course. But it should resolve your chicken-and-egg problem. | It looks like the `preinstall` script is what you need.
Add in your `package.json` file as
```
{
"scripts": {
"preinstall" : "tsc ..." // < build stuff
}
}
```
Reference <https://docs.npmjs.com/misc/scripts> | 17,279 |
56,501,297 | I'm trying to set up Visual Studio Code for Python, and everything is good except Kivy.
I have this simple code:
```
import kivy
from kivy.app import App
from kivy.uix.label import Label
from kivy.uix.gridlayout import GridLayout
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.widget import Widget
class MyGrid(Widget):
pass
class MyApp(App):
def build(self):
return MyGrid()
if __name__ == "__main__":
MyApp().run()
```
and simple kivy file
```
#:kivy
<MyGrid>:
GridLayout:
cols:1
size: root.width, root.height
GridLayout:
cols:2
Label:
text: "Name: "
TextInput:
multinline:False
Label:
text: "Email: "
TextInput:
multiline:False
Button:
text:"Submit"
```
And when I try to run the Python file, I get:
```
kivy.lang.parser.ParserException: Parser: File "c:\Users\Paweł\Documents\projects vscode\WeatherProject\my.kv", line 1:
1:#:kivy
2:<MyGrid>:
3:GridLayout:
Unknown directive
```
Google isn't helpful at all. Please tell me what I should do. | 2019/06/07 | [
"https://Stackoverflow.com/questions/56501297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7869295/"
] | Maybe this will fix it!
```
MyGrid:
<MyGrid>:
GridLayout:
cols:1
size: root.width, root.height
GridLayout:
cols:2
Label:
text: "Name: "
TextInput:
multinline:False
Label:
text: "Email: "
TextInput:
multiline:False
Button:
text:"Submit"
``` | remove the indent from
GridLayout: | 17,288 |
46,999,929 | I want to create a Telegram Messenger bot with the *python-telegram-bot* framework!
Now, the bot must send a message with a specific font. This means the bot sends a message with a different and beautiful font - a font different from the Telegram Messenger font.
How can I do it? | 2017/10/29 | [
"https://Stackoverflow.com/questions/46999929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8444979/"
] | No one (not even the official clients) can send messages in a different font/color, but you can make a suggestion to [@Telegram](https://twitter.com/telegram). They will consider adding this as a feature.
There are limited [formatting options](https://core.telegram.org/bots/api#formatting-options) for the message text, and you might like them. | The only color that you can use is red, or you can set the background color to gray.
```
str = "`Hello`" #this will turn the text red on Telegram.
str = "```Hello```" #this will turn the background color gray of the text on Telegram
```
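Putting it together with the python-telegram-bot framework the question mentions (a hedged sketch assuming the pre-v20 API; the token and chat id are placeholders):
```
from telegram import Bot, ParseMode

bot = Bot(token="YOUR_TOKEN")   # placeholder token
bot.send_message(
    chat_id=123456,             # placeholder chat id
    text="`Hello`",             # rendered in monospace by Telegram
    parse_mode=ParseMode.MARKDOWN,
)
```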
Then at the **sendMessage** function, you need to add the parameter **parse\_mode** and set it to **"Markdown"**. | 17,291 |
33,511,259 | **How to find the majority votes for a list that can contain -1s, 1s and 0s?**
For example, given a list of:
```
x = [-1, -1, -1, -1, 0]
```
The majority is -1 , so the output should return `-1`
Another example, given a list of:
```
x = [1, 1, 1, 0, 0, -1]
```
The majority vote would be `1`
And when we have a tie, the majority vote should return 0, e.g.:
```
x = [1, 1, 1, -1, -1, -1]
```
This should also return zero:
```
x = [1, 1, 0, 0, -1, -1]
```
The simplest case to get the majority vote seem to sum the list up and check whether it's negative, positive or 0.
```
>>> x = [-1, -1, -1, -1, 0]
>>> sum(x) # So majority -> 0
-4
>>> x = [-1, 1, 1, 1, 0]
>>> sum(x) # So majority -> 1
2
>>> x = [-1, -1, 1, 1, 0]
>>> sum(x) # So majority is tied, i.e. -> 0
0
```
After the sum, I could do this check to get the majority vote, i.e.:
```
>>> x = [-1, 1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
1
>>> x = [-1, -1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
0
```
But as noted previously, it's ugly: [Python putting an if-elif-else statement on one line](https://stackoverflow.com/questions/14029245/python-putting-an-if-elif-else-statement-on-one-line) and not pythonic.
So the solution seems to be
```
>>> x = [-1, -1, 1, 1, 0]
>>> if sum(x) == 0:
... majority = 0
... else:
... majority = -1 if sum(x) < 0 else 1
...
>>> majority
0
```
---
EDITED
======
But there are cases where `sum()` won't work, e.g. @RobertB's:
```
>>> x = [-1, -1, 0, 0, 0, 0]
>>> sum(x)
-2
```
But in this case the majority vote should be 0!! | 2015/11/03 | [
"https://Stackoverflow.com/questions/33511259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] | I am assuming that votes for 0 count as votes. So `sum` is not a reasonable option.
Try a Counter:
```
>>> from collections import Counter
>>> x = Counter([-1,-1,-1, 1,1,1,1,0,0,0,0,0,0,0,0])
>>> x
Counter({0: 8, 1: 4, -1: 3})
>>> x.most_common(1)
[(0, 8)]
>>> x.most_common(1)[0][0]
0
```
So you could write code like:
```
from collections import Counter
def find_majority(votes):
vote_count = Counter(votes)
top_two = vote_count.most_common(2)
if len(top_two)>1 and top_two[0][1] == top_two[1][1]:
# It is a tie
return 0
return top_two[0][0]
>>> find_majority([1,1,-1,-1,0]) # It is a tie
0
>>> find_majority([1,1,1,1, -1,-1,-1,0])
1
>>> find_majority([-1,-1,0,0,0]) # Votes for zero win
0
>>> find_majority(['a','a','b',]) # Totally not asked for, but would work
'a'
``` | You can [count occurrences](https://stackoverflow.com/questions/2600191/how-can-i-count-the-occurrences-of-a-list-item-in-python) of 0 and test whether they are the majority.
```
>>> x = [1, 1, 0, 0, 0]
>>> if sum(x) == 0 or x.count(0) >= len(x) / 2.0:
... majority = 0
... else:
... majority = -1 if (sum(x) < 0) else 1
... majority
0
``` | 17,294 |
7,692,121 | I saw [this question](https://stackoverflow.com/questions/4978738/is-there-a-python-equivalent-of-the-c-null-coalescing-operator) but it uses the ?? operator as a null check, I want to use it as a bool true/false test.
I have this code in Python:
```
if self.trait == self.spouse.trait:
trait = self.trait
else:
trait = defualtTrait
```
In C# I could write this as:
```
trait = this.trait == this.spouse.trait ? this.trait : defualtTrait;
```
Is there a similar way to do this in Python? | 2011/10/07 | [
"https://Stackoverflow.com/questions/7692121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/116286/"
] | Yes, you can write:
```
trait = self.trait if self.trait == self.spouse.trait else defaultTrait
```
This is called a [Conditional Expression](http://docs.python.org/reference/expressions.html#conditional-expressions) in Python. | On the null-coalescing operator in C#, what you have in the question isn't a correct usage. That would fail at compile time.
In C#, the correct way to write what you're attempting would be this:
```
trait = this.trait == this.spouse.trait ? this.trait : defaultTrait
```
Null coalesce in C# returns the first value that isn't null in a chain of values (or null if there are no non-null values). For example, what you'd write in C# to return the first non-null trait or a default trait if all the others were null is actually this:
```
trait = this.spouse.trait ?? this.trait ?? defaultTrait;
``` | 17,304 |
42,871,090 | As the title says, is there a way to change the default pip to pip2.7
When I run `sudo which pip`, I get `/usr/local/bin/pip`
When I run `sudo pip -V`, I get `pip 1.5.6 from /usr/lib/python3/dist-packages (python 3.4)`
If there is no problem at all with this mixed version, please do tell. If there is a problem with downloading dependencies from different pip versions, how can I change to pip2.7?
I know I can `pip2.7 install somePackage` but I don't like it. I feel I could forget to do this at any point.
Other info: Ubuntu 15.10 | 2017/03/18 | [
"https://Stackoverflow.com/questions/42871090",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2400585/"
] | * You can use `alias pip='pip2.7'`. Put this in your `.bashrc` file (if you're using bash; for zsh it should be `.zshrc`).
By the way, you should know that the `sudo` command changes the current user (to `root` by default). So if you have to change user to `root`, maybe you should put it in `/root/.bashrc`.
* Or you can make a link
```
ln -s /usr/local/bin/pip2.7 /usr/local/bin/pip
```
Also, you can try to use `virtualenv`; it's the best choice for multiple versions, in my opinion. | A very intuitive and straightforward method is to just modify the settings in `/usr/local/bin/pip`. You don't need aliases or symbolic links. For mine:
1. Check the info:
===================
```
lerner@lerner:~/$ pip -V
```
>
> `pip 1.5.4 from /usr/lib/python3/dist-packages (python 3.4)`
>
>
>
```
lerner@lerner:~/$ pip2 -V
```
>
> `pip 9.0.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)`
>
>
>
```
lerner@lerner:~/$ whereis pip
```
>
>
> ```
> pip: /usr/local/bin/pip3.4 /usr/local/bin/pip2.7 /usr/local/bin/pip
>
> ```
>
>
2. Change the setting:
======================
Change the python3 to python2; be careful of its version (1.5.4 to 9.0.1 everywhere). I just change the pip file to this:
```
lerner@lerner:~/$ sudo vim /usr/local/bin/pip
```
>
>
> ```
> #!/usr/bin/python2
> # EASY-INSTALL-ENTRY-SCRIPT: 'pip==9.0.1','console_scripts','pip'
> __requires__ = 'pip==9.0.1' import sys from pkg_resources import load_entry_point
>
> if __name__ == '__main__':
> sys.exit(
> load_entry_point('pip==9.0.1', 'console_scripts', 'pip')()
> )
>
> ```
>
>
3. Now save and check:
======================
```
lerner@lerner:~/$ pip -V
```
>
> `pip 9.0.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)`
>
>
>
Done. | 17,305 |
67,948,945 | I want to force the Huggingface transformer (BERT) to make use of CUDA.
nvidia-smi showed that all my CPU cores were maxed out during the code execution, but my GPU was at 0% utilization. Unfortunately, I'm new to the Hugginface library as well as PyTorch and don't know where to place the CUDA attributes `device = cuda:0` or `.to(cuda:0)`.
The code below is basically a customized part from [german sentiment BERT working example](https://huggingface.co/oliverguhr/german-sentiment-bert)
```
class SentimentModel_t(pt.nn.Module):
def __init__(self, model_name: str = "oliverguhr/german-sentiment-bert"):
DEVICE = "cuda:0" if pt.cuda.is_available() else "cpu"
print(DEVICE)
super(SentimentModel_t,self).__init__()
self.model = AutoModelForSequenceClassification.from_pretrained(model_name).to(DEVICE)
self.tokenizer = BertTokenizerFast.from_pretrained(model_name)
def predict_sentiment(self, texts: List[str])-> List[str]:
texts = [self.clean_text(text) for text in texts]
# Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model.
input_ids = self.tokenizer.batch_encode_plus(texts,padding=True, add_special_tokens=True, truncation=True, max_length=self.tokenizer.max_len_single_sentence)
input_ids = pt.tensor(input_ids["input_ids"])
with pt.no_grad():
logits = self.model(input_ids)
label_ids = pt.argmax(logits[0], axis=1)
labels = [self.model.config.id2label[label_id] for label_id in label_ids.tolist()]
return labels
```
EDIT: After applying the suggestions of @KonstantinosKokos (see edited code above) I got a
```
RuntimeError: Input, output and indices must be on the current device
```
pointing to
```
with pt.no_grad():
logits = self.model(input_ids)
```
The full error code can be obtained down below:
```
<ipython-input-15-b843edd87a1a> in predict_sentiment(self, texts)
23
24 with pt.no_grad():
---> 25 logits = self.model(input_ids)
26
27 label_ids = pt.argmax(logits[0], axis=1)
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1364 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1365
-> 1366 outputs = self.bert(
1367 input_ids,
1368 attention_mask=attention_mask,
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict)
859 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
860
--> 861 embedding_output = self.embeddings(
862 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
863 )
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
196
197 if inputs_embeds is None:
--> 198 inputs_embeds = self.word_embeddings(input_ids)
199 token_type_embeddings = self.token_type_embeddings(token_type_ids)
200
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)
122
123 def forward(self, input: Tensor) -> Tensor:
--> 124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
126 self.norm_type, self.scale_grad_by_freq, self.sparse)
~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
``` | 2021/06/12 | [
"https://Stackoverflow.com/questions/67948945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15445597/"
] | You can make the entire class inherit `torch.nn.Module` like so:
```
class SentimentModel_t(torch.nn.Module):
    def __init__(self, ...):
super(SentimentModel_t, self).__init__()
...
```
Upon initializing your model you can then call `.to(device)` to cast it to the device of your choice, like so:
```
sentiment_model = SentimentModel_t(...)
sentiment_model.to('cuda')
```
The `.to()` recursively applies to all submodules of the class, `model` being one of them (hugging face model inherit `torch.nn.Module`, thus providing an implementation for `to()`).
Note that this makes choosing the device in `__init__()` redundant: it's now an external context that you can switch to/from easily.
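Either way, note that the input tensors must live on the same device as the model's weights; that is what the `RuntimeError` in the question's edit complains about. A minimal sketch of the extra step inside `predict_sentiment` (not from the original answer):
```
# Move the batch to wherever the model's parameters are.
device = next(self.model.parameters()).device
input_ids = input_ids.to(device)

with pt.no_grad():
    logits = self.model(input_ids)
```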
---
Alternatively, you can hardcode the device by casting the contained BERT model directly into cuda (less elegant):
```
class SentimentModel_t():
def __init__(self, ...):
DEVICE = "cuda:0" if pt.cuda.is_available() else "cpu"
print(DEVICE)
self.model = AutoModelForSequenceClassification.from_pretrained(model_name).to(DEVICE)
``` | I am a bit late to the party. The python package that I wrote already uses your GPU. You can have a look at the [code to see how it was implemented](https://github.com/oliverguhr/german-sentiment-lib/blob/master/germansentiment/sentimentmodel.py)
Just install the package:
```
pip install germansentiment
```
and run the code:
```
from germansentiment import SentimentModel
model = SentimentModel()
texts = [
"Mit keinem guten Ergebniss","Das ist gar nicht mal so gut",
"Total awesome!","nicht so schlecht wie erwartet",
"Der Test verlief positiv.","Sie fährt ein grünes Auto."]
result = model.predict_sentiment(texts)
print(result)
```
**Important:** If you write your own code to use the model, you need to run the preprocessing code as well. Otherwise the results can be off. | 17,308 |
39,502,345 | I have two columns in a pandas dataframe that are supposed to be identical. Each column has many NaN values. I would like to compare the columns, producing a 3rd column containing True / False values; *True* when the columns match, *False* when they do not.
This is what I have tried:
```
df['new_column'] = (df['column_one'] == df['column_two'])
```
The above works for the numbers, but not the NaN values.
I know I could replace the NaNs with a value that doesn't make sense to be in each row (for my data this could be -9999), and then remove it later when I'm ready to echo out the comparison results, however I was wondering if there was a more pythonic method I was overlooking. | 2016/09/15 | [
"https://Stackoverflow.com/questions/39502345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762492/"
] | Or you could just use the `equals` method:
```
df['new_column'] = df['column_one'].equals(df['column_two'])
```
It is a batteries included approach, and will work no matter the `dtype` or the content of the cells. You can also put it in a loop, if you want. | To my understanding, Pandas does not consider NaNs different in element-wise equality and inequality comparison methods. While it does when comparing entire Pandas objects (Series, DataFrame, Panel).
>
> NaN values are considered different (i.e. NaN != NaN). - [source](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html)
>
>
>
**Element-wise equality assertion [`.eq()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html#pandas.DataFrame.eq)**
Compare the values of 2 columns for each row individually. This will return a Series of assertions.
*Option 1*: Chain the [`.eq()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html#pandas.DataFrame.eq) method with [`.fillna()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html).
```py
df['new_column'] = df['column_one'].fillna('-').eq(df['column_two'].fillna('-'))
```
Option 2: Replace the NaN assertions afterwards using [`.loc()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html) and [`.isna()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isna.html).
```py
df['new_column'] = df['column_one'].eq(df['column_two'])
df.loc[test['column_one'].isna() & test['column_two'].isna(),'new_column'] = True
```
Note that both options are non-destructive regarding the source data in *column\_one* and *column\_two*. It is also worth having a look at the [working with missing data](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html) guide in the Pandas docs.
**Object-wise equality assertion [`.equals()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.equals.html)**
Compare Pandas objects (Series, DataFrame, Panel) as a whole, interpreting all rows and their order as a single value. This will return a single boolean value (scalar).
```
df['column_one'].equals(df['column_two'])
``` | 17,309 |
65,583,958 | I have a Python program as follows:
```
class a:
def __init__(self,n):
self.n=n
def __del__(self,n):
print('dest',self.n,n)
def b():
d=a('d')
c=a('c')
d.__del__(8)
b()
```
Here, I have given a parameter `n` in `__del__()` just to clear up my doubt. Its output:
```
$ python des.py
dest d 8
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
```
In classical programming languages like C++, we can't give parameters to the destructor. To find out whether that applies to Python too, I executed this program. Why does the interpreter allow the parameter `n` to be given as a parameter for the destructor? How can I specify a value for that `n` then? As a try, I gave an argument to `__del__()` explicitly and it went fine. But without that, how can I specify the value for `n`? | 2021/01/05 | [
"https://Stackoverflow.com/questions/65583958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | you cannot. Pre-defined dunder methods (methods with leading and trailing double underscores) like `__del__` have a fixed signature.
If you define them with another signature, then when python calls them using the non-dunder interface (`del`, `len`, ...), the number of arguments is wrong and it fails.
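A minimal sketch of the workaround (storing the value on the instance instead):
```
class A:
    def __init__(self, n):
        self.n = n          # keep n as an object member

    def __del__(self):      # the fixed, argument-free signature
        print('dest', self.n)

a = A('d')
del a  # prints: dest d
```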
That is what the sketch above does: to pass `n` to `del`, you have to define it as an object member. | Python objects become a candidate for garbage collection when there are no more references to them (reference counting), so you do not need to create such a destructor.
If you want to add optional arguments to a method, it's common to default them to `None` or an empty tuple `()`:
```
def other_del(self, x=None):
...
``` | 17,312 |