Dataset schema: qid: int64 (46k to 74.7M) · question: string (length 54 to 37.8k) · date: string (length 10) · metadata: sequence (length 3) · response_j: string (length 29 to 22k) · response_k: string (length 26 to 13.4k) · __index_level_0__: int64 (0 to 17.8k)
49,411,277
I'm using Python to automate some reporting, but I am stuck trying to connect to an SSAS cube. I am on Windows 7 using Anaconda 4.4, and I am unable to install any libraries beyond those included in Anaconda. I have used pyodbc+pandas to connect to SQL Server databases and extract data with SQL queries, and the goal now is to do something similar on an SSAS cube, using an MDX query to extract data, but I can't get a successful connection. This first connection string is very similar to the strings that I used to connect to the SQL Server databases, but it gives me an authentication error. I can access the cube no problem using SQL Server Management Studio so I know that my Windows credentials have access. ``` connection = pyodbc.connect('Trusted_Connection=yes',DRIVER='{SQL Server}',SERVER='Cube Server', database='Cube') query = "MDX query" report_df = pandas.read_sql(query, connection) Error: ('28000', "[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user '*****'. (18456) (SQLDriverConnect)") ``` When I tried to replicate the attempts at [Question1](https://stackoverflow.com/questions/24712994/connect-to-sql-server-analysis-service-from-python) and [Question2](https://stackoverflow.com/questions/38985729/connect-to-an-olap-cube-using-python-on-linux) I got a different error: ``` Error: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)') ``` Any help/guidance would be greatly appreciated. My experience with SSAS cubes is minimal, so it is possible that I am on the completely wrong path for this task and that even if the connection issue gets solved, there will be another issue loading the data into pandas, etc.
2018/03/21
[ "https://Stackoverflow.com/questions/49411277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9529670/" ]
SSAS doesn't support [ODBC clients](https://learn.microsoft.com/en-us/sql/analysis-services/instances/data-providers-used-for-analysis-services-connections). It does provide HTTP access through IIS, which requires [a few configuration steps](https://learn.microsoft.com/en-us/sql/analysis-services/instances/configure-http-access-to-analysis-services-on-iis-8-0). Once configured, any client can issue XMLA queries over HTTP. The [xmla package](https://pypi.python.org/pypi/xmla/) can connect to various OLAP sources, including SSAS over HTTP.
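A minimal sketch of issuing an MDX query over XMLA from Python, assuming HTTP access is already configured; the endpoint URL, catalog name, and query below are placeholders to replace with your own:

```python
import olap.xmla.xmla as xmla

# Hypothetical msmdpump.dll endpoint exposed through IIS.
provider = xmla.XMLAProvider()
connection = provider.connect(location="http://your-server/olap/msmdpump.dll")

# Run an MDX query against a named catalog (both are placeholders).
result = connection.Execute("SELECT ... FROM [YourCube]", Catalog="YourCatalog")
```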
Perhaps this solution will help you: <https://stackoverflow.com/a/65434789/14872543>. The idea is to use an `OpenRowset` construct on a linked MSSQL Server: ``` SELECT olap.* from OpenRowset ('"+ olap_conn_string+"',' " + mdx_string +"') "+ 'as olap' ```
17,554
65,605,972
Before downgrading my GCC, I want to know if there's a way to figure which programs/frameworks or dependencies in my machine will break and if there is a better way to do this for openpose installation? (e.g. changing something in CMake) Is there a hack to fix this without changing my system GCC version and potentially breaking other things? ``` [10889:10881 0:2009] 09:21:36 Wed Jan 06 [mona@goku:pts/0 +1] ~/research/code/openpose/build $ make -j`nproc` [ 12%] Performing configure step for 'openpose_lib' CMake Warning (dev) at cmake/Misc.cmake:32 (set): implicitly converting 'BOOLEAN' to 'STRING' type. Call Stack (most recent call first): CMakeLists.txt:25 (include) This warning is for project developers. Use -Wno-dev to suppress it. -- Found gflags (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so) -- Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so) -- Found PROTOBUF Compiler: /usr/local/bin/protoc -- HDF5: Using hdf5 compiler wrapper to determine C configuration -- HDF5: Using hdf5 compiler wrapper to determine CXX configuration -- CUDA detected: 10.1 -- Added CUDA NVCC flags for: sm_75 -- Found Atlas: /usr/include/x86_64-linux-gnu -- Found Atlas (include: /usr/include/x86_64-linux-gnu library: /usr/lib/x86_64-linux-gnu/libatlas.so lapack: /usr/lib/x86_64-linux-gnu/liblapack.so -- Python interface is disabled or not all required dependencies found. Building without it... -- Found Git: /usr/bin/git (found version "2.25.1") -- -- ******************* Caffe Configuration Summary ******************* -- General: -- Version : 1.0.0 -- Git : 1.0-149-g1807aada -- System : Linux -- C++ compiler : /usr/bin/c++ -- Release CXX flags : -O3 -DNDEBUG -fPIC -Wall -std=c++11 -Wno-sign-compare -Wno-uninitialized -- Debug CXX flags : -g -fPIC -Wall -std=c++11 -Wno-sign-compare -Wno-uninitialized -- Build type : Release -- -- BUILD_SHARED_LIBS : ON -- BUILD_python : OFF -- BUILD_matlab : OFF -- BUILD_docs : OFF -- CPU_ONLY : OFF -- USE_OPENCV : OFF -- USE_LEVELDB : OFF -- USE_LMDB : OFF -- USE_NCCL : OFF -- ALLOW_LMDB_NOLOCK : OFF -- USE_HDF5 : ON -- -- Dependencies: -- BLAS : Yes (Atlas) -- Boost : Yes (ver. 1.71) -- glog : Yes -- gflags : Yes -- protobuf : Yes (ver. 3.6.1) -- CUDA : Yes (ver. 
10.1) -- -- NVIDIA CUDA: -- Target GPU(s) : Auto -- GPU arch(s) : sm_75 -- cuDNN : Disabled -- -- Install: -- Install path : /home/mona/research/code/openpose/build/caffe -- -- Configuring done -- Generating done CMake Warning: Manually-specified variables were not used by the project: CUDA_ARCH_BIN -- Build files have been written to: /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build [ 25%] Performing build step for 'openpose_lib' [ 1%] Running C++/Python protocol buffer compiler on /home/mona/research/code/openpose/3rdparty/caffe/src/caffe/proto/caffe.proto Scanning dependencies of target caffeproto [ 1%] Building CXX object src/caffe/CMakeFiles/caffeproto.dir/__/__/include/caffe/proto/caffe.pb.cc.o [ 1%] Linking CXX static library ../../lib/libcaffeproto.a [ 1%] Built target caffeproto [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_bnll_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_accuracy_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_batch_reindex_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_batch_norm_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_bias_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_base_data_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_concat_layer.cu.o [ 5%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_clip_layer.cu.o [ 6%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_absval_layer.cu.o [ 6%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_conv_layer.cu.o [ 6%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_contrastive_loss_layer.cu.o In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! 
| ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /home/mona/research/code/openpose/3rdparty/caffe/src/caffe/util/math_functions.cu:1: /usr/include/math_functions.h:54:2: warning: #warning "math_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead." [-Wcpp] 54 | #warning "math_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead." | ^~~~~~~ CMake Error at cuda_compile_1_generated_clip_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_clip_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:114: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_clip_layer.cu.o] Error 1 make[5]: *** Waiting for unfinished jobs.... 
CMake Error at cuda_compile_1_generated_absval_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_absval_layer.cu.o CMake Error at cuda_compile_1_generated_concat_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_concat_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:65: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_absval_layer.cu.o] Error 1 make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:121: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_concat_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_batch_reindex_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_batch_reindex_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:93: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_batch_reindex_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_bias_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_bias_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:100: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_bias_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_batch_norm_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_batch_norm_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:86: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_batch_norm_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_contrastive_loss_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_contrastive_loss_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:128: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_contrastive_loss_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_conv_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_conv_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:135: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_conv_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_accuracy_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_accuracy_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:72: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_accuracy_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_base_data_layer.cu.o.Release.cmake:220 (message): Error 
generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_base_data_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:79: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_base_data_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_bnll_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_bnll_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:107: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_bnll_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_math_functions.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/util/./cuda_compile_1_generated_math_functions.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:499: src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o] Error 1 make[4]: *** [CMakeFiles/Makefile2:371: src/caffe/CMakeFiles/caffe.dir/all] Error 2 make[3]: *** [Makefile:130: all] Error 2 make[2]: *** [CMakeFiles/openpose_lib.dir/build.make:112: caffe/src/openpose_lib-stamp/openpose_lib-build] Error 2 make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/openpose_lib.dir/all] Error 2 make: *** [Makefile:84: all] Error 2 21834/31772MB(openpose) [10889:10881 0:2010] 09:21:55 Wed Jan 06 [mona@goku:pts/0 +1] ~/research/code/openpose/build $ ``` I have: ``` $ gcc --version gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Copyright (C) 2019 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ``` I am following the compilation instructions here on Ubuntu 20.04: <https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation/README.md#prerequisites>
2021/01/07
[ "https://Stackoverflow.com/questions/65605972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2414957/" ]
Solved by downgrading GCC from 9.3.0 to 7: ``` $ sudo apt remove gcc $ sudo apt-get install gcc-7 g++-7 -y $ sudo ln -s /usr/bin/gcc-7 /usr/bin/gcc $ sudo ln -s /usr/bin/g++-7 /usr/bin/g++ $ sudo ln -s /usr/bin/gcc-7 /usr/bin/cc $ sudo ln -s /usr/bin/g++-7 /usr/bin/c++ $ gcc --version gcc (Ubuntu 7.5.0-6ubuntu2) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ```
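If you'd rather not hand-roll symlinks, `update-alternatives` achieves the same switch and is easier to revert later (this assumes `gcc-7`/`g++-7` are already installed):

```
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 70 --slave /usr/bin/g++ g++ /usr/bin/g++-7
$ sudo update-alternatives --config gcc
```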
You should point CMake at a GCC binary below version 9 via the compiler variables in the cmake command; there is no need to downgrade the system GCC. For example (adding the matching C++ compiler as well): ``` cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_C_COMPILER=/usr/bin/gcc-8 -D CMAKE_CXX_COMPILER=/usr/bin/g++-8 ```
17,556
53,369,766
Following the [Microsoft Azure documentation for Python developers](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.models.blob?view=azure-python). The `azure.storage.blob.models.Blob` class does have a special method called `__sizeof__()`. But it returns a constant value of 16, whether the blob is empty (0 bytes) or 1 GB. Is there any method/attribute of a blob object with which I can dynamically check the size of the object? To be clearer, this is what my source code looks like. ``` for i in blobService.list_blobs(container_name=container, prefix=path): if i.name.endswith('.json') and r'CIJSONTM.json/part' in i.name: #do some stuffs ``` However, the data pool contains many empty blobs with legitimate names, and before I `#do some stuffs`, I want to have an additional check on the size to judge whether I am dealing with an empty blob. Also, a bonus question: what exactly does the `__sizeof__()` method return, if not the size of the blob object?
2018/11/19
[ "https://Stackoverflow.com/questions/53369766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2604247/" ]
> I want to have an additional check on the size to judge whether I am dealing with an empty blob.

We could use the [BlobProperties().content\_length](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.models.blobproperties?view=azure-python) to check whether it is an empty blob. ``` block_blob_service.get_blob_properties(container_name, blob_name).properties.content_length ``` The following demo code shows how to get the blob content\_length: ``` from azure.storage.blob import BlockBlobService block_blob_service = BlockBlobService(account_name='accountName', account_key='accountKey') container_name ='containerName' generator = block_blob_service.list_blobs(container_name) for blob in generator: length = block_blob_service.get_blob_properties(container_name, blob.name).properties.content_length print("\t Blob name: " + blob.name) print(length) ```
With the newer v12 SDK (`azure-storage-blob` 12+), `list_blobs` yields blob items that already carry the size: ``` from azure.storage.blob import BlobServiceClient blob_service_client = BlobServiceClient.from_connection_string(connect_str) blob_list = blob_service_client.get_container_client(my_container).list_blobs() for blob in blob_list: print("\t" + blob.name) print('\tsize=', blob.size) ```
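As for the bonus question: `__sizeof__()` is the hook behind `sys.getsizeof()`; it reports the in-memory footprint of the Python wrapper object itself, in bytes, not the length of the data stored in the blob, which is why it stays constant no matter how large the blob is:

```python
import sys

# 'blob' is any item from list_blobs(...); this measures the Python
# object in memory (via __sizeof__), not the blob's contents in storage.
print(sys.getsizeof(blob))
```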
17,557
39,981,667
I installed Robot Framework RIDE with my user credentials and am trying to access it by logging in as another user on the same machine. When I copy the ride.py file (available in C:/Python27/Scripts) from my user to the other user, I can open RIDE by double-clicking the ride.py file, but when I try to launch ride.py from the command line I cannot open RIDE; it shows the error "ride.py is not recognised as an internal or external command, operable program or batch file". I installed Python for all users and reinstalled everything through pip in C:/Users (it was previously installed in C:/Users/MyUser), but pip only reports "Requirement already satisfied".
2016/10/11
[ "https://Stackoverflow.com/questions/39981667", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5295988/" ]
I'm using the [breadcrumbs\_on\_rails](https://github.com/weppos/breadcrumbs_on_rails) gem with Devise in my project. If you haven't created a User model with Devise, make that first: ``` rails g devise User rake db:migrate rails generate devise:views users ``` My registrations\_controller.rb looks like this: ``` # app/controllers/registrations_controller.rb class RegistrationsController < Devise::RegistrationsController add_breadcrumb "home", :root_path add_breadcrumb "contact", :contacts_path end ``` I changed the routes: ``` devise_for :users, :controllers => { registrations: 'registrations' } ``` In the application.html.erb layout I added breadcrumbs (just above the <%= yield %> ): ``` <%= render_breadcrumbs %> ``` I've just tested it, and it works as you can see from the screenshot. [![enter image description here](https://i.stack.imgur.com/UJI2m.jpg)](https://i.stack.imgur.com/UJI2m.jpg) **EDITED:** In case you want to add breadcrumbs to other pages of the Devise gem, for example the Forgot your password page, you can make a new controller: ``` # app/controllers/passwords_controller.rb class PasswordsController < Devise::PasswordsController add_breadcrumb "home", :root_path add_breadcrumb "contact", :contacts_path end ``` and update your routes: ``` devise_for :users, controllers: { registrations: 'registrations', passwords: 'passwords' } ``` Please let me know if it works for you.
You can generate the devise views with: `rails generate devise:views users` Make sure to replace `users` with whatever your user model name is if it isn't `User` (e.g. `Admin`, `Manager`, etc) You can then add to those views whatever you need to show breadcrumbs.
17,558
14,672,640
I am trying to use the python-twitter api in GAE. I need to import OAuth2 and httplib2. Here is what I did. For OAuth2, I downloaded github.com/simplegeo/python-oauth2/tree/master/oauth2. For HTTPLib2, I downloaded code.google.com/p/httplib2/wiki/Install and extracted folder python2/httplib2 to the project root folder. My views.py: ``` import twitter def index(request): api = twitter.Api(consumer_key='XNAUYmsmono4gs3LP4T6Pw',consumer_secret='xxxxx',access_token_key='xxxxx',access_token_secret='iHzMkC6RRDipon1kYQtE5QOAYa1bVfYMhH7GFmMFjg',cache=None) return render_to_response('fbtwitter/index.html') ``` I got the error [paste.shehas.net/show/jbXyx2MSJrpjt7LR2Ksc](http://paste.shehas.net/show/jbXyx2MSJrpjt7LR2Ksc) ``` AttributeError AttributeError: 'module' object has no attribute 'SignatureMethod_PLAINTEXT' Traceback (most recent call last) File "D:\PythonProj\fbtwitter\kay\lib\werkzeug\wsgi.py", line 471, in __call__ return app(environ, start_response) File "D:\PythonProj\fbtwitter\kay\app.py", line 478, in __call__ response = self.get_response(request) File "D:\PythonProj\fbtwitter\kay\app.py", line 405, in get_response return self.handle_uncaught_exception(request, exc_info) File "D:\PythonProj\fbtwitter\kay\app.py", line 371, in get_response response = view_func(request, **values) File "D:\PythonProj\fbtwitter\fbtwitter\views.py", line 39, in index access_token_secret='iHzMkC6RRDipon1kYQtE5QOAYa1bVfYMhH7GFmMFjg',cache=None) File "D:\PythonProj\fbtwitter\fbtwitter\twitter.py", line 2235, in __init__ self.SetCredentials(consumer_key, consumer_secret, access_token_key, access_token_secret) File "D:\PythonProj\fbtwitter\fbtwitter\twitter.py", line 2264, in SetCredentials self._signature_method_plaintext = oauth.SignatureMethod_PLAINTEXT() AttributeError: 'module' object has no attribute 'SignatureMethod_PLAINTEXT' ``` It seems I did not import OAuth2 correctly, judging from where I tracked the error in twitter.py: ``` self._signature_method_plaintext = oauth.SignatureMethod_PLAINTEXT() ``` I even went to twitter.py and added `import oauth2 as oauth`, but it couldn't solve the problem. Can anybody help?
2013/02/03
[ "https://Stackoverflow.com/questions/14672640", "https://Stackoverflow.com", "https://Stackoverflow.com/users/496837/" ]
``` ‘%A%’; ``` vs. ``` '%A%'; ``` The first has fancy `‘` characters. The usual cause for that is Outlook's AutoCorrect.
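If you need to clean such strings up programmatically, a quick sketch in Python (assuming the query text was pasted in with curly quotes):

```python
query = u"SELECT * FROM t WHERE col LIKE ‘%A%’;"
# Replace curly single quotes (U+2018 / U+2019) with straight ASCII quotes.
clean = query.replace(u"\u2018", "'").replace(u"\u2019", "'")
```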
The problem with the first is the single quote. SQL doesn't accept that kind of quote. I can't find that one on my keyboard; maybe you copied the query from somewhere.
17,559
53,241,645
In Python 3.6, I can use the `__set_name__` hook to get the class attribute name of a descriptor. How can I achieve this in python 2.x? This is the code which works fine in Python 3.6: ``` class IntField: def __get__(self, instance, owner): if instance is None: return self return instance.__dict__[self.name] def __set__(self, instance, value): if not isinstance(value, int): raise ValueError('expecting integer') instance.__dict__[self.name] = value def __set_name__(self, owner, name): self.name = name class Example: a = IntField() ```
2018/11/10
[ "https://Stackoverflow.com/questions/53241645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5766927/" ]
You may be looking for metaclasses; with them you can process the class attributes at class creation time. ``` class FooDescriptor(object): def __get__(self, obj, objtype): print('calling getter') class FooMeta(type): def __init__(cls, name, bases, attrs): for k, v in attrs.iteritems(): if issubclass(type(v), FooDescriptor): print('FooMeta.__init__, attribute name is "{}"'.format(k)) class Foo(object): __metaclass__ = FooMeta foo = FooDescriptor() f = Foo() f.foo ``` Output: ``` FooMeta.__init__, attribute name is "foo" calling getter ``` If you need to change the class before it is created, you need to override `__new__` instead of `__init__` in your metaclass. See this answer for more information on this topic: [Is there any reason to choose \_\_new\_\_ over \_\_init\_\_ when defining a metaclass?](https://stackoverflow.com/questions/1840421/is-there-any-reason-to-choose-new-over-init-when-defining-a-metaclass)
There are various solutions with different degrees of hackishness. I always liked to use a class decorator for this. ``` class IntField(object): def __get__(self, instance, owner): if instance is None: return self return instance.__dict__[self.name] def __set__(self, instance, value): if not isinstance(value, int): raise ValueError('expecting integer') instance.__dict__[self.name] = value def with_intfields(*names): def with_concrete_intfields(cls): for name in names: field = IntField() field.name = name setattr(cls, name, field) return cls return with_concrete_intfields ``` You can use it like this: ``` @with_intfields('a', 'b') class Example(object): pass e = Example() ``` Demo: ``` $ python2.7 -i clsdec.py >>> [x for x in vars(Example) if not x.startswith('_')] ['a', 'b'] >>> Example.a.name 'a' >>> e.a = 3 >>> e.b = 'test' [...] ValueError: expecting integer ``` Make sure to explicitly subclass from `object` in Python 2.7, that got me tripped up when I drafted the first version of this answer.
17,561
41,595,720
I am about to upgrade from Django 1.9 to 1.10 and would like to test if I have some deprecated functionality. However using ``` python -Wall manage.py test ``` will show tons and tons of warnings for Django 2.0. Is there a way to suppress warnings only for 2.0 or show only warnings for 1.10?
2017/01/11
[ "https://Stackoverflow.com/questions/41595720", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5047630/" ]
**Solution 1 - Using groups** ``` Private Sub Workbook_Open() With Sheet1 Dim i As Long, varLast As Long .Cells.ClearOutline varLast = .Cells(.Rows.Count, "A").End(xlUp).Row .Columns("A:A").Insert Shift:=xlToRight 'helper column For i = 1 To varLast .Range("A" & i) = .Range("B" & i).IndentLevel Next Dim rngRows As Range, rngFirst As Range, rngLast As Range, rngCell As Range, rowOffset As Long Set rngFirst = Range("A1") Set rngLast = rngFirst.End(xlDown) Set rngRows = Range(rngFirst, rngLast) For Each rngCell In rngRows rowOffset = 1 Do While rngCell.Offset(rowOffset) > rngCell And rngCell.Offset(rowOffset).Row <= rngLast.Row rowOffset = rowOffset + 1 Loop If rowOffset > 1 Then Range(rngCell.Offset(1), rngCell.Offset(rowOffset - 1)).EntireRow.Group End If Next .Columns("A:A").EntireColumn.Delete End With End Sub ``` [![enter image description here](https://i.stack.imgur.com/9qHIz.jpg)](https://i.stack.imgur.com/9qHIz.jpg) **Solution 2 - In case you don't want to modify the workbook data - workaround** Step 1 - Create a `UserForm` and add `TreeView` Control [![enter image description here](https://i.stack.imgur.com/IqPij.png)](https://i.stack.imgur.com/IqPij.png) Step 2 - Add the following code in the `UserForm` code ``` Private Sub UserForm_Initialize() With Me.TreeView1 .Style = tvwTreelinesPlusMinusText .LineStyle = tvwRootLines End With Call func_GroupData End Sub Private Sub func_GroupData() varRows = CLng(Sheet1.Cells(Sheet1.Rows.Count, "A").End(xlUp).Row) With Me.TreeView1.Nodes .Clear For i = 1 To varRows nodeTxt = Sheet1.Range("A" & i) nodeOrd = Sheet1.Range("A" & i).IndentLevel nodeTxt = Trim(nodeTxt) nodeAmt = Trim(CStr(Format(Sheet1.Range("B" & i), "###,###,###,##0.00"))) Select Case nodeOrd Case 0 'Level 0 - Root node nodeTxt = nodeTxt & Space(80 - Len(nodeTxt & nodeAmt)) & nodeAmt .Add Key:="Node" & i, Text:=Trim(nodeTxt) nodePar1 = "Node" & i Case 1 'Level 1 node nodeTxt = nodeTxt & Space(80 - Len(nodeTxt & nodeAmt)) & nodeAmt .Add Relative:=nodePar1, Relationship:=tvwChild, Key:="Node" & i, Text:=Trim(nodeTxt) nodePar2 = "Node" & i Case 2 'Level 2 node nodeTxt = nodeTxt & Space(80 - Len(nodeTxt & nodeAmt)) & nodeAmt .Add Relative:=nodePar2, Relationship:=tvwChild, Key:="Node" & i, Text:=Trim(nodeTxt) nodePar3 = "Node" & i End Select Next End With End Sub ``` Step 3 - Add the following code in `ThisWorkbook` to show the treeview ``` Private Sub Workbook_Open() UserForm1.Show vbModeless End Sub ``` The result [![enter image description here](https://i.stack.imgur.com/4ucdX.png)](https://i.stack.imgur.com/4ucdX.png)
One possibility would be to add a button to each cell and to hide its children rows on *collapse* and display its children rows on *expand*. Each `Excel.Button` executes one common method `TreeNodeClick` where the `Click` method is called on the corresponding instance of `TreeNode`. The child rows are hidden or displayed based on the actual caption of the button. At the beginning, the source data range needs to be selected when the method `Main` is executed. The problem is that the collection of Tree-Nodes needs to be filled each time the sheet is opened, so the method `Main` needs to be executed when the sheet is opened, otherwise it won't work. --- *Standard Module Code:* ``` Option Explicit Public treeNodes As VBA.Collection Sub Main() Dim b As TreeBuilder Set b = New TreeBuilder Set treeNodes = New VBA.Collection ActiveSheet.Buttons.Delete b.Build Selection, treeNodes End Sub Public Sub TreeNodeClick() Dim caller As String caller = Application.caller Dim treeNode As treeNode Set treeNode = treeNodes(caller) If Not treeNode Is Nothing Then treeNode.Click End If End Sub ``` --- *Class Module TreeNode:* ``` Option Explicit Private m_button As Excel.Button Private m_children As Collection Private m_parent As treeNode Private m_range As Range Private Const Collapsed As String = "+" Private Const Expanded As String = "-" Private m_indentLevel As Integer Public Sub Create(ByVal rng As Range, ByVal parent As treeNode) On Error GoTo ErrCreate Set m_range = rng m_range.EntireRow.RowHeight = 25 m_indentLevel = m_range.IndentLevel Set m_parent = parent If Not m_parent Is Nothing Then _ m_parent.AddChild Me Set m_button = rng.parent.Buttons.Add(rng.Left + 3 + 19 * m_indentLevel, rng.Top + 3, 19, 19) With m_button .Caption = Expanded .Name = m_range.Address .OnAction = "TreeNodeClick" .Placement = xlMoveAndSize .PrintObject = False End With With m_range .VerticalAlignment = xlCenter .Value = Strings.Trim(.Value) .Value = Strings.String((m_indentLevel + 11) + m_indentLevel * 5, " ") & .Value End With Exit Sub ErrCreate: MsgBox Err.Description, vbCritical, "TreeNode::Create" End Sub Public Sub Collapse(ByVal hide As Boolean) If hide Then m_range.EntireRow.Hidden = True End If m_button.Caption = Collapsed Dim ch As treeNode For Each ch In m_children ch.Collapse True Next End Sub Public Sub Expand(ByVal unhide As Boolean) If unhide Then m_range.EntireRow.Hidden = False End If m_button.Caption = Expanded Dim ch As treeNode For Each ch In m_children ch.Expand True Next End Sub Public Sub AddChild(ByVal child As treeNode) m_children.Add child End Sub Private Sub Class_Initialize() Set m_children = New VBA.Collection End Sub Public Sub Click() If m_button.Caption = Collapsed Then Expand False Else Collapse False End If End Sub Public Property Get IndentLevel() As Integer IndentLevel = m_indentLevel End Property Public Property Get Cell() As Range Set Cell = m_range End Property ``` --- *Class Module TreeBuilder:* ``` Option Explicit Public Sub Build(ByVal source As Range, ByVal treeNodes As VBA.Collection) Dim currCell As Range Dim newNode As treeNode Dim parentNode As treeNode For Each currCell In source.Columns(1).Cells Set parentNode = FindParent(currCell, source, treeNodes) Set newNode = New treeNode newNode.Create currCell, parentNode treeNodes.Add newNode, currCell.Address Next currCell End Sub Private Function FindParent(ByVal currCell As Range, ByVal source As Range, ByVal treeNodes As VBA.Collection) As treeNode If currCell.IndentLevel = 0 Then Exit Function End If Dim c As Range Dim r As Integer Set c = 
currCell For r = currCell.Row - 1 To source.Rows(1).Row Step -1 Set c = c.offset(-1, 0) If c.IndentLevel = currCell.IndentLevel - 1 Then Set FindParent = treeNodes(c.Address) Exit Function End If Next r End Function ``` --- *Result:* [![enter image description here](https://i.stack.imgur.com/S0pJd.jpg)](https://i.stack.imgur.com/S0pJd.jpg)
17,562
39,469,409
I've just created a Django project and ran the server. It works fine but showed me warnings like ``` You have 14 unapplied migration(s)... ``` Then I ran ``` python manage.py migrate ``` in the terminal. It worked but showed me this ``` ?: (1_7.W001) MIDDLEWARE_CLASSES is not set. HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES. django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting. ``` And now I have this warning after starting my server: ``` You have 3 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth. ``` So how do I migrate correctly to get rid of this warning? I am using PyCharm, and I tried creating the project via both PyCharm and the terminal with the same issue. ``` ~$ python3.5 --version Python 3.5.2 >>> django.VERSION (1, 10, 1, 'final', 1) ```
2016/09/13
[ "https://Stackoverflow.com/questions/39469409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4727702/" ]
So my problem was that I used the wrong Python version for the migration. ``` python3.5 manage.py migrate ``` solves the problem.
You are probably using the wrong Django version. You need `django 1.10`.
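A quick way to confirm which Django version each interpreter actually sees (and so whether `python` and `python3.5` are picking up different installs):

```
$ python -c "import django; print(django.get_version())"
$ python3.5 -c "import django; print(django.get_version())"
```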
17,563
44,916,289
When I try to install a package for Python, the setup.py has the following lines: ``` import os, sys, platform from distutils.core import setup, Extension import subprocess from numpy import get_include from Cython.Distutils import build_ext from Cython.Build import cythonize from Cython.Compiler.Options import get_directive_defaults ``` and I tried to run `python setup.py install` in the terminal, but I received the following error: ```none Traceback (most recent call last): File "setup.py", line 9, in <module> from Cython.Compiler.Options import get_directive_defaults ImportError: cannot import name 'get_directive_defaults' ``` I would really appreciate it if you could let me know how to fix this.
2017/07/05
[ "https://Stackoverflow.com/questions/44916289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8256442/" ]
Your `package.json` is missing `should` as a dependency. Install it via: `npm install --save-dev should` Also, I would recommend you look into [chai](http://chaijs.com/api/bdd/), which in my opinion provides a slightly different API.
**should** is an expressive, readable, framework-agnostic assertion library. The main goals of this library are to be expressive and to be helpful: it keeps your test code clean and your error messages helpful. By default (when you `require('should')`), it extends `Object.prototype` with a single non-enumerable getter that allows you to express how an object should behave; it also returns itself when required. It is also possible to use should.js without the getter (it will not even try to extend `Object.prototype`) by using `require('should/as-function')`; or, if you already use the version that auto-adds the getter, you can call the `.noConflict` function. In most situations, the results of the `(something).should` getter and of `should(something)` are the same. You'd better install the `should` dependency with npm, as below: ``` npm install --save should ``` [should-reference](https://www.npmjs.com/package/should)
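A minimal usage sketch once it's installed (the assertion style here follows should's own documentation):

```js
require('should');

(5).should.be.exactly(5).and.be.a.Number();
'hello'.should.be.a.String();
```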
17,564
23,421,031
What I put in Python: ``` phoneNumber = input("Enter your Phone Number: ") print("Your number is", str(phoneNumber)) ``` What I get if I put 021999888: ``` Enter your Phone Number: 021999888 Traceback (most recent call last): File "None", line 1, in <module> invalid token: <string>, line 1, pos 9 ``` What I get if I put 21: ``` Enter your Phone Number: 21 Your Number is 21 ``` What I get if I put 02: ``` Enter your Phone Number: 02 Your Number is 2 ``` What I get if I put 021: ``` Enter your Phone Number: 021 Your Number is 17 ``` What I get if I put 09: ``` Enter your Phone Number: 09 Traceback (most recent call last): File "None", line 1, in <module> invalid token: <string>, line 1, pos 2 ``` Any ideas what's wrong?
2014/05/02
[ "https://Stackoverflow.com/questions/23421031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3595018/" ]
If you have a `0` before an integer literal, Python 2 treats it as octal. In that case, any digit greater than 7 will result in an error. I think you should consider storing the phone number as a string, so use `raw_input()` instead. This will also keep the leading 0's.
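A short interpreter session illustrating the behaviour in Python 2:

```
>>> 021        # a leading 0 marks the literal as octal: 2*8 + 1
17
>>> 09         # 9 is not a valid octal digit
  File "<stdin>", line 1
    09
     ^
SyntaxError: invalid token
```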
@perreal is right. You should use `raw_input` instead: ``` >>> phoneNumber = raw_input("Enter your Phone Number: ") Enter your Phone Number: 091234123 >>> print("Your number is " + phoneNumber) Your number is 091234123 ```
17,565
67,360,917
I would like to do a groupby on my data to put together dates that are close (less than 2 minutes apart). Here is an example of what I get: ``` datas = [['A', 51, 'id1', '2020-05-27 05:50:43.346'], ['A', 51, 'id2', '2020-05-27 05:51:08.347'], ['B', 45, 'id3', '2020-05-24 17:23:55.142'], ['B', 45, 'id4', '2020-05-24 17:23:30.141'], ['C', 34, 'id5', '2020-05-23 17:31:10.341']] df = pd.DataFrame(datas, columns = ['col1', 'col2', 'cold_id', 'dates']) ``` The first 2 rows have close dates, same for the 3rd and 4th rows; the 5th row is alone. I would like to get something like this: ``` datas = [['A', 51, 'id1 id2', 'date_1'], ['B', 45, 'id3 id4', 'date_2'], ['C', 34, 'id5', 'date_3']] df = pd.DataFrame(datas, columns = ['col1', 'col2', 'col_id', 'dates']) ``` Doing it in a pythonic way is not that hard, but I have to run it on a big dataframe, so a pandas approach using the groupby method would be much more efficient. After applying a datetime conversion to the dates column, I tried: ``` df.groupby([df['dates'].dt.date]).agg(','.join) ``` but the `.dt.date` accessor groups by calendar day, not by 2-minute windows. Do you have a solution? Thank you.
2021/05/02
[ "https://Stackoverflow.com/questions/67360917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15817735/" ]
A compiler is allowed to choose whether `char` is signed or unsigned. The standard says it has to pick one, but doesn't mandate which. GCC supports `-fsigned-char` and `-funsigned-char` to force this behavior.
The shown output is consistent with `char` being an unsigned data type on the platform in question. The C++ standard allows `char` to be equivalent to either `unsigned char` or `signed char`. If you wish a specific behavior you can explicitly use a cast to `signed char` in your code.
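A small illustration of the difference; how plain `char` behaves here is platform-dependent:

```cpp
#include <iostream>

int main() {
    char c = '\xFF';
    // As signed char this bit pattern reads as -1; as unsigned char, 255.
    std::cout << static_cast<int>(static_cast<signed char>(c)) << '\n';
    std::cout << static_cast<int>(static_cast<unsigned char>(c)) << '\n';
}
```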
17,568
69,046,120
It shows that tables are successfully created when I do `heroku run -a "app-name" python manage.py migrate` ``` Running python manage.py migrate on ⬢ app_name... up, run.0000 (Free) System check identified some issues: ... Operations to perform: Apply all migrations: admin, auth, blog, contenttypes, home, sessions, taggit, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailusers Running migrations: Applying contenttypes.0001_initial... OK Applying auth.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK ... ``` But when I create a superuser, it tells me that there is no table. Any suggestions? I've been stuck on this for 3 days now, so I will be grateful for any help. P.S. I use Heroku PostgreSQL hobby-dev. P.P.S. ``` File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute return self.cursor.execute(sql, params) File "/app/.heroku/python/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute return self.cursor.execute(sql, params) File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute return Database.Cursor.execute(self, query, params) django.db.utils.OperationalError: no such table: auth_user ``` Base settings.py <https://pastebin.com/DLh3KrK7> My production configuration (`settings.py`) ```py from .base import * import dj_database_url import environ DEBUG = False try: from .local import * except ImportError: pass environ.Env.read_env() env = environ.Env() DATABASES = { 'default': env.db() } ```
2021/09/03
[ "https://Stackoverflow.com/questions/69046120", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11235791/" ]
Re-check your database configuration. The error trace shows that it's using sqlite as the database backend, instead of Postgres as expected: ``` File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute ``` This is then failing because the sqlite database is stored on the filesystem, and filesystems on Heroku are not persistent across commands - so the database you created in the `migrate` step no longer exists when you run `createsuperuser`.
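A common pattern, sketched here with `dj-database-url` (which the question's settings already import), is to build `DATABASES` from the `DATABASE_URL` environment variable so the Heroku Postgres add-on is picked up instead of a sqlite fallback:

```python
import dj_database_url

# Reads the DATABASE_URL environment variable that Heroku sets for
# the Postgres add-on; conn_max_age enables persistent connections.
DATABASES = {
    'default': dj_database_url.config(conn_max_age=600)
}
```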
Please run these commands: ``` python manage.py syncdb python manage.py migrate python manage.py createsuperuser ``` Please make sure that you have ``` 'django.contrib.auth' ``` in your installed apps. Tell me if you still get the same error, and then please add your settings.py.
17,571
41,875,358
I'm following this guide <https://developers.google.com/sheets/api/quickstart/python>. Upon running the sample code they provided (the only things I changed were the location of the API secret, since we already had one set up, and the APPLICATION\_NAME), I get this error: ``` AttributeError: 'module' object has no attribute 'DEFAULT_MAX_REDIRECTS' ``` Log before the error: ``` File "generate_report.py", line 2, in <module> import httplib2 File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 42, in <module> import calendar File "/Users/HarshaGoli/Git/PantherBot/scripts/calendar.py", line 1, in <module> from oauth2client.service_account import ServiceAccountCredentials File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/oauth2client/service_account.py", line 25, in <module> from oauth2client import client File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/oauth2client/client.py", line 39, in <module> from oauth2client import transport File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/oauth2client/transport.py", line 255, in <module> redirections=httplib2.DEFAULT_MAX_REDIRECTS, ```
2017/01/26
[ "https://Stackoverflow.com/questions/41875358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5838056/" ]
I got the same error and investigated the problem. In my case, it was caused by a file named "calendar.py" in the same directory. You should avoid giving your own modules the same names as standard Python library modules.
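A quick, hedged way to check whether a local file is shadowing the standard-library module:

```python
import calendar

# If this prints a path inside your project instead of the Python
# installation, a local calendar.py is shadowing the stdlib module.
print(calendar.__file__)
```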
It may be a versioning problem. It could be the `python3` version of `httplib2` that causes trouble; try following the answer in this [post](https://stackoverflow.com/questions/48941042/google-cloud-function-attributeerror-module-object-has-no-attribute-defaul/49970238#49970238).
17,572
33,309,904
On my local environment, with Python 2.7.10, my Django project seems to run perfectly well using ./manage.py runserver. But when I tried to deploy the project to my Debian Wheezy server using the same version of Python 2.7.10, it encountered a 500 internal server error. Upon checking my apache log, I found the error to be alternating between these two: ``` [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Target WSGI script '/var/www/proj/proj/proj_wsgi.py' cannot be loaded as Python module. [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Exception occurred processing WSGI script '/var/www/proj/proj/proj_wsgi.py'. [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] Traceback (most recent call last): [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/proj/proj/proj_wsgi.py", line 21, in <module> [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] application = get_wsgi_application() [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] django.setup() [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/__init__.py", line 18, in setup [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] apps.populate(settings.INSTALLED_APPS) [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/registry.py", line 78, in populate [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] raise RuntimeError("populate() isn't reentrant") [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] RuntimeError: populate() isn't reentrant ``` AND this one: ``` [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Target WSGI script '/var/www/proj/proj/proj_wsgi.py' cannot be loaded as Python module. [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Exception occurred processing WSGI script '/var/www/proj/proj/proj_wsgi.py'. 
[Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] Traceback (most recent call last): [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/proj/proj/proj_wsgi.py", line 21, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] application = get_wsgi_application() [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] django.setup() [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/__init__.py", line 18, in setup [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] apps.populate(settings.INSTALLED_APPS) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] app_config.import_models(all_models) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/config.py", line 198, in import_models [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] self.models_module = import_module(models_module_name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] __import__(name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/contrib/auth/models.py", line 41, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] class Permission(models.Model): [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/models/base.py", line 139, in __new__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] new_class.add_to_class('_meta', Options(meta, **kwargs)) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/models/base.py", line 324, in add_to_class [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] value.contribute_to_class(cls, name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/models/options.py", line 250, in contribute_to_class [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/__init__.py", line 36, in __getattr__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] return getattr(connections[DEFAULT_DB_ALIAS], item) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/utils.py", line 240, in __getitem__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] backend = load_backend(db['ENGINE']) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/utils.py", line 111, in load_backend [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] return import_module('%s.base' % backend_name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] __import__(name) 
[Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 24, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] ImproperlyConfigured: Error loading psycopg2 module: /var/www/ven/lib/python2.7/site-packages/psycopg2/_psycopg.so: undefined symbol: PyUnicodeUCS2_AsUTF8String ``` I have tried many solutions, such as all these links below via Google, but still to no avail. [Django stops working with RuntimeError: populate() isn't reentrant](https://stackoverflow.com/questions/27093746/django-stops-working-with-runtimeerror-populate-isnt-reentrant) [Django populate() isn't reentrant](https://stackoverflow.com/questions/30954398/django-populate-isnt-reentrant) I tried moving to Python 2.7.3 and the Django project managed to work, but I need some encoding features in pickle contained in the 2.7.10 version, so I need to use that. I have even tried reinstalling a brand new Django 1.8.5 project from scratch on Python 2.7.10, but it did not work, giving out the same errors. My proj\_wsgi.py is: ``` import os import sys import site from django.core.wsgi import get_wsgi_application # Add the site-packages of the chosen virtualenv to work with site.addsitedir('/var/www/ven/lib/python2.7/site-packages') # Add the app's directory to the PYTHONPATH sys.path.append('/var/www/proj') sys.path.append('/var/www/proj/proj') # Activate your virtual env activate_env=os.path.expanduser('/var/www/ven/bin/activate_this.py') execfile(activate_env, dict(__file__=activate_env)) os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj.settings") application = get_wsgi_application() ``` My virtual host conf in apache in /etc/apache2/sites-enabled/000-default is ``` <VirtualHost *:80> ServerName 128.133.218.444 ServerAdmin webmaster@localhost ServerAlias 128.133.218.444 WSGIDaemonProcess 128.133.218.444 python-path="/var/www/proj:/var/www/ven/lib/python2.7/site-packages" WSGIProcessGroup 128.199.218.180 WSGIScriptAlias / /var/www/proj/proj/proj_wsgi.py process-group=128.199.218.180 WSGIPassAuthorization On DocumentRoot /var/www/proj #<Directory /> # Options FollowSymLinks # AllowOverride None #</Directory> #<Directory /var/www/> # Options Indexes FollowSymLinks MultiViews # AllowOverride None # Order allow,deny # allow from all #</Directory> <Directory /var/www/proj> Order allow,deny Allow from all </Directory> <Directory /var/www/proj/proj/static> Order deny,allow Allow from all </Directory> <Directory /var/www/proj/proj/media> Order deny,allow Allow from all </Directory> <Directory /var/www/proj/proj> <Files wsgi.py> Order allow,deny allow from all </Files> </Directory> #ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ #<Directory "/usr/lib/cgi-bin"> # AllowOverride None # Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch # Order allow,deny # Allow from all #</Directory> ErrorLog ${APACHE_LOG_DIR}/error.log #ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> ``` I have been trying to solve this issue for a couple of days, so any help will be highly appreciated. Thank you!
2015/10/23
[ "https://Stackoverflow.com/questions/33309904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2970242/" ]
Writing the solution in a separate answer for readability: ``` for i in [i for i, x in enumerate(hanksArray) if x == hanksYear]: print(hanksArray[i-1]) print(hanksArray[i]) print(hanksArray[i+1]) ```
A quick solution for you would be: ``` for i in [i for i, x in enumerate(hanksArray) if x == hanksYear]: print("\n".join(hanksArray[i-1:i+2])) ``` There are numerous other problems with your code anyway.
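One caveat with both snippets: when the match is the first element, `i - 1` is `-1`, which Python silently wraps around to the end of the list, so you may want to clamp the lower bound:

```python
for i in [i for i, x in enumerate(hanksArray) if x == hanksYear]:
    start = max(i - 1, 0)   # avoid wrapping to the end of the list when i == 0
    print("\n".join(hanksArray[start:i + 2]))
```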
17,573
39,091,551
I am planning on making a game with pygame using gpio buttons. Here is the code: ``` from gpiozero import Button import pygame from time import sleep from sys import exit up = Button(2) left = Button(3) right = Button(4) down = Button(14) fps = pygame.time.Clock() pygame.init() surface = pygame.display.set_mode((1300, 700)) x = 50 y = 50 while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: break if up.is_pressed: y -= 5 if down.is_pressed: y += 5 if left.is_pressed: x -= 5 if right.is_pressed: x += 5 surface.fill((0, 0, 0)) pygame.draw.circle(surface, (255, 255, 255), (x, y), 20, 0) pygame.display.update() fps.tick(30) ``` However, when I press on the X button on the top of the window, it doesn't close. Is there a possible solution for this? **EDIT:** Everyone is giving the same answer, that I am not adding a for loop to check events and quit. I did add that; here it is in my code: ``` while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: break ``` I have also tried `sys.exit()`. **EDIT 2**: @Shahrukhkhan asked me to put a print statement inside the `for event in pygame.event.get():` loop, which made the loop look like this: ``` while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: print "X pressed" break ``` and the output when pressing X was: ``` root@raspberrypi:~/Desktop# python game.py X pressed X pressed ```
2016/08/23
[ "https://Stackoverflow.com/questions/39091551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2945954/" ]
There are two possible ways to close the pygame window.

1. After the end of the while loop, simply write:

```
import sys
while 1:
    .......
pygame.quit()
sys.exit()
```
2. Instead of putting a break statement, replace the break in the for loop immediately after the while with:

```
while 1: 
    for event in pygame.event.get():
        if event.type == pygame.QUIT: 
            pygame.quit()
            sys.exit()
    ......
```
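For reference, here is a minimal self-contained sketch of the second way above (the window size is illustrative):

```
import sys
import pygame

pygame.init()
surface = pygame.display.set_mode((300, 200))

while True:
    for event in pygame.event.get():
        # pygame posts a QUIT event when the window's X button is clicked
        if event.type == pygame.QUIT:
            pygame.quit()  # shut down all pygame modules
            sys.exit()     # leave the script entirely
    pygame.display.update()
```
Note that `sys.exit()` is what actually ends the script; a bare `break` only leaves the inner `for` loop, which is why the original window never closed.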
You need to handle the event and quit pygame within it:

```
import sys

for event in pygame.event.get():
    if event.type == pygame.QUIT:  # pygame.QUIT; bare QUIT needs `from pygame.locals import *`
        pygame.quit()
        sys.exit()
```
17,575
14,086,830
I'm punching way above my weight here, but please bear with this Python amateur. I'm a PHP developer by trade and I've hardly touched this language before.

What I'm trying to do is call a method in a class...sounds simple enough? I'm utterly baffled about what 'self' refers to, and what the correct procedure is to call such a method inside a class and outside a class.

Could someone *explain* to me how to call the `move` method with the variable `RIGHT`. I've tried researching this on several 'learn python' sites and searches on StackOverflow, but to no avail. Any help will be appreciated.

The following class works in Scott's Python script, which is accessed by a terminal GUI (urwid). The function I'm working with is from Scott Weston's missile launcher Python script, which I'm trying to hook into a PHP web-server.

```
class MissileDevice:
  INITA     = (85, 83, 66, 67,  0,  0,  4,  0)
  INITB     = (85, 83, 66, 67,  0, 64,  2,  0)
  CMDFILL   = ( 8,  8,
                0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
                0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
                0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0)
  STOP      = ( 0,  0,  0,  0,  0,  0)
  LEFT      = ( 0,  1,  0,  0,  0,  0)
  RIGHT     = ( 0,  0,  1,  0,  0,  0)
  UP        = ( 0,  0,  0,  1,  0,  0)
  DOWN      = ( 0,  0,  0,  0,  1,  0)
  LEFTUP    = ( 0,  1,  0,  1,  0,  0)
  RIGHTUP   = ( 0,  0,  1,  1,  0,  0)
  LEFTDOWN  = ( 0,  1,  0,  0,  1,  0)
  RIGHTDOWN = ( 0,  0,  1,  0,  1,  0)
  FIRE      = ( 0,  0,  0,  0,  0,  1)

  def __init__(self, battery):
    try:
      self.dev=UsbDevice(0x1130, 0x0202, battery)
      self.dev.open()
      self.dev.handle.reset()
    except NoMissilesError, e:
      raise NoMissilesError()

  def move(self, direction):
    self.dev.handle.controlMsg(0x21, 0x09, self.INITA, 0x02, 0x01)
    self.dev.handle.controlMsg(0x21, 0x09, self.INITB, 0x02, 0x01)
    self.dev.handle.controlMsg(0x21, 0x09, direction+self.CMDFILL, 0x02, 0x01)
```
2012/12/29
[ "https://Stackoverflow.com/questions/14086830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122776/" ]
The first argument of all methods is usually called `self`. It refers to the instance for which the method is being called. Let's say you have: ``` class A(object): def foo(self): print 'Foo' def bar(self, an_argument): print 'Bar', an_argument ``` Then, doing: ``` a = A() a.foo() #prints 'Foo' a.bar('Arg!') #prints 'Bar Arg!' ``` --- There's nothing special about this being called `self`, you could do the following: ``` class B(object): def foo(self): print 'Foo' def bar(this_object): this_object.foo() ``` Then, doing: ``` b = B() b.bar() # prints 'Foo' ``` --- In your specific case: ``` dangerous_device = MissileDevice(some_battery) dangerous_device.move(dangerous_device.RIGHT) ``` (As suggested in comments `MissileDevice.RIGHT` could be more appropriate here!) You **could** declare all your constants at module level though, so you could do: ``` dangerous_device.move(RIGHT) ``` This, however, is going to depend on how you want your code to be organized!
> 
> Could someone explain to me how to call the move method with the variable RIGHT
> 
> 

```
>>> myMissile = MissileDevice(myBattery) # looks like you need a battery, don't know what that is, you figure it out.
>>> myMissile.move(MissileDevice.RIGHT)
```
If you have programmed in any other language with classes, besides python, this sort of thing

```
class Foo:
    bar = "baz"
```
is probably unfamiliar. In python, the class is a factory for objects, but it is itself an object; and variables defined in its scope are attached to the *class*, not the instances returned by the class. To refer to `bar`, above, you can just call it `Foo.bar`; you can also access class attributes through instances of the class, like `Foo().bar`.

---

> 
> I'm utterly baffled about what 'self' refers to,
> 
> 

```
>>> class Foo:
...     def quux(self):
...         print self
...         print self.bar
...     bar = 'baz'
... 
>>> Foo.quux
<unbound method Foo.quux>
>>> Foo.bar
'baz'
>>> f = Foo()
>>> f.bar
'baz'
>>> f
<__main__.Foo instance at 0x0286A058>
>>> f.quux
<bound method Foo.quux of <__main__.Foo instance at 0x0286A058>>
>>> f.quux()
<__main__.Foo instance at 0x0286A058>
baz
>>> 
```
When you access an attribute on a Python object, the interpreter will notice, when the looked-up attribute is on the class and is a function, that it should return a "bound" method instead of the function itself. All this does is arrange for the instance to be passed as the first argument.
17,576
74,663,591
I'm trying to remake Tic-Tac-Toe in Python, but it won't work. I tried:

```
game_board = ['_'] * 9
print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2])
print(game_board[3]) + ' | ' + (game_board[4]) + ' | ' + (game_board[5])
print(game_board[6]) + ' | ' + (game_board[7]) + ' | ' + (game_board[8])
```
but it returns:

```
Traceback (most recent call last):
  File "C:\Users\username\PycharmProjects\pythonProject\tutorial.py", line 2, in <module>
    print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2])
    ~~~~~~~~~~~~~~~~~~~~~^~~~~~~
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
2022/12/03
[ "https://Stackoverflow.com/questions/74663591", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20671383/" ]
```
function put() {
  var num0 = document.getElementById("text");
  var num1 = Number(num0.value);  // parse the first input before adding
  var num4 = document.getElementById("text2");
  var num2 = Number(num4.value);  // parse the second input too
  var res = num1 + num2;
  document.getElementById("myp").innerHTML = res;
}
```
You can use the `+` operator, like that: ``` var num1 = +num0.value; ... var num2 = +num4.value; ``` and this will turn your string number into a *floating* point number ```html <input type="text" id="text" placeholder="Number 1" /> <input type="text" id="text2" placeholder="Number 2" /> <button type="submit" id="submit" onclick="put()">Click Me</button> <p id="myp"></p> <script> function put() { var num0 = document.getElementById("text"); var num1 = +num0.value; var num4 = document.getElementById("text2"); var num2 = +num4.value; var sub = document.getElementById("submit"); var res = num1 + num2; document.getElementById("myp").innerHTML = res; } </script> ```
17,579
43,708,668
I have simplified Python code looking like the following:

```
a = 100
x = 0
for i in range(0, a):
    x = x + i / float(a)
```
Is there a way to access the maximum number of iterations inside a `for` loop? Basically the code would change to:

```
x = 0
for i in range(0, 100):
    x = x + i / float(thisloopsmaxcount)
```
where `thisloopsmaxcount` is some fancy Python method. Another option would be to implement a whole class for this behaviour.
2017/04/30
[ "https://Stackoverflow.com/questions/43708668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6786718/" ]
Yeah, you can (in Python 3, where `range` objects expose `.start`, `.stop` and `.step`):

```
a = 100
x = 0
r = range(0,a)
for i in r:
    x = x + i / r.stop
```
but if the range isn't counting 1,2,3... then the `stop` won't be the number of steps, e.g. `range(10,12)` doesn't have 12 steps, it has 2 steps. And `range(0,100,10)` counts in tens, so it doesn't have 100 steps. So you need to take into account `(.stop - .start) / .step` as appropriate.

And it only works for `range`; in general a `for` loop could be reading from a network, or something based on user input, where the only way to know when the loop stops, and how many iterations there were, is when it actually gets to the end.
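To make the step bookkeeping concrete, here is a small sketch (Python 3, where `range` objects know their own length):

```
r = range(0, 100, 10)
# len(r) already accounts for start, stop and step
print(len(r))                                      # 10
# equivalent manual formula for positive steps
print((r.stop - r.start + r.step - 1) // r.step)   # 10
```
So dividing by `len(r)` instead of `r.stop` handles non-default starts and steps correctly.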
There's nothing built-in, but you can easily compute it yourself: ``` x = 0 myrange = range(0, 100) thisloopsmaxcount = sum(1 for _ in myrange) for i in myrange: x = x + i / float(thisloopsmaxcount) ```
17,581
42,212,502
I have a list of strings, for example:

```
py
python
co
comp
computer
```
I simply want to get the string that contains the largest possible number of prefixes. The result should be 'computer' because its prefixes are 'co' and 'comp' (2 prefixes).

I have this code (wordlist is a dictionary):

```
for i in wordlist:
    word = str(i)
    for j in wordlist:
        if word.startswith(j):
            wordlist[i] += 1
result = max(wordlist, key=wordlist.get)
```
Is there any better, faster way to do that?
2017/02/13
[ "https://Stackoverflow.com/questions/42212502", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7396899/" ]
The data structure you are looking for is called a [trie](https://en.wikipedia.org/wiki/Trie). The Wikipedia article about this kind of search tree is certainly worth reading. The key property of the trie that comes in handy here is this: > > All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. > > > The code could look as follows: ``` words = """py python co comp computer""".split() def make_trie(ws): """Build trie from word list `ws`.""" r = {} # trie root for w in ws: d = r for c in w: d = d.setdefault(c, {}) # get c, set to {} if missing d['$'] = '$' # end marker return r def num_pref(t, ws): """Use trie `t` to find word with max num of prefixes in `ws`.""" b, m = -1, '' # max prefixes, corresp. word for w in ws: d, p = t, 1 for c in w: if '$' in d: p += 1 d = d[c] # navigate down one level if p > b: b, m = p, w return b, m t = make_trie(words) print(num_pref(t, words)) ``` `make_trie` builds the trie, `num_pref` uses it to determine the word with maximum number of prefixes. It prints `(3, 'computer')`. Obviously, the two methods could be combined. I kept them separate to make the process of building a trie more clear.
For a large number of words, you could build a [trie](https://en.wikipedia.org/wiki/Trie). You could then iterate over all the leaves and count the number of nodes (terminal nodes) with a value between the root and the leaf. With n words, this should require `O(n)` steps compared to your `O(n**2)` solution. This [package](https://github.com/google/pygtrie) looks good, and here's a related [thread](https://stackoverflow.com/questions/11015320/how-to-create-a-trie-in-python).
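A short sketch of that idea using the linked pygtrie package (assuming `pip install pygtrie`; the `CharTrie` and `prefixes()` calls are from its documentation):

```
import pygtrie

words = ["py", "python", "co", "comp", "computer"]
trie = pygtrie.CharTrie()
for w in words:
    trie[w] = True

def prefix_count(w):
    # prefixes() yields every stored key that is a prefix of w,
    # including w itself when present, hence the -1
    return sum(1 for _ in trie.prefixes(w)) - 1

print(max(words, key=prefix_count))  # computer (prefixes: 'co', 'comp')
```
Building the trie is linear in the total number of characters, and each lookup walks the word once, which gives the linear behaviour described above.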
17,584
52,884,584
I have this array:

```
countOverlaps = [numA, numB, numC, numD, numE, numF, numG, numH, numI, numJ, numK, numL]
```
and then I condense this array by getting rid of all 0 values:

```
countOverlaps = [x for x in countOverlaps if x != 0]
```
When I do this, I get an output like this: [2, 1, 3, 2, 3, 1, 1]

Which is what it should, so that makes sense. Now I want to expand the array so that each number is repeated as many times as its own value. Like this:

Original: [2, 1, 3, 2, 3, 1, 1]

What I want: [2,2,1,3,3,3,2,2,3,3,3,1,1]

Is something like this possible in Python? Thanks
2018/10/19
[ "https://Stackoverflow.com/questions/52884584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7010858/" ]
**Updated** Please check below: ``` >>> a = [2, 1, 3, 2, 3, 1, 1] >>> [b for b in a for _ in range(b)] [2, 2, 1, 3, 3, 3, 2, 2, 3, 3, 3, 1, 1] ```
This can be done using list comprehension. So far you had:

```
countOverlaps = [10,25,11,0,10,6,9,0,12,6,0,6,6,11,18]
countOverlaps = [x for x in countOverlaps if x != 0]
```
This gives us all non-zero numbers. Then we can do what you want with the following code:

```
mylist = [number for number in list(set(countOverlaps)) for i in range(0, countOverlaps.count(number)) ]
```
This turns 'mylist' into the following output, which is what you're after (note that iterating over a `set()` does not preserve the original order, so the result comes out grouped rather than in input order):

```
[6, 6, 6, 6, 9, 10, 10, 11, 11, 12, 18, 25]
```
17,586
42,066,449
So I have a function in Python which generates a dict like so:

```
player_data = {
    "player": "death-eater-01",
    "guild": "monster",
    "points": 50
}
```
I get this data by calling a function. Once I get this data I want to write it into a file, so I call:

```
g = open('team.json', 'a')
with g as outfile:
  json.dump(player_data, outfile)
```
This works fine. However my problem is that since a team consists of multiple players, I call the function again to get new player data:

```
player_data = {
    "player": "moon-master",
    "guild": "mage",
    "points": 250
}
```
Now when I write this data into the same file, the JSON breaks... as in, it shows up like so (missing comma between two nodes):

```
{
    "player": "death-eater-01",
    "guild": "monster",
    "points": 50
}
{
    "player": "moon-master",
    "guild": "mage",
    "points": 250
}
```
What I want is to store both of these records as proper JSON in the file. For various reasons I cannot prepare the full JSON object upfront and then save it in a single shot. I have to do it incrementally due to network breakage, performance and other issues.

Can anyone guide me on how to do this? I am using Python.
2017/02/06
[ "https://Stackoverflow.com/questions/42066449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1591731/" ]
You shouldn't append data to an existing file. Rather, you should build up a list in Python first which contains all the dicts you want to write, and only then dump it to JSON and write it to the file. If you really can't do that, one option would be to load the existing file, convert it back to Python, then append your new dict, dump to JSON and write it back replacing the whole file.
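A minimal sketch of the first suggestion (collect all players in one list, then write once at the end):

```
import json

team = []
team.append({"player": "death-eater-01", "guild": "monster", "points": 50})
team.append({"player": "moon-master", "guild": "mage", "points": 250})

with open('team.json', 'w') as outfile:
    json.dump(team, outfile)  # a single JSON array of player objects
```
The file then contains one valid JSON array, so `json.load` can read the whole team back in one call.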
To produce valid JSON you will need to load the previous contents of the file, append the new data to that, and then write it back to the file. Like so:

```
import json
import os

def append_player_data(player_data, file_name="team.json"):

    if os.path.exists(file_name):
        with open(file_name, 'r') as f:
            all_data = json.load(f)
    else:
        all_data = []

    all_data.append(player_data)

    with open(file_name, 'w') as f:
        json.dump(all_data, f)
```
17,587
27,529,610
I'm new to python and currently playing with it. I have a script which does some API Calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only this) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
Since it seems like you want to run one, and only one, function depending on the arguments given, I would suggest you use a mandatory positional argument `./prog command`, instead of optional arguments (`./prog --command1` or `./prog --command2`). so, something like this should do it: ``` FUNCTION_MAP = {'top20' : my_top20_func, 'listapps' : my_listapps_func } parser.add_argument('command', choices=FUNCTION_MAP.keys()) args = parser.parse_args() func = FUNCTION_MAP[args.command] func() ```
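A runnable version of that sketch, with stub implementations standing in for the real functions:

```
import argparse

def my_top20_func():
    print("top 20 by app...")

def my_listapps_func():
    print("all available apps...")

FUNCTION_MAP = {'top20': my_top20_func,
                'listapps': my_listapps_func}

parser = argparse.ArgumentParser()
parser.add_argument('command', choices=FUNCTION_MAP.keys())
args = parser.parse_args()

FUNCTION_MAP[args.command]()  # invoked as: python prog.py top20
```
Because `choices` is derived from the dict keys, adding a new command is just one more entry in `FUNCTION_MAP`.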
```
# based on parser input, invoke either regression or classification plus other params
import argparse

import pandas as pd  # needed for pd.read_csv below

parser = argparse.ArgumentParser()
parser.add_argument("--path", type=str)
parser.add_argument("--target", type=str)
parser.add_argument("--type", type=str)
parser.add_argument("--deviceType", type=str)
args = parser.parse_args()

df = pd.read_csv(args.path)
df = df.loc[:, ~df.columns.str.contains('^Unnamed')]

# AutoML is a user-defined class (not shown here)
if args.type == "classification":
    classify = AutoML(df, args.target, args.type, args.deviceType)
    classify.class_dist()
    classify.classification()
elif args.type == "regression":
    reg = AutoML(df, args.target, args.type, args.deviceType)
    reg.regression()
else:
    raise ValueError("Invalid argument passed")  # raise the error, not just construct it

# Values passed as:
# python app.py --path C:\Users\Abhishek\Downloads\adult.csv --target income --type classification --deviceType GPU
```
17,588
48,643,925
I am looking through some code and found the following lines: ``` def get_char_count(tokens): return sum(len(t) for t in tokens) def get_long_words_ratio(tokens, nro_tokens): ratio = sum(1 for t in tokens if len(t) > 6) / nro_tokens return ratio ``` As you can see, in the first case the complete expression is returned, whereas in the second case the expression is first evaluated and stored into a variable, which is then returned. My question is, which way is the better, more pythonic way? I am not entirely sure how Python handles returns from functions. Does it return by reference, or does it return the value directly? Does it resolve the expression and returns that? In summary, is it better to store an expression's value into a variable and return the variable, or is it also perfectly fine (efficiency, and PEP-wise) to return the expression as a whole?
2018/02/06
[ "https://Stackoverflow.com/questions/48643925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1150683/" ]
> > Does it return by reference[?] > > > Effectively yes. When you return an object, the id (i.e. memory address) of the object inside the function is the same as the id of the object outside the function. It doesn't make a copy or anything. > > [...] or does it return the value directly? > > > If you're saying "is it like the 'pass-by-value' argument passing system of many programming languages, where a copy is made and changes to the new value don't affect the original one? Except for returning values instead of passing them?", then no, it's not like that. Python does not make a copy of anything unless you explicitly tell it to. > > Does it resolve the expression and returns that? > > > Yes. Expressions are almost always resolved immediately. Times when they aren't include * when you have defined a function (but haven't executed it), the expressions in that function will not have been resolved, even though Python had to "pass over" those lines to create the function object * when you create a lambda object, (but haven't executed it), ... etc etc. > > In summary, is it better to store an expression's value into a variable and return the variable, or is it also perfectly fine (efficiency, and PEP-wise) to return the expression as a whole? > > > From the perspective of any code outside of your functions, both of the approaches are completely identical. You can't distinguish between "a returned expression" and "a returned variable", because they have the same outcome. Your second function is slightly slower than the first because it needs to allocate space for the variable name and deallocate it when the function ends. So you may as well use the first approach and save yourself a line of code and a millionth of a second of run-time. Here's an example breakdown of the byte code for two functions that use these different approaches: ``` def f(): return 2 + 2 def g(): x = 2 + 2 return x import dis print("Byte code for f:") dis.dis(f) print("Byte code for g:") dis.dis(g) ``` Result: ``` Byte code for f: 2 0 LOAD_CONST 2 (4) 2 RETURN_VALUE Byte code for g: 5 0 LOAD_CONST 2 (4) 2 STORE_FAST 0 (x) 6 4 LOAD_FAST 0 (x) 6 RETURN_VALUE ``` Notice that they both end with `RETURN_VALUE`. There's no individual `RETURN_EXPRESSION` and `RETURN_VARIABLE` codes.
While I prefer the first approach (Since it uses less memory), both expressions are equivalent in behavior. The PEP8 Style Guide doesn't really say anything about this, other than being consistent with your return statements. > > Be consistent in return statements. Either all return statements in a function should return an expression, or none of them should. If any return statement returns an expression, any return statements where no value is returned should explicitly state this as return None, and an explicit return statement should be present at the end of the function (if reachable). > > >
17,598
57,948,945
I have a very large square matrix of order around 570,000 x 570,000 and I want to raise it to the power of 2. The data is in JSON format, parsed into a nested associative-array form (a dict of dicts in Python). Let's say I want to represent this matrix:

```
[
 [0, 0, 0],
 [1, 0, 5],
 [2, 0, 0]
]
```
In JSON it's stored like:

```
{"3": {"1": 2}, "2": {"1": 1, "3": 5}}
```
Here, for example, `"3": {"1": 2}` means the number in the 3rd row and 1st column is 2.

I want the output to be in the same JSON format, but squared (matrix multiplication).

The programming language isn't important; I want to calculate it the fastest way possible (less than 2 days, if possible). So I tried to use Numpy in Python (`numpy.linalg.matrix_power`), but it seems that it doesn't work with my nested unsorted dict format.

I wrote simple Python code to do that, but I estimated that it would take 18 days to accomplish:

```
jsonFileName = "file.json"

def matrix_power(arr):
    result = {}
    for x1,subarray in arr.items():
        print("doing item:",x1)
        for y1,value1 in subarray.items():
            for x2,subarray2 in arr.items():
                if(y1 != x2):
                    continue
                for y2,value2 in subarray2.items():
                    partSum = value1 * value2
                    result[x1][y2] = result.setdefault(x1,{}).setdefault(y2,0) + partSum
    return result

import json
with open(jsonFileName, 'r') as reader:
    jsonFile = reader.read()
    print("reading is succesful")
    jsonArr = json.loads(jsonFile)
    print("matrix is in array form")
    matrix = matrix_power(jsonArr)
    print("Well Done! matrix is powered by 2 now")
    output = json.dumps(matrix)
    print("result is in json format")
    writer = open("output.json", 'w+')
    writer.write(output)
    writer.close()
    print("Task is done! you can close this window now")
```
Here, X1,Y1 is the row and col of the first matrix, which is then multiplied by the corresponding element of the second matrix (X2,Y2).
2019/09/15
[ "https://Stackoverflow.com/questions/57948945", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10530951/" ]
Numpy is not the problem; you need to feed it a format that numpy can understand. But since your matrix is really big, it probably won't fit in memory as a dense array, so it's a good idea to use a sparse matrix (`scipy.sparse.csr_matrix`):

```
import scipy.sparse

# `data` is the dict-of-dicts parsed from your JSON file
m = scipy.sparse.csr_matrix((
    [v for row in data.values() for v in row.values()],
    (
        [int(row_n) for row_n, row in data.items() for v in row],
        [int(column) for row in data.values() for column in row]
    )
))
```
Then it's just a matter of doing:

```
m**2
```
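An end-to-end sketch with the sample JSON from the question (keys are 1-based, so row/column 0 simply stays empty):

```
import json
import scipy.sparse

data = json.loads('{"3": {"1": 2}, "2": {"1": 1, "3": 5}}')
m = scipy.sparse.csr_matrix((
    [v for row in data.values() for v in row.values()],
    (
        [int(row_n) for row_n, row in data.items() for v in row],
        [int(column) for row in data.values() for column in row]
    )
))
print((m ** 2).toarray())  # only entry (2, 1) is nonzero: 1*0 + 5*2 = 10
```
For the real 570,000 x 570,000 matrix the same construction applies; only the nonzero entries are ever stored.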
> 
> now I have to somehow translate csr\_matrix back to json serializable
> 
> 

Here's one way to do that, using the attributes **data**, **indices**, **indptr** - `m` is the *csr\_matrix*:

```
import json

d = {}
end = m.indptr[0]
for row in range(m.shape[0]):
    start = end
    end = m.indptr[row+1]
    if end > start: # if row not empty
        d.update({str(1+row): dict(zip([str(1+i) for i in m.indices[start:end]], m.data[start:end]))})
output = json.dumps(d, default=int)
```
17,599
744,894
I want to pull certain comments from my .py files that give context for translations, rather than manually editing the .pot file. Basically I want to go from this Python file:

```
# For Translators: some useful info about the sentence below
_("Some string blah blah")
```
to this pot file:

```
# For Translators: some useful info about the sentence below
#: something.py:1
msgid "Some string blah blah"
msgstr ""
```
2009/04/13
[ "https://Stackoverflow.com/questions/744894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55565/" ]
After much pissing about I found the best way to do this: ``` #. Translators: # Blah blah blah _("String") ``` Then search for comments with a . like so: ``` xgettext --language=Python --keyword=_ --add-comments=. --output=test.pot *.py ```
I was going to suggest the `compiler` module, but it ignores comments:

f.py:

```
# For Translators: some useful info about the sentence below
_("Some string blah blah")
```
..and the compiler module:

```
>>> import compiler
>>> m = compiler.parseFile("f.py")
>>> m
Module(None, Stmt([Discard(CallFunc(Name('_'), [Const('Some string blah blah')], None, None))]))
```
The [AST](http://www.python.org/doc/2.5.2/lib/module-compiler.ast.html) module in Python 2.6 seems to do the same.

Not sure if it's possible, but if you use triple-quoted strings instead..

```
"""For Translators: some useful info about the sentence below"""
_("Some string blah blah")
```
..you can reliably parse the Python file with the compiler module:

```
>>> m = compiler.parseFile("f.py")
>>> m
Module('For Translators: some useful info about the sentence below', Stmt([Discard(CallFunc(Name('_'), [Const('Some string blah blah')], None, None))]))
```
I made an attempt at writing a more complete script to extract docstrings - it's incomplete, but seems to grab most docstrings: <http://pastie.org/446156> (or on [github.com/dbr/so\_scripts](http://github.com/dbr/so_scripts/tree/0bd66a21695a390cfa45f9ee26d7bed4eac10e5c/parse_py))

The other, much simpler, option would be to use regular expressions, for example:

```
f = """# For Translators: some useful info about the sentence below
_("Some string blah blah")
""".split("\n")

import re

for i, line in enumerate(f):
    m = re.findall("\S*# (For Translators: .*)$", line)
    if len(m) > 0 and i + 1 < len(f):  # guard so the following line exists
        print "Line Number:", i+1
        print "Message:", m
        print "Line:", f[i + 1]
```
..outputs:

```
Line Number: 1
Message: ['For Translators: some useful info about the sentence below']
Line: _("Some string blah blah")
```
Not sure how the `.pot` file is generated, so I can't be of any help at all with that part..
17,602
72,029,157
I read a book and tried to practice this code snippet:

```py
>>> from lis import parse
>>> parse('1.5')
1.5
```
Then I followed the guide at <https://github.com/adamhaney/lispy#getting-started>. My PC is Windows 11 Pro x64.

```
C:\Users\donhu>python -V
Python 3.10.4

C:\Users\donhu>pip -V
pip 22.0.4 from C:\Program Files\Python310\lib\site-packages\pip (python 3.10)

C:\Users\donhu>pip install lispy
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement lispy (from versions: none)
ERROR: No matching distribution found for lispy

C:\Users\donhu>
```
[![enter image description here](https://i.stack.imgur.com/s0Iir.png)](https://i.stack.imgur.com/s0Iir.png)

I also tried installing with Anaconda, but without success.

[![enter image description here](https://i.stack.imgur.com/kaXpk.png)](https://i.stack.imgur.com/kaXpk.png)

How do I fix this?
2022/04/27
[ "https://Stackoverflow.com/questions/72029157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3728901/" ]
You should use [`map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) and [`filter()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter). ```js const input = [ { title: "QA", rows: [ { risk: "P1", Title: "Server down", }, { risk: "P3", Title: "Permission issue", }, ], }, { title: "Prod", rows: [ { risk: "P5", Title: "Console log errors fix", }, { risk: "P1", Title: "Server is in hung state", }, ], }, ]; const output = input.map((obj) => ({ ...obj, rows: obj.rows.filter((row) => row.risk === "P1"), })); console.log(output); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` ### Explanation You want to return one object for each of the original object values i.e. a `1:1` mapping so your primary operation is `map()`. Then you want to return the same object except that the property `rows` should only contain the `rows` with `risk === "P1"` so you need to `filter()` the rows and create a new object (you should treat objects as [immutable](https://en.wikipedia.org/wiki/Immutable_object)) with that updated `rows` property.
First your original array needs an opening `[`. Instead of using `Array#forEach` use `Array#map` instead. `.forEach` does not return any result, but can allow you to modify the original array; `.map` on the other hand creates a new array. ```js const input = [{ "title": "QA", "rows": [ { "risk": "P1", "Title": "Server down", }, { "risk": "P3", "Title": "Permission issue", } ] }, { "title": "Prod", "rows": [ { "risk": "P5", "Title": "Console log errors fix", }, { "risk": "P1", "Title": "Server is in hung state", } ] } ], filter = "P1", output = input.map( ({rows,...rest}) => ({...rest, rows: rows.filter(({risk}) => risk === filter)}) ); console.log( output ); ``` If your aim was to modify the original array, however, then make the following modification to your original code: ```js const input = [{ "title": "QA", "rows": [ { "risk": "P1", "Title": "Server down", }, { "risk": "P3", "Title": "Permission issue", } ] }, { "title": "Prod", "rows": [ { "risk": "P5", "Title": "Console log errors fix", }, { "risk": "P1", "Title": "Server is in hung state", } ] } ], filter = "P1"; input.forEach((element,index) => { input[index] = {...element, rows: element.rows.filter( x => x.risk === filter )} }); console.log( input ); ```
17,603
40,427,547
I am looking for a conditional statement in Python to look for certain information in a specified column and put the result in a new column. Here is an example of my dataset:

```
OBJECTID    CODE_LITH
1           M4,BO
2           M4,BO
3           M4,BO
4           M1,HP-M7,HP-M1
```
and what I want as the result:

```
OBJECTID    CODE_LITH          M4    M1
1           M4,BO              1     0
2           M4,BO              1     0
3           M4,BO              1     0
4           M1,HP-M7,HP-M1     0     1
```
What I have done so far:

```
import pandas as pd
import numpy as np

lookup = ['M4']
df.loc[df['CODE_LITH'].str.isin(lookup),'M4'] = 1
df.loc[~df['CODE_LITH'].str.isin(lookup),'M4'] = 0
```
Since there are multiple values per row in "CODE\_LITH", it seems like the script is not able to find only "M4"; it can find "M4,BO" and put 1 or 0 in the new column. I have also tried:

```
if ('M4') in df['CODE_LITH']:
    df['M4'] = 0
else:
    df['M4'] = 1
```
With the same results.

Thanks for your help.

PS. The dataframe contains about 2.6 million rows and I need to do this operation for 30-50 variables.
2016/11/04
[ "https://Stackoverflow.com/questions/40427547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6146748/" ]
Use simple `[row][col]` access through your double pointer. It is more readable and you avoid indexing errors like the one in your code.

```
#include<stdio.h>
#include<stdlib.h>

int main(void)
{
    int **tab;
    int ligne;
    int col;

    printf("saisir le nbre de lignes volous\n");
    scanf("%d", &ligne);
    printf("saisir le nbre de colonnes volous\n");
    scanf("%d", &col);

    tab = malloc(ligne*sizeof(int*));
    if (tab != NULL)
    {
        int i ,j;
        for (i=0 ; i < ligne; i++)
        {
            tab[i] = malloc(col*sizeof(int));
            if (tab[i] == NULL)
            {
                fprintf(stderr, "Malloc failed\n");
                return 1;
            }
        }

        int k=0;
        for (i = 0; i < ligne; i++)
        {
            for (j = 0; j < col; j++)
            {
                tab[i][j] = k++;
            }
        }

        for (i = 0; i < ligne; i++)
        {
            for (j = 0; j < col; j++)
            {
                printf("%d\t", tab[i][j]);
            }
            free(tab[i]);
            printf("\n");
        }
    }
    free(tab);
    return 0;
}
```
```
int main(void)
{
    int ligne;
    int col;

    printf("saisir le nbre de lignes volous\n");
    scanf("%d", &ligne);
    printf("saisir le nbre de colonnes volous\n");
    scanf("%d", &col);

    int tableSize = ligne * (col*sizeof(int));
    int * table = (int*) malloc(tableSize);

    int i,j;
    for (i=0 ; i < ligne; i++)
    {
        for (j = 0; j < col; j++)
        {
            *(table + i*col + j) = 0;   /* row-major index: i*col + j */
        }
    }

    for (i = 0; i < ligne; i++)
    {
        for (j = 0; j < col; j++)
        {
            printf("%d\t", *(table + i*col + j));
        }
        printf("\n");
    }
    free(table);
    return 0;
}
```
17,604
25,826,977
I am currently taking a GIS programming class. The directions for using GDAL and ogr to manipulate the data is written for a Windows PC. I am currently working on a MAC. I am hoping to get some insight on how to translate the .bat code to a .sh code. Thanks!! Windows .bat code: ``` cd /d c:\data\PhiladelphiaBaseLayers set ogr2ogrPath="c:\program files\QGIS Dufour\bin\ogr2ogr.exe" for %%X in (*.shp) do %ogr2ogrPath% -skipfailures -clipsrc c:\data\PhiladelphiaBaseLayers\clipFeature\city_limits.shp c:\data\PhiladelphiaBaseLayers\clipped\%%X c:\data\PhiladelphiaBaseLayers\%%X for %%X in (*.shp) do %ogr2ogrPath% -skipfailures -s_srs EPSG:4326 -t_srs EPSG:3857 c:\data\PhiladelphiaBaseLayers\clippedAndProjected\%%X c:\data\PhiladelphiaBaseLayers\clipped\%%X ``` My mac .sh code: ``` cd ~/Desktop/PhiladelphiaBaseLayers set ogr2ogrPath="/Applications/QGIS.app/Contents/Resources/python/plugins/processing/algs/gdal/ogr2ogr.py" \ for shpfile in *.shp do $org2ogrPath$ -skipfailures -clipsrc \ ~/Desktop/PhiladelphiaBaseLayers/clipFeature/city_limits.shp \ ~/Desktop/PhiladelphiaBaseLayers/clipped/"shpfile2""shpfile" \ ~/Desktop/PhiladelphiaBaseLayers/"shpfile2""shpfile" for shpfile in *.shp do $ogr2ogrPath$ -skipfailures -s_srs EPSG:4326 -t_srs EPSG:3857 \ ~/Desktop/PhiladelphiaBaseLayers/clipped/"shpfile2""shpfile" done ```
2014/09/13
[ "https://Stackoverflow.com/questions/25826977", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4038027/" ]
I think it's telling you that the `false` in your code will never be reached, because the `true` causes the first part of the expression to be returned. You can simplify it to: ```dart onClick.listen((e) => fonixMenu.hidden = !fonixMenu.hidden); ```
I think what you actually wanted to do was

```dart
void main() {
  ....
  var menuToggle = querySelector('#lines')
    ..onClick.listen((e) => fonixMenu.hidden =
        fonixMenu.hidden == true ? fonixMenu.hidden = false : fonixMenu.hidden = true);
    //                                          ^ the 2nd `=` (assignment) that was missing
  ....
}
```
but Danny's solution is more elegant when you ensure that `fonixMenu.hidden` is never `null`, because

```dart
var x = null;
var Y = !x
```
causes an exception:

```dart
type 'Null' is not a subtype of type 'bool' of 'boolean expression'.
```
A simple null-safe variant:

```dart
var menuToggle = querySelector('#lines')
  ..onClick.listen((e) => fonixMenu.hidden = fonixMenu.hidden == true ? false : true);
```
17,607
23,827,284
I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel. What I'm trying to do is:

1. pass a list (in python) to a casper.js script.
2. Inside the casperjs script I would like to iterate over the python object (a list of search terms)
3. In the casper script I would like to query google for the search terms
4. Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
5. Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.

**QUESTION --> I'm not sure how to write the python function to pass an object to casper.**

**QUESTION --> I'm also not sure how to write the casper function to pass a JavaScript object back to python.**

Here is my python code.

```
import os
import subprocess

scriptType = 'casperScript.js'
APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = '\casperjs\bin\casperjs'
SCRIPT = os.path.join(APP_ROOT, test.js)
params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
```
js CODE

```
var casper = require('casper').create();

casper.start('http://google.com/', function() {
    this.echo(this.getTitle());
});

casper.run();
```
2014/05/23
[ "https://Stackoverflow.com/questions/23827284", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3098818/" ]
``` $("#show a").click(function(e) { e.preventDefault(); $("#info, #hide").show(); $("#show").hide(); }); $("#hide a").click(function(e) { e.preventDefault(); $("#info, #hide").hide(); $("#show").show(); }); ```
Use this to show/hide the "Details" div: <http://api.jquery.com/toggle/> Also, you could use just one span to display the "Show/Hide" link, changing the text accordingly when you click to toggle.
17,608
50,709,365
I start with the following tabular data (let's say test results by version):

```
    Item  Result Version
0     TO     OK      V1
1     T1    NOK      V1
2     T2     OK      V1
3     T3    NOK      V1
4     TO     OK      V2
5     T1     OK      V2
6     T2    NOK      V2
7     T3    NOK      V2
```

```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
                'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
                'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:

```
             V2
             OK   NOK
V1   OK      T0   T2
     NOK     T1   T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.

I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues?
2018/06/05
[ "https://Stackoverflow.com/questions/50709365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9899968/" ]
I was having this issue with Cocoapods. The solution was to clean the build folder, re-install all pods, and then rebuild the app. The issue resolved itself that way.
In the project pane on the LHS, for your build products, don't select them in the list for Target membership in the RHS pane.
17,615
60,311,148
I'm trying to pip install a package in an AWS Lambda function. The method recommended by Amazon is to create a zipped deployment package that includes the dependencies and python function all together (as described in [AWS Lambda Deployment Package in Python](https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html)). However, this results in not being able to edit the Lambda function using inline code editing within the AWS Lambda GUI. So instead, I would like to pip install the package within the AWS Lambda function itself. In AWS Lambda, the filesystem is read-only apart from the /tmp/ directory, so I am trying to pip install to the /tmp/ directory. The function is only called once-daily, so I don't mind about the few extra seconds required to re-pip install the package every time the function is run. **My attempt** ``` def lambda_handler(event, context): # pip install dependencies print('begin lambda handler') import subprocess import sys subprocess.call('pip install cryptography -t /tmp/ --no-cache-dir'.split()) from cryptography.fernet import Fernet pwd_encrypted = b'gAAAAABeTcT0OXH96ib7TD5-sTII6jMfUXPhMpwWRCF0315rWp4C0yav1XAPIn7prfkkA4tltYiWFAJ22bwuaj0z1CKaGl8vTgNd695SDl25HnLwu1xTzaQ=' key = b'fP-7YR1hUeVW4KmFmly4JdgotD6qjR52g11RQms6Llo=' cipher_suite = Fernet(key) result = cipher_suite.decrypt(pwd_encrypted).decode('utf-8') print(result) print('end lambda handler') ``` However, this results in the error > > [ERROR] ModuleNotFoundError: No module named 'cryptography' > > > I have also tried replacing the *subprocess* call with the following, as recommended in [this stackoverflow answer](https://stackoverflow.com/a/50255019/11918892) ``` cmd = sys.executable+' -m pip install cryptography -t dependencies --no-cache-dir' subprocess.check_call(cmd.split()) ``` However, this results in the error > > OSError: [Errno 30] Read-only file system: '/var/task/dependencies' > > >
2020/02/20
[ "https://Stackoverflow.com/questions/60311148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11918892/" ]
I solved this with a one-line adjustment to the original attempt. You just need to add /tmp/ to sys.path so that Python knows to search /tmp/ for the module. All you need to do is add the line `sys.path.insert(1, '/tmp/')`. **Solution** ``` import os import sys import subprocess # pip install custom package to /tmp/ and add to path subprocess.call('pip install cryptography -t /tmp/ --no-cache-dir'.split(), stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) sys.path.insert(1, '/tmp/') from cryptography.fernet import Fernet def lambda_handler(event, context): # pip install dependencies pwd_encrypted = b'gAAAAABeTcT0OXH96ib7TD5-sTII6jMfUXPhMpwWRCF0315rWp4C0yav1XAPIn7prfkkA4tltYiWFAJ22bwuaj0z1CKaGl8vTgNd695SDl25HnLwu1xTzaQ=' key = b'fP-7YR1hUeVW4KmFmly4JdgotD6qjR52g11RQms6Llo=' cipher_suite = Fernet(key) result = cipher_suite.decrypt(pwd_encrypted).decode('utf-8') print(result) ``` **Output** > > Hello stackoverflow! > > > Note - as @JohnRotenstein mentioned in the comments, the preferred method to add Python packages is to package dependencies in an [AWS Lambda Layer](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html). My solution just shows that it is possible to pip install packages directly in an AWS Lambda function.
For some reason subprocess.call() was returning a FileNotFound error when I was trying to `pip3.8 install <package> -t <install-directory>`. I solved this by using os.system() instead of subprocess.call(), and I specified the path of pip directly: `os.system('/var/lang/bin/pip3.8 install <package> -t <install-directory>')`.
17,625
32,779,333
I am trying to start learning about writing encryption algorithms, so using Python I am trying to manipulate data down to the binary level, so that I can add bits to the end of the data as well as manipulate it to obscure it.

I am not new to programming; I am actually a programmer, but I am relatively new to Python, which is why I am struggling a bit.

Can anyone show me the best way to manipulate a string down to the binary level in Python (or recommend how I should approach this)?

I have looked at a number of questions:

[Convert string to binary in python](https://stackoverflow.com/questions/18815820/convert-string-to-binary-in-python)

[Manipulating binary data in Python](https://stackoverflow.com/questions/3059301/manipulating-binary-data-in-python)

[Convert binary to ASCII and vice versa](https://stackoverflow.com/questions/7396849/convert-binary-to-ascii-and-vice-versa-python)

But all of these are not what I am looking for, and I do not know enough Python to be able to pick out what I need. Can someone please assist me with details (if you use a function, please explain what it does, e.g. `ord()`)?
2015/09/25
[ "https://Stackoverflow.com/questions/32779333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1779617/" ]
Set the color to UIColor.clearColor()
Use clear color for the scrollView background ``` self.scrollView.backgroundColor = UIColor.clearColor() ``` You don't need to set the background color for the view again once you have set the color with a pattern image. If you set the background color again, the pattern image will be removed.
17,626
64,727,574
I am new to Python. I am writing code to count the frequency of numbers in a list; however, I get a KeyError. How do I automatically check whether a key exists and return a default value if it does not? My code is below:

```
arr = [1,1,2,3,2,1]
freq={}
for i in arr:
    freq[i] += freq[i] + 1
```
2020/11/07
[ "https://Stackoverflow.com/questions/64727574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14595676/" ]
Yes, you can leverage the `get` method of a dictionary. You can simply do

```
arr=[1,1,2,3,2,1]
freq={}
for i in arr:
    freq[i] = freq.get(i,0)+1
```
Please Google basic questions like this before asking on Stack Overflow.
You want the dictionary's `get` method.
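A minimal sketch of that fix applied to the code in the question:

```
arr = [1, 1, 2, 3, 2, 1]
freq = {}
for i in arr:
    # get() returns 0 instead of raising KeyError for unseen keys
    freq[i] = freq.get(i, 0) + 1
print(freq)  # {1: 3, 2: 2, 3: 1}
```
`collections.Counter(arr)` gives the same result in one line if you prefer the standard library helper.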
17,629
12,920,856
I have a text file that consists of millions of vectors like this:

```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important.

In the above output file, I counted the first vector 3 times. The same pattern is repeated twice after the 5th line. Their counts should be different.

I want to count how many times each vector line is repeated and add it to the output text file like this:

```
V1 count 3
V3
V4
V1 count 2
```
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and then the last lines are the same. So the output should look like the above. Although the first and last patterns are the same, they are counted differently because they occur in separate runs.

I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help.
2012/10/16
[ "https://Stackoverflow.com/questions/12920856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1750896/" ]
If order doesn't matter
=======================

If you really want to do this in python (as opposed to the `sort filepath | uniq -c` as Jean suggests), then I would do this:

```
import collections

with open('path/to/file') as f:
    counts = collections.Counter(f)

outfile = open('path/to/outfile', 'w')
for line,occ in counts.iteritems():
    outfile.write("%s repeat %d\n" %(line.strip(), occ))
outfile.close()
```

If order matters
================

If order matters (if entry `i` appears before entry `j` in the input file, then entry `i` must appear before entry `j` in the output file), then what you need is a modified run-length encoder. Note however, if you have the following input file:

```
v1
v1
v1
v2
v2
v3
v1
```
then, your output file will look like this:

```
v1 repeat 3
v2 repeat 2
v3
v1
```
The code:

```
with open('infilepath') as infile:
    outfile = open('outfilepath', 'w')
    curr = infile.readline().strip()
    count = 1
    for line in infile:
        if line.strip() == curr:
            count += 1
        else:
            outfile.write(curr)
            if count-1:
                outfile.write(" repeat %d\n" %count)
            else:
                outfile.write("\n")
            curr = line.strip()
            count = 1
    outfile.write(curr)
    if count-1:
        outfile.write(" repeat %d\n" %count)
    outfile.close()
```
Of course, `uniq -c infilepath > outfilepath` will do the same

Hope this helps
If it all fits into memory, then you could do:

```
from collections import Counter

with open('vectors') as fin:
    counts = Counter(fin)
```
Or, if large, then you can use sqlite3:

```
import sqlite3

db = sqlite3.connect('/some/path/some/file.db')
db.execute('create table vector (vector)')
with open('vectors.txt') as fin:
    db.executemany('insert into vector values(?)', fin)
db.commit()

for row in db.execute('select vector, count(*) as freq from vector group by vector'):
    print row # do something suitable here
```
If the vectors are always contiguous:

```
from itertools import groupby

with open('vector') as fin:
    for vector, vals in groupby(fin):
        print '{} repeat {}'.format(vector.strip(), sum(1 for _ in vals))
```
17,631
51,341,157
``` CREATE OR REPLACE FUNCTION CLEAN_STRING(in_str varchar) returns varchar AS $$ def strip_slashes(in_str): while in_str.endswith("\\") or in_str.endswith("/"): in_str = in_str[:-1] in_str = in_str.replace("\\", "/") return in_str clean_str = strip_slashes(in_str) return clean_str $$ LANGUAGE plpythonu ; ``` This gives me `IndentationError` . However, If I remove backslashes, it works fine. How can I handle backslashes inside plpythonu?
2018/07/14
[ "https://Stackoverflow.com/questions/51341157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3601228/" ]
Your functions are asynchronous and asynchronous functions need some way of indicating when they are finished. Typically this is done with a callback or promise. Without that there is no way to know when they are finished. If they returned a promise, you might do something like this:

```js
var fun1 = function() {
  console.log('Started fun1');
  return new Promise(resolve => {
    setTimeout(() => {
      console.log('Finished fun1');
      resolve(true)
    }, 2000)
  })
}

var fun2 = function() {
  console.log('Started fun2');
  return new Promise(resolve => {
    setTimeout(() => {
      console.log('Finished fun2');
      resolve(true)
    }, 2000)
  })
}

function fun3(arr) {
  let p = Promise.resolve()
  for (var i = 0; i < arr.length; i++){
      p = p.then(arr[i]);
  }
  return p
}
fun3([fun1, fun2]);
```
You can write the loop a little more succinctly with `reduce()`:

```
function fun3(arr){
    return arr.reduce((a, c) => a.then(c), Promise.resolve())
}
```
If you can use `async/await` the last function would be a little easier to read. Since the `async` function returns a promise you can also easily know when the whole thing is done:

```
async function fun3(arr){
    for(var i=0;i<arr.length;i++) await arr[i]();
}

fun3([fun1,fun2])
.then(() => console.log("finished"))
```
Of course if it's possible to just have simple functions that are not asynchronous, a lot of these problems go away…you could just run them in a timer outside the functions.
**You can add the next function *inside* the `setTimeout` callback.** For example, ```js var fun1=function(){ console.log('Started fun1'); setTimeout(()=>{ console.log('Finished fun1'); fun2(); // Start the next timeout. },2000) } var fun2=function(){ console.log('Started fun2'); setTimeout(()=>{ console.log('Finished fun2'); },2000) } // This should output 'Started fun1', delay, // output 'Finished fun1' and then 'Started fun2', // delay, and then finally output 'Finished fun2'. fun1(); ```
17,641
58,484,745
Let's say that the Python objects below are **locked**: we just cannot change the code; all we can do is write right after it. I know it's awful, but let's say that we are forced to work with this.

```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
#let say that there's 99 of them
```
**How do I print the name of each and every one of them (99) without repetition?**

From my noob perspective, the ideal way to resolve this case without repetition is using the same logic that we have with strings, because name => **Name+index**, so it can be really easy to iterate over them.

So, some code that works with the same logic as the totally fictive one below:

```
for i in range (1,100):
    print(Name+f"{i:02d}")
```

```
for i in range (1,100):
    string_v_of_obj = "Name" + str(f"{i:02d}")
    print(func_transform_string_to_code(string_v_of_obj))
```
Maybe something like that is possible.

```
for python_object in script_objects:
    if Name in python_object:
        print(python_object)
```
2019/10/21
[ "https://Stackoverflow.com/questions/58484745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11613897/" ]
This could do the trick: ``` Name01 = "Dorian" Name02 = "Tom" Name04 = "Jerry" Name03 = "Jessica" vars = locals().copy() for i in vars: if 'Name' in i: print((i, eval(i))) ``` alternative in one line: ``` Name01 = "Dorian" Name02 = "Tom" Name04 = "Jerry" Name03 = "Jessica" print([(i, eval(i)) for i in locals().copy() if "Name" in i]) ```
You can access the global variables through `globals()` or if you want the local variables with `locals()`. They are stored in a `dict`. So ``` for i in range (1,100): print(locals()[f"Name{i:02d}"]) ``` should do what you want.
17,642
27,627,440
I am trying to use the [python-user-agents](https://github.com/selwin/python-user-agents/blob/master/user_agents/parsers.py). I keep running into a number of bugs within the library itself. First it referred to a `from ua_parser import user_agent_parser` that it never defined. So after banging my head, I looked online to see what that might be and found that `ua_parser` is yet another library that this project was using. So I downloaded `ua_parser`. But now I am getting an error that ``` TypeError: parse_device() got an unexpected keyword argument 'model' ``` Sure enough, `ua_parser` has a model variable that the python-user-agents library is not expecting. Has anyone done a better job with this library? Whoever wrote it clearly did a terrible job. But it seems to be the only thing out there that I could find. Any help fixing it to work well? I am looking to use it so to identify if a browser's device is mobile or touchable or a tablet as in: `user_agent.is_mobile` or `user_agent.is_touch_capable` or `user_agent.is_tablet`
2014/12/23
[ "https://Stackoverflow.com/questions/27627440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2187407/" ]
if you look at the readme from the github link it tells you what to install and how to use the lib: You need pyyaml and ua-parser: ``` pip install pyyaml ua-parser user-agents ``` A working example: ``` In [1]: from user_agents import parse In [2]: ua_string = 'Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B179 Safari/7534.48.3' In [3]: user_agent = parse(ua_string) In [4]: user_agent.is_mobile Out[4]: True In [5]: user_agent.is_touch_capable Out[5]: True In [6]: user_agent.is_tablet Out[6]: False ```
Actually the new version of ua-parser is incompatible with this so you have to install ua-parser==0.3.6
17,649
21,214,531
Howdy: somewhat of a python/programming newbie.

I am trying to find each time a certain word starts a new sentence and replace it, which in this case is good old "Bob", replaced with "John". I am using a dictionary and the `.replace()` method to do the replacing - replacing the dictionary key with the associated value.

Here is my code:

```
start_replacements = {'. Bob': '. John',
                      '! Bob': '! John',
                      '? Bob': '? John',
                      '\nBob': '\nJohn',
                      }

def search_and_replace(start_word, replacement):
    with open('start_words.txt', 'r+') as article:
        read_article = article.read()
        replaced = read_article.replace(start_word, replacement)
        article.seek(0)
        article.write(replaced)

def main():
    for start_word, replacement in start_replacements.iteritems():
        search_and_replace(start_word, replacement)

if __name__ == '__main__':
    main()
```
You will see in the dictionary that I have 4 ways of finding "Bob" at the beginning of a sentence, but I am not sure how to find "Bob" at the very beginning of a text file, without using regex's `^`. I would prefer to avoid using regex to keep this script simpler. Is this possible?

EDIT: Contents of "start\_words.txt" before running the script:

```
Bob is at the beginning of the file. Bob after period!
Bob after exclamation? Bob after question.
Bob after newline.
```
Content after running the script:

```
Bob is at the beginning of the file. John after period!
John after exclamation? John after question.
John after newline.
```
EDIT: **Explanation for not wanting regex**: I would prefer to stick with the dictionary because it is going to grow each week with new words and phrases added. In this instance it is just "Bob". The dictionary will probably grow into the hundreds. I am not hell-bent on not using regex, but as a relative newbie, I was trying to find out if there was another way that I don't know about.

EDIT: The 3rd comment below by @tripleee is a great suggestion and works for what I want to do. Thanks a bunch. Apologies, it was not my intention to cause some downvotes for myself and within the answer. All help has been appreciated.
2014/01/19
[ "https://Stackoverflow.com/questions/21214531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680443/" ]
You have to adjust either the data you are working with or the algorithm to account for this special case.

For example, you may decorate the beginning of your data with some value and add the corresponding replacement to your dictionary.

```
f_begin_deco = '\0\0\0'  # Sequence that won't be in data.
start_replacements = { f_begin_deco + 'Bob': f_begin_deco + 'John' }

# In your search_and_replace function.
read_article = f_begin_deco + article.read()
replaced = read_article.replace(start_word, replacement)
replaced = replaced[len(f_begin_deco):]  # Remove beginning of file decoration.
```
Also you may want to explore the [context manager protocol](http://docs.python.org/2/library/stdtypes.html#typecontextmanager) to create more elegant code for data decoration.

An alternative approach is to change your search-and-replace algorithm to make it account for the special case.

```
start_replacements = { 'Bob': 'John' }

# In your search_and_replace function.
if read_article.startswith(start_word):
    read_article = replacement + read_article[len(start_word):]
```
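A self-contained sketch of that second approach (the file contents are inlined here for illustration):

```
start_replacements = {'Bob': 'John'}
read_article = "Bob is at the start. But Bob stays elsewhere."

for start_word, replacement in start_replacements.items():
    if read_article.startswith(start_word):
        read_article = replacement + read_article[len(start_word):]

print(read_article)  # John is at the start. But Bob stays elsewhere.
```
Only the very beginning of the text is touched, so the mid-sentence "Bob" is left alone, exactly the case the `^` anchor would otherwise cover.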
Question to your question: why don't you want to use regex? ``` >>> import re >>> x = "! Bob is a foo bar" >>> re.sub('^[!?.\\n\\s]*Bob','John', x) 'John is a foo bar' >>> x[:2]+re.sub('^[!?.\\n\\s]*Bob','John', x) '! John is a foo bar' ``` Here's my attempt to do it without regex: ``` >>> x = "! Bob is a foo bar" >>> first = ['!','?','.','\n'] >>> x = x.split() >>> x[1] ="John" if x[1] == "Bob" and x[0] in first else x[1] >>> x ['!', 'John', 'is', 'a', 'foo', 'bar'] >>> " ".join(x) '! John is a foo bar' ``` As @falsetru noted: ``` >>> x = "\n Bob is a foo bar" >>> x = x.split() >>> x[1] ="John" if x[1] == "Bob" and x[0] in first else x[1] >>> " ".join(x) 'Bob is a foo bar' ``` Possibly the ugliest way to resolve the `str.split()` removing the `\n` is to: ``` >>> x = "\n Bob is a foo bar" >>> y = x.split() >>> y[1] ="John" if y[1] == "Bob" and y[0] in first else y[1] >>> y ['Bob', 'is', 'a', 'foo', 'bar'] >>> if x.split()[0] == "\n": ... y.insert(0,'\n') ... >>> " ".join(y) 'Bob is a foo bar' >>> y ['Bob', 'is', 'a', 'foo', 'bar'] >>> if x[0] == "\n": ... y.insert(0,'\n') ... >>> " ".join(y) '\n Bob is a foo bar' ``` I should stop appending my answer, otherwise I'll be just condoning the OP to use nonsensical solution to which regex resolves easily.
17,650
61,680,684
I am having trouble with a problem in Python. I am making a tic tac toe game, and I have created a function that takes in a list of lists containing the state of the game, such as [[0,0,0],[0,0,0],[0,0,0]], and outputs a similar list replacing 0, 1, 2 with "-", "X", "O" respectively, like this:

```
def display_board(b):
    for r in range(0, 3):
        for c in range(0, 3):
            if b[r][c] == 1:
                b[r][c] = 'X'
            elif b[r][c] == 2:
                b[r][c] = 'O'
            else:
                b[r][c] = '-'
    return b
```
I am using it as disp = display\_board(b), where b contains the board state as mentioned above. The function returns the needed value correctly; however, the value of b also changes to be the same as disp. Also, if there is another variable before this such that test = b, the value of test also changes. I have tried different interpreters on different computers and the problem persists. The following is my full code; I am grateful for any help in advance.

```
def move_input(p):
    x = str(input("player {} enter your move <row,col> - ".format(p)))
    while True:
        l = x.split(",")
        if len(x) != 3:
            x = str(input("ERROR: INVALID INPUT\please enter correct input row and col with comma in between <row,"
                          "col> - "))
        elif not (l[0].isdigit() and l[1].isdigit()):
            x = str(input("ERROR: INVALID INPUT\please enter correct input row and col with comma in between <row,"
                          "col> - "))
        else:
            x[0] -= 1
            x[1] -= 1
            return x


def display_board(a):
    b = a
    for r in range(0, 3):
        for c in range(0, 3):
            if b[r][c] == 1:
                b[r][c] = 'X'
            elif b[r][c] == 2:
                b[r][c] = 'O'
            else:
                b[r][c] = '-'
    return b


def game():
    g = [[0 for x in range(0, 3)] for x in range(0, 3)]
    print("Hi there! welcome to the game of tic tac toe...")
    while True:
        # Here is the problem
        print(g)
        disp = display_board(g)
        # the value of g changes after this
        print(g)
        print("\n{}\n{}\n{}".format(disp[0], disp[1], disp[2]))
        for player in range(1, 3):
            if (0 in g[0]) or (0 in g[1]) or (0 in g[2]):
                i = move_input(player)
                row = i[0]
                col = i[1]
                g[row][col] = player
            else:
                print("DRAW! good job both players")
                return 0

game()
```
2020/05/08
[ "https://Stackoverflow.com/questions/61680684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13498818/" ]
I know I'm writing very late, but I hope this helps other people who are looking for the same thing; it helped me. In particular, you can pass parameters through the URL to the data source connection, assign them to variables used in the `WHERE` clause, and filter down to the information you need. It is all described here: <https://developers.google.com/datastudio/connector/data-source-parameters#set_url_parameters>
Expanding on Yeisson's answer. Report parameters are passed via query parameter `params`. Value is URL-encoded JSON object with all report parameters that you want to set. So parameter values such as ```json { "ds0.includeToday": true, "ds0.units": "Metric", "ds1.countries": ["Canada", "Mexico"], "ds1.labelName": "Population" } ``` would be encoded this way (JavaScript): ```js const json = JSON.stringify({ "ds0.includeToday": true, "ds0.units": "Metric", "ds1.countries": ["Canada", "Mexico"], "ds1.labelName": "Population" }) // "{\"ds0.includeToday\":true,\"ds0.units\":\"Metric\",\"ds1.countries\":[\"Canada\",\"Mexico\"],\"ds1.labelName\":\"Population\"}" const encodedParams = encodeURIComponent("{\"ds0.includeToday\":true,\"ds0.units\":\"Metric\",\"ds1.countries\":[\"Canada\",\"Mexico\"],\"ds1.labelName\":\"Population\"}") // "%7B%22ds0.includeToday%22%3Atrue%2C%22ds0.units%22%3A%22Metric%22%2C%22ds1.countries%22%3A%5B%22Canada%22%2C%22Mexico%22%5D%2C%22ds1.labelName%22%3A%22Population%22%7D" ``` and then passed to the report like this: `https://datastudio.google.com/reporting/REPORT_ID/page/PAGE_ID?params=%7B%22ds0.includeToday%22%3Atrue%2C%22ds0.units%22%3A%22Metric%22%2C%22ds1.countries%22%3A%5B%22Canada%22%2C%22Mexico%22%5D%2C%22ds1.labelName%22%3A%22Population%22%7D`
17,653
56,576,400
I wanted to create a mapping between two arrays. But in Python, doing this resulted in a mapping where only the **last element gets picked**.

```
array_1 = [0,0,0,1,2,3]
array_2 = [4,4,5,6,8,7]
mapping = dict(zip(array_1, array_2))
print(mapping)
```

The mapping resulted in `{0: 5, 1: 6, 2: 8, 3: 7}`. How do I pick the most frequently occurring element instead, in this case `4` for key `0`?
2019/06/13
[ "https://Stackoverflow.com/questions/56576400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11309609/" ]
You can create a dictionary with key and a list of values for the key. Then you can go over the list of values in this dictionary, and update the value to be the most frequent item in the list using [Counter.most\_common](https://docs.python.org/3/library/collections.html#collections.Counter.most_common) ``` from collections import defaultdict, Counter array_1 = [0,0,0,1,2,3] array_2 = [4,4,5,6,8,7] mapping = defaultdict(list) #Create the mapping with a list of values for key, value in zip(array_1, array_2): mapping[key].append(value) print(mapping) #defaultdict(<class 'list'>, {0: [4, 4, 5], 1: [6], 2: [8], 3: [7]}) res = defaultdict(int) #Iterate over mapping and chose the most frequent element in the list, and make it the value for key, value in mapping.items(): #The most frequent element will be the first element of Counter.most_common res[key] = Counter(value).most_common(1)[0][0] print(dict(res)) ``` The output will be ``` {0: 4, 1: 6, 2: 8, 3: 7} ```
You can count frequencies of all mappings using `Counter` and then sort those mappings by key and frequency: ``` from collections import Counter array_1 = [0,0,0,1,2,3] array_2 = [4,4,5,6,8,7] c = Counter(zip(array_1, array_2)) dict(i for i, _ in sorted(c.items(), key=lambda x: (x[0], x[1]), reverse=True)) # {3: 7, 2: 8, 1: 6, 0: 4} ```
17,654
73,956,255
Hi, I am running this Python code to reduce multi-line patterns to singletons, but I am doing this on extremely large files of 200,000+ lines. Here is my current code:

```
import sys
import re

with open('largefile.txt', 'r+') as file:
    string = file.read()
    string = re.sub(r"((?:^.*\n)+)(?=\1)", "", string, flags=re.MULTILINE)
    file.seek(0)
    file.write(string)
    file.truncate()
```

The problem is that the re.sub() takes ages (10m+) on my large files. Is it possible to speed this up in any way?

Example input file:

```
hello mister
hello mister
goomba bananas
goomba bananas
chocolate
hello mister
```

Example output:

```
hello mister
goomba bananas
chocolate
hello mister
```

These patterns can be bigger than 2 lines as well.
2022/10/05
[ "https://Stackoverflow.com/questions/73956255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20154432/" ]
Regexps are compact here, but will never be speedy. For one reason, you have an inherently line-based problem, but regexps are inherently character-based. The regexp engine has to deduce, over & over & over again, where "lines" are by searching for newline characters, one at a time. For a more fundamental reason, everything here is brute-force character-at-a-time search, remembering nothing from one phase to the next. So here's an alternative. Split the giant string into a list of lines, just once at the start. Then that work never needs to be done again. And then build a dict, mapping a line to a list of the indices at which that line appears. That takes linear time. Then, given a line, we don't have to search for it at all: the list of indices tells us at once every place it appears. Worse-case time *can* still be poor, but I expect it will be at least a hundred times faster on "typical" inputs. ```py def dedup(s): from collections import defaultdict lines = s.splitlines(keepends=True) line2ix = defaultdict(list) for i, line in enumerate(lines): line2ix[line].append(i) out = [] n = len(lines) i = 0 while i < n: line = lines[i] # Look for longest adjacent match between i:j and j:j+(j-i). # j must be > i, and j+(j-i) <= n so that j <= (n+i)/2. maxj = (n + i) // 2 searching = True for j in reversed(line2ix[line]): if j > maxj: continue if j <= i: break # Lines at i and j match. if all(lines[i + k] == lines[j + k] for k in range(1, j - i)): searching = False break if searching: out.append(line) i += 1 else: # skip the repeated block at i:j i = j return "".join(out) ``` EDIT ---- This incorporates Kelly's idea of incrementally updating `line2ix` using a `deque` so that the candidates looked at are always in `range(i+1, maxj+1)`. Then the innermost loop doesn't need to check for those conditions. It's a mixed bag, losing a little when there are very few duplicates, because in such cases the `line2ix` sequences are very short (or even singletons for unique lines). Here's timing for a case where it really pays off: a file containing about 30,000 lines of Python code. Many lines are unique, but a few kinds of lines are very common (for example, the empty `"\n"` line). Cutting the work in the innermost loop can pay for those common lines. 
`dedup_nuts` was picked for the name because this level of micro-optimization is, well, nuts ;-) ```none 71.67997950001154 dedup_original 48.948923900024965 dedup_blhsing 2.204853900009766 dedup_Tim 9.623824400012381 dedup_Kelly 1.0341253000078723 dedup_blhsingTimKelly 0.8434303000103682 dedup_nuts ``` And the code: ```py def dedup_nuts(s): from array import array from collections import deque encode = {} decode = [] lines = array('L') for line in s.splitlines(keepends=True): if (code := encode.get(line)) is None: code = encode[line] = len(encode) decode.append(line) lines.append(code) del encode line2ix = [deque() for line in lines] view = memoryview(lines) out = [] n = len(lines) i = 0 last_maxj = -1 while i < n: maxj = (n + i) // 2 for j in range(last_maxj + 1, maxj + 1): line2ix[lines[j]].appendleft(j) last_maxj = maxj line = lines[i] js = line2ix[line] assert js[-1] == i, (i, n, js) js.pop() for j in js: #assert i < j <= maxj if view[i : j] == view[j : j + j - i]: for k in range(i + 1, j): js = line2ix[lines[k]] assert js[-1] == k, (i, k, js) js.pop() i = j break else: out.append(line) i += 1 #assert all(not d for d in line2ix) return "".join(map(decode.__getitem__, out)) ``` Some key invariants are checked by asserts there, but the expensive ones are commented out for speed. Season to taste.
Nesting a quantifier within a quantifier is expensive and in this case unnecessary. You can use the following regex without nesting instead: ``` string = re.sub(r"(^.*\n)(?=\1)", "", string, flags=re.M | re.S) ``` In the following test it more than cuts the time in half compared to your approach: <https://replit.com/@blhsing/HugeTrivialExperiment>
17,655
53,569,407
Is it possible to conditionally replace parts of strings in MySQL? Introduction to a problem: Users in my database stored articles (table called "table", column "value", each row = one article) with wrong links to images. I'd like to repair all of them at once. To do that, I have to replace all of the addresses in "href" links that are followed by images, i.e., `<a href="link1"><img src="link2"></a>` should by replaced by `<a href="link2"><img src="link2"></a>` My idea is to search for each "href" tag and if the tag is followed by and "img", than I'd like to obtain "link2" from the image and use it replace "link1". I know how to do it in bash or python but I do not have enough experience with MySQL. To be specific, my table contains references to images like ``` <a href="www.a.cz/b/c"><img class="image image-thumbnail " src="www.d.cz/e/f.jpg" ... ``` I'd like to replace the first adress (href) by the image link. To get ``` <a href="www.d.cz/e/f.jpg"><img class="image image-thumbnail " src="www.d.cz/e/f.jpg" ... ``` Is it possible to make a query (queries?) like ``` UPDATE `table` SET value = REPLACE(value, 'www.a.cz/b/c', 'XXX') WHERE `value` LIKE '%www.a.cz/b/c%' ``` where XXX differs every time and its value is obtained from the database? Moreover, "www.a.cz/b/c" varies. To make things complicated, not all of the images have the "href" link and not all of the links refer to images. There are three possibilities: 1. "href" followed by "img" -> replace 2. "href" not followed by "img" -> keep original link (probably a link to another page) 3. "img" without "href" -> do nothing (there is no wrong link to replace) Of course, some of the images may have a correct link. In this case it may be also replaced (original and new will be the same). Database info from phpMyAdmin > > Software: MariaDB > > > Software version: 10.1.32-MariaDB - Source distribution > > > Protocol version: 10 > > > Server charset: UTF-8 Unicode (utf8) > > > Apache > > > Database client version: libmysql - 5.6.15 > > > PHP extension: mysqli > > > Thank you in advance
2018/12/01
[ "https://Stackoverflow.com/questions/53569407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10731133/" ]
``` SELECT regexp_replace( value, '^<a href="([^"]+)"><img class="([^"]+)" src="([^"]+)"(.*)$', '<a href="\\3"><img class="\\2" src="\\3"\\4' ) FROM yourTable ``` The replacement only happens if the pattern is matched. * `^` at the start means `start of the string` * `([^"]+)` means `one of more characters, excluding "` * `(.*)` means zero or more of any character * `$` at the end means `end of the string` The replacement takes the 3rd "pattern enclosed in braces" (back-reference) and puts it where the 1st "pattern enclosed in braces" (back-reference) was. The 2nd, 3rd and 4th back-references are replaced with themselves *(no change)*. <https://dbfiddle.uk/?rdbms=mariadb_10.2&fiddle=96aef2214f844a1466772f41415617e5> If you have strings that don't ***exactly*** match the pattern, it will do nothing. Extra spaces will trip it up, for example. In which case you need to work out a new regular expression that always matches all of the strings you want to work on. Then you can use the `\\n` back-references to make replacements. For example, the following deals with extra spaces in the `href` tag... ``` SELECT regexp_replace( value, '^<a[ ]+href[ ]*=[ ]*"([^"]+)"><img class="([^"]+)" src="([^"]+)"(.*)$', '<a href="\\3"><img class="\\2" src="\\3"\\4' ) FROM yourTable ``` ***EDIT:*** Following comments clarifying that these are actually snippets from the MIDDLE of the string... <https://dbfiddle.uk/?rdbms=mariadb_10.2&fiddle=48ce1cc3df5bf4d3d140025b662072a7> ``` UPDATE yourTable SET value = REGEXP_REPLACE( value, '<a href="([^"]+)"><img class="([^"]+)" src="([^"]+)"', '<a href="\\3"><img class="\\2" src="\\3"' ) WHERE value REGEXP '<a href="([^"]+)"><img class="([^"]+)" src="([^"]+)"' ``` *(Though I prefer the syntax `RLIKE`, it's functionally identical.)* This will also find an replace that pattern multiple times. You're not clear if that's desired or possible.
Solved, thanks to @MatBailie, but I had to modify his answer. The final query, including the update, is

```
UPDATE `table`
SET value = 
    REGEXP_REPLACE(value, '(.*)<a href="([^"]+)"><img class="([^"]+)" src="([^"]+)"(.*)', '\\1<a href="\\4"><img class="\\3" src="\\4"\\5')
```

A wildcard (.*) had to be put at the beginning of the search because the link is included in an article (long text) and, consequently, the back-reference numbers in the replace pattern are increased by one.
17,660
64,950,799
I am trying to group the indexes of the customers based on the following condition with Python. If the database contains the same contact number or email, the result should return the indexes of the tuples grouped together in a sub-list.

For a given database:

```
data = [
    ("Customer1","contactA", "emailA"),
    ("CustomerX","contactA", "emailX"),
    ("CustomerZ","contactZ", "emailW"),
    ("CustomerY","contactY", "emailX"),
]
```

The above example shows that Customer1 and CustomerX share the same contact number, and CustomerX and CustomerY share the same email, so Customer1, CustomerX and CustomerY are the same customer. Hence the result is `[[0, 1, 3], [2]]`
2020/11/22
[ "https://Stackoverflow.com/questions/64950799", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14091382/" ]
You could build a graph where you connect elements that share an email or a contact, and then find [connected components](https://en.wikipedia.org/wiki/Component_(graph_theory)) (e.g., by using a [bfs](https://en.wikipedia.org/wiki/Breadth-first_search) visit). In this case I'm using the [networkx](https://networkx.org/documentation/stable//index.html) library to build the graph and find the connected components. (Note the `add_nodes_from` call, which keeps customers with no shared attribute, such as index 2, in the graph as isolated nodes.)

```
>>> import itertools
>>> import networkx as nx
>>> from collections import defaultdict
>>> contacts = defaultdict(list)
>>> emails = defaultdict(list)
>>> for idx, (name, contact, email) in enumerate(data):
...     contacts[contact].append(idx)
...     emails[email].append(idx)
...
>>> g = nx.Graph()
>>> g.add_nodes_from(range(len(data)))
>>> for common_attr in itertools.chain(contacts.values(), emails.values()):
...     g.add_edges_from(itertools.combinations(common_attr,2))
...
>>> list(nx.connected_components(g))
[{0, 1, 3}, {2}]
```
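If you need the exact `[[0, 1, 3], [2]]` shape from the question (lists of indices rather than sets), one extra line converts the components; this assumes the session above, and the order of the components may differ:

```
>>> [sorted(c) for c in nx.connected_components(g)]
[[0, 1, 3], [2]]
```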
You could do this: ``` my_contact_dict = {} my_email_dict = {} my_list = [] for pos, cust in enumerate(data): contact_group = my_contact_dict.get(cust[1], set()) # returns empty set if not in dict email_group = my_email_dict.get(cust[2], set()) # contact_group.add (pos) email_group.add (pos) contact_group.update (email_group) # Share info between the two groups email_group.update (contact_group) # for member in contact_group: my_contact_dict[data[member][1]] = contact_group for member in email_group: my_email_dict[data[member][2]] = email_group result = {tuple(x) for x in my_contact_dict.values()} print (result) ``` **Testing it out:** ``` data = [ ("Customer1","contactA", "emailA"), ("CustomerX","contactA", "emailX"), ("CustomerZ","contactZ", "emailW"), ("CustomerY","contactY", "emailX"), ] ``` gives: ``` {(2,), (0, 1, 3)} ``` And: ``` data = [ ("Customer1","contactA", "emailA"), ("CustomerX","contactA", "emailX"), ("CustomerZ","contactZ", "emailW"), ("CustomerY","contactY", "emailX"), ("CustomerW","contactZ", "emailA"), ] ``` gives: ``` {(0, 1, 2, 3, 4)} ```
17,661
27,773,111
I'm new to cocos2d-x. I'm trying to set up cocos2d-x for Android and I followed the [video](https://www.youtube.com/watch?v=2LI1IrRp_0w&index=2&list=PLRtjMdoYXLf4od_bOKN3WjAPr7snPXzoe) tutorial below exactly. I failed at the terminal steps (the python setup.py command result is not as expected). For example, when I begin the setup in the terminal I get the following error.

```
->Please enter the path of NDK_ROOT (or press Enter to skip):/Users/apple/Documents/Development/Cosos2d-x/android-ndk-r9d 
->Error: "/Users/apple/Documents/Development/Cosos2d-x/android-ndk-r9d " is not a valid path of NDK_ROOT. Ignoring it.
->Check environment variable ANDROID_SDK_ROOT
->Search for environment variable ANDROID_SDK_ROOT...
->ANDROID_SDK_ROOT not found
->Search for command android in system...
->Command android not found
```

The same error happens for the setup paths ANDROID_SDK_ROOT and ANT_ROOT. How can I fix the problem? Thanks for helping me out.

I'm working on

* Mac OS 10.9.5
* android-ndk-r9d
* apache-ant-1.9.4
* cocos2d-x-3.3
* adt-bundle-mac-x86_64-20140321
2015/01/05
[ "https://Stackoverflow.com/questions/27773111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2219111/" ]
Did you copy the path into the terminal? If so, try deleting the trailing whitespace; that will solve the problem.
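A quick way to see why the trailing space breaks the check (a minimal sketch; the path is just the one from the question):

```
import os

path_ok = "/Users/apple/Documents/Development/Cosos2d-x/android-ndk-r9d"
path_bad = path_ok + " "  # trailing space from a sloppy paste

print(os.path.isdir(path_ok))    # True if the NDK is really there
print(os.path.isdir(path_bad))   # False: the space makes it a different path
print(repr(path_bad.strip()))    # .strip() would recover the intended path
```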
The Cocos setup script uses `os.path.join($your_path, $some_extra_file)`, so you have to add a slash `/` at the end:

> /Users/apple/Documents/Development/Cosos2d-x/android-ndk-r9d/
17,663
38,361,916
I am trying to insert the following list of dictionaries, named `posts`, into MongoDB, and got a `BulkWriteError: batch op errors occurred` error which I don't know how to fix.

`posts:`

```
[{'#AUTHID': 'fffafe151f07a30a0ede2038a897b680',
  'Records': [
    {'DATE': '07/22/09 05:54 PM',
     'STATUS': 'Is flying back friday night in time to move the rest of his stuff then go to work the next morning... great.'},
    ......
    {'DATE': '07/19/09 04:39 PM', 'STATUS': 'is stealing his net by the lake'}]},
 {'#AUTHID': 'fffafe151f07a30a0ede2038a897b680',
  'Records': [
    {'DATE': '07/22/09 05:54 PM',
     'STATUS': 'Is flying back friday night in time to move the rest of his stuff then go to work the next morning... great.'},
    {'DATE': '07/19/09 04:39 PM', 'STATUS': 'is stealing his net by the lake'},
    ....
```

The code I used:

```
collection = db.posts
collection.insert_many(p for p in posts )
```

But then I got an error that says `BulkWriteError: batch op errors occurred`, and I only managed to import the first dictionary (corresponding to the first `#AUTHID`).

I found a link that describes a similar situation, but it doesn't explain much about why this happens or how to solve the issue. It's under *"Why does PyMongo add an `_id` field to all of my documents?"* in the following link: <https://github.com/mongodb/mongo-python-driver/blob/master/doc/faq.rst#id25>
2016/07/13
[ "https://Stackoverflow.com/questions/38361916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6200575/" ]
It's not too late to answer here; you are almost there. I am not sure whether the [FAQ](https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.insert_many) has been updated, but please read it carefully:

> when calling `insert_many()` with a list of references to a **single** document raises BulkWriteError

Note that it says **single**, or in other words, the **same instance**. The example in the FAQ shows how to produce the error with the same instance. You can check whether two entries are the same instance by using `id()` to display the memory address.

```
print id(posts[0])
print id(posts[1])
```

In fact, I can see the content of your documents is the same. Most probably (but not necessarily) they are the same instance.

If any of the dicts share the same instance, then something went wrong while preparing the `posts` variable. Just make sure all list items are distinct instances, because you are inserting (many) different documents!
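A minimal sketch of one way to guarantee distinct instances before inserting (assuming `posts` may contain repeated references):

```
import copy

# Deep-copy each element so every document passed to insert_many
# is its own object, even if the source list repeated a reference.
distinct_posts = [copy.deepcopy(p) for p in posts]
collection.insert_many(distinct_posts)
```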
[Here is the output](http://i.stack.imgur.com/SIZQQ.png); the records stored are the ones in the list.

```
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
db = client['post']

posts = [{'#AUTHID': 'fffafe151f07a30a0ede2038a897b680',
  'Records': [
    {'DATE': '07/22/09 05:54 PM',
     'STATUS': 'Is flying back friday night in time to move the rest of his stuff then go to work the next morning... great.'},
    {'DATE': '07/19/09 04:39 PM', 'STATUS': 'is stealing his net by the lake'}]},
 {'#AUTHID': 'fffafe151f07a30a0ede2038a897b680',
  'Records': [
    {'DATE': '07/22/09 05:54 PM',
     'STATUS': 'Is flying back friday night in time to move the rest of his stuff then go to work the next morning... great.'},
    {'DATE': '07/19/09 04:39 PM', 'STATUS': 'is stealing his net by the lake'}]}]

collection = db.posti.insert_many(p for p in posts )
```
17,664
18,388,050
I have a large amount of data of this type: ``` array(14) { ["ap_id"]=> string(5) "22755" ["user_id"]=> string(4) "8872" ["exam_type"]=> string(32) "PV Technical Sales Certification" ["cert_no"]=> string(12) "PVTS081112-2" ["explevel"]=> string(1) "0" ["public_state"]=> string(2) "NY" ["public_zip"]=> string(5) "11790" ["email"]=> string(19) "ivorabey@zeroeh.com" ["full_name"]=> string(15) "Ivor Abeysekera" ["org_name"]=> string(21) "Zero Energy Homes LLC" ["org_website"]=> string(14) "www.zeroeh.com" ["city"]=> string(11) "Stony Brook" ["state"]=> string(2) "NY" ["zip"]=> string(5) "11790" } ``` I wrote a for loop in python which reads through the file, creating a dictionary for each array and storing elements like thus: ``` a = 0 data = [{}] with open( "mess.txt" ) as messy: lines = messy.readlines() for i in range( 1, len(lines) ): line = lines[i] if "public_state" in line: data[a]['state'] = lines[i + 1] elif "public_zip" in line: data[a]['zip'] = lines[i + 1] elif "email" in line: data[a]['email'] = lines[i + 1] elif "full_name" in line: data[a]['contact'] = lines[i + 1] elif "org_name" in line: data[a]['name'] = lines[i + 1] elif "org_website" in line: data[a]['website'] = lines[i + 1] elif "city" in line: data[a]['city'] = lines[i + 1] elif "}" in line: a += 1 data.append({}) ``` I know my code is terrible, but I am fairly new to Python. As you can see, the bulk of my project is complete. What's left is to strip away the code tags from the actual data. For example, I need `string(15) "Ivor Abeysekera"` to become `Ivor Abeysekera"`. After some research, I considered `.lstrip()`, but since the preceding text is always different.. I got stuck. Does anyone have a clever way of solving this problem? Cheers! Edit: I am using Python 2.7 on Windows 7.
2013/08/22
[ "https://Stackoverflow.com/questions/18388050", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2646265/" ]
Depending on how the code tags are formatted, you could split the line on `"` then pick out the second element.

```
s = 'string(15) "Ivor Abeysekera"'
temp = s.split('"')[1]  # temp is 'Ivor Abeysekera'
```

Note that this will get rid of the trailing `"`; if you need it, you can always just add it back on. In your example this would look like:

```
data[a]['state'] = lines[i + 1].split('"')[1]  # etc. for each call of lines[i + 1]
```

Because you are calling it so much (regardless of what answer you use) you should probably turn it into a function:

```
def prepare_data(line_to_fix):
    return line_to_fix.split('"')[1]

# later on...
data[a]['state'] = prepare_data(lines[i + 1])
```

This will give you some more flexibility.
**BAD SOLUTION** (based on the current question), but to answer your question just use

```
info_string = lines[i + 1]
value_str = info_string.split(" ",1)[-1].strip(" \"")
```

**BETTER SOLUTION**

Do you have access to the PHP generating that dump? If you do, just use `echo json_encode($data);` instead of `var_dump`. If the output is JSON instead, it will look like

```
{"variable":"value","variable2":"value2"}
```

You can then read it in like

```
import json
import requests

json_str = requests.get("http://url.com/json_dump").text  # or however you get the original text
data = json.loads(json_str)
print data
```
17,665
64,154,088
I am a Python coder and got stuck on the question "How to check the input in a textbox of tkinter in Python". The problem is that this code gives no output when run:

```
def start(event):
    a = main.get(1.0,END)
    if a == 'ver':
        print('.....')

main = Text(root)
main.pack()

root.bind('<Return>',start)
```
2020/10/01
[ "https://Stackoverflow.com/questions/64154088", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14225987/" ]
We can do this with the `get()` method:

```
from tkinter import *

a=Tk()

def check():
    print(x.get('1.0',END)[:-1])

x=Text(a)
b=Button(a,text='Check',command=check)
x.pack()
b.pack()
a.mainloop()
```
You should write something like

```
def start(event):
    t = var.get()
    if t == 'something':
        pass

var = StringVar()
e = Entry(master, textvariable=var)
e.pack()
e.bind('<Return>', start)  # fixed: bind takes the event name and the callback
```
17,671
52,113,890
I needed to extend User model to add things like address, score, more user\_types, etc. There are 2 possible ways to achieve that, extend the User model or create a new model that will be connected with the target User with `OneToOneField`. I decided to go with a new model because It seemed easier and It is recommended in [this](https://stackoverflow.com/questions/44109/extending-the-user-model-with-custom-fields-in-django) stack overflow question. But now I cannot create Serializer without nested profile field which is moreover undocumented because default rest\_framwork documentation generator cannot generate documentation for nested serializers. My `UserSerializer` looks like this: ``` class UserSerializer(serializers.ModelSerializer): # This creates a nested profile field profile = ProfileSerializer(required=True) def create(self, validated_data): profile_data = validated_data.pop('profile') user = User.objects.create_user(**validate_data) profile, created = Profile.objects.upodate_or_creeate(user=user, defaults=profile_data) return user class Meta: model = User fields = ('id', 'username', 'email', 'password', 'buckelists', 'profile') read_only_fields = ('id',) extra_kwargs = {'password':{'write_only': True}} ``` This Serializer takes following JSON format: ``` { 'name': ..., 'email': ..., 'password': ..., 'profile': { 'address': ..., 'score': ..., 'user_type': ..., 'achievements': ..., 'country': ..., 'trusted': ..., } ``` This looks weird and documentation generated with `rest_framework.documentation.include_docs_urls` shows just following: ``` { 'username': ..., 'email': ..., 'password': ..., 'field': ..., } ``` So it's not clear what should be included in the profile field. I'd like to create Serializer that would accepted following format: ``` { 'name': ..., 'email': ..., 'password': ..., 'address': ..., 'score': ..., 'user_type': ..., 'achievements': ..., 'country': ..., 'trusted': ..., } ``` Is it possible without creating custom Serializer from scratch? Or at least is it possible to generate documentation for nested serializers. PS: I use python3.6 and Django 2.1 EDIT: Here is a relevant part of my models.py: ``` class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) trusted = models.BooleanField(default=False) address = models.CharField(max_length=100, default="") COUNTRIES = ( ('CZ', 'Czech Republic'), ('EN', 'England'), ) country = models.CharField(max_length=2, choices=COUNTRIES, default="CZ") score = models.BigIntegerField(default=0) achievements = models.ManyToManyField(Achievement, blank=True) USER_TYPES = ( ('N', 'Normal'), ('C', 'Contributor'), ('A', 'Admin'), ) user_type = models.CharField(max_length=1, choices=USER_TYPES, default='N') @receiver(post_save, sender=settings.AUTH_USER_MODEL) def create_auth_token(sender, instance=None, created=False, **kwargs): if created: Token.objects.create(user=instance) @receiver(post_save, sender=User) def create_user_profile(sender, instance, created=False, **kwargs): if created: profile, created = Profile.objects.get_or_create(user=instance) profile.save() ``` EDIT: Mohammad Ali's answers solves this for GET, but I'd also like to use POST, UPDATE and PATCH methods. I have found that I have to use `source` parameter but this is relative to serializer I don't know how to reference profile wihtout having profile field.
2018/08/31
[ "https://Stackoverflow.com/questions/52113890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4981456/" ]
Take it easy. You can create the Profile object right in the create function.

```
class UserSerializer(serializers.ModelSerializer):
    trusted = serializers.BooleanField()
    address = serializers.CharField()

    class Meta:
        model = User
        fields = ('username', 'email', 'password', 'trusted', 'address',)

    def create(self, validated_data):
        user = User.objects.create(username=validated_data['username'], email=validated_data['email'])
        user.set_password(validated_data['password'])
        user.save()
        profile = Profile(user=user,
                          trusted=validated_data['trusted'],
                          address=validated_data['address'])  # closing parenthesis added
        profile.save()
        return validated_data
```

It is just a brief implementation of your scenario; you can fill in the rest.
Please read the documentation for serializers: [Django REST framework](http://www.django-rest-framework.org/api-guide/relations/), and use `related_name`:

```
user = models.OneToOneField(User, on_delete=models.CASCADE, related_name="user_profile")  # models

class ProfileSerializer(serializers.ModelSerializer):
    user = serializers.PrimaryKeyRelatedField(queryset=User.objects.all(), required=False)

    class Meta:
        model = Profile
        fields = '__all__'


class UserSerializer(serializers.ModelSerializer):
    user_profile = ProfileSerializer(required=True)

    class Meta:
        model = User
        fields = '__all__'
```
17,674
27,701,573
I got this error message: *{DetachedInstanceError} Parent instance is not bound to a Session; lazy load operation of attribute 'owner' cannot proceed*

My Python code:

```
car_obj = my_query_function()  # get a Car object
owner_name = car_obj.owner.name  # this line raises the error!
```

My model:

```
class Person(EntityClass):
    attributes = ['id', 'name']
    name = sa.Column(sa.String(250))

class Car(EntityClass):
    attributes = ['id', 'brand', 'color', 'purchase_time', 'owner_id']
    brand = sa.Column(sa.String(250))
    color = sa.Column(sa.String(250))
    purchase_time = sa.Column(sa.String(250))
    owner_id = sa.Column(DBKeyType, sa.ForeignKey(Person.__tablename__ + '.id'), nullable=False)
    owner = relationship('Person', cascade='all, delete-orphan',
                         backref=backref('car', cascade='delete'),
                         single_parent=True)
```

Does this have something to do with the lazy-loading relationship setting between Car and Person (many-to-one association)? How can I fix the relationship? Thanks in advance.
2014/12/30
[ "https://Stackoverflow.com/questions/27701573", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3778914/" ]
I traced the docs and made it work by adding `lazy='subquery'` ``` owner = relationship('Person', lazy='subquery', cascade='all, delete-orphan', backref=backref('car', cascade='delete'), single_parent=True) ``` <http://docs.sqlalchemy.org/en/rel_0_9/orm/join_conditions.html>
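If you prefer not to change the relationship globally, the same eager loading can be requested per query with `joinedload` (a sketch; `Car.owner` matches the models in the question):

```
from sqlalchemy.orm import joinedload

cars = session.query(Car).options(joinedload(Car.owner)).all()
session.close()
print(cars[0].owner.name)  # owner was loaded eagerly, so no lazy load after close
```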
Made it work by adding `joinedload_all()` in `session.query(Car).options()`, for example:

```
cars = session.query(Car).options(joinedload_all('*')).all()
session.close()
for car in cars:
    "do your stuff"
```

Good luck.
17,675
29,956,181
I am a newbie in this field, and I am trying to solve a problem (not really sure if it is possible actually) where I want to print on the display some information plus some input from the user. The following works fine: ``` >>> print (" Hello " + input("tellmeyourname: ")) tellmeyourname: dfsdf Hello dfsdf ``` However if I want to assign user's input to a variable, I can't: ``` >>> print (" Hello ", name = input("tellmeyourname: ")) tellmeyourname: mike Traceback (most recent call last): File "<pyshell#47>", line 1, in <module> print (" Hello ", name = input("tellmeyourname: ")) TypeError: 'name' is an invalid keyword argument for this function ``` I have researched inside here and other python documentation, tried with `%s` etc. to solve, without result. I don't want to use it in two lines (first assigning the variable `name= input("tellmeyourname:")` and then printing). Is this possible?
2015/04/29
[ "https://Stackoverflow.com/questions/29956181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4848506/" ]
Starting from Python 3.8, this will become possible using an [assignment expression](https://www.python.org/dev/peps/pep-0572/): ``` print("Your name is: " + (name := input("Tell me your name: "))) print("Your name is still: " + name) ``` Though 'possible' is not the same as 'advisable'... --- But in Python <3.8: you can't. Instead, separate your code into two statements: ``` name = input("Tell me your name: ") print("Your name is: " + name) ``` If you often find yourself wanting to use two lines like this, you could make it into a function: ``` def input_and_print(question): s = input("{} ".format(question)) print("You entered: {}".format(s)) input_and_print("What is your name?") ``` Additionally you could have the function return the input `s`.
**No, this is not possible.** Well, except for something like

```
x=input("tell me:");print("blah %s"%(x,));
```

but that's not really one line; it just looks like it.
17,676
34,300,908
I've been creating a webapp (just for learning purposes) using Python Django, and have no intention of deploying it. However, is there a way to let someone else try the web application? More precisely: is it possible to somehow test the webapp on another computer? I sent the source code (the whole folder) to another computer, installed a virtual environment, activated it, and tried to runserver. However, I always get `RuntimeError: maximum recursion depth exceeded in cmp`. Is there any other way around it?
2015/12/15
[ "https://Stackoverflow.com/questions/34300908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3799968/" ]
You can use ngrok -- <https://ngrok.com/> -- to create a public URL to your local server for testing, and then give that URL to people so they can try your webapp.
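For a Django dev server on the default port, the tunnel is one command (assuming the app runs on port 8000):

```
python manage.py runserver 8000
ngrok http 8000
```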
You can also use [Localtunnel](https://localtunnel.me) to easily share a web service on your local development without deploying the code in the server. Install the localtunnel ``` npm install -g localtunnel ``` Start a webserver on some local port (eg <http://localhost:8000>) and use the command line interface to request a tunnel to your local server ``` lt --port 8000 ``` You will receive a url, for example <https://xyz.localtunnel.me>, that you can share with anyone for as long as your local instance of lt remains active. Any requests will be routed to your local service at the specified port.
17,677
56,364,756
My log files have some multiline bytestring in them, like [2019-05-25 19:16:31] b'logstring\r\n\r\nmore log' After I try to extract the original multiline string, how do I convert that to a real string using Python 3? As a simplified example, after reading the log file and stripping the time, I end up with a variable that has the type str and has the b' prefix, as a string. ``` # note: b'' is inside the str (taken from log) tmp = "b'logstring\r\n\r\nmore log'" # convert here print(tmp) ``` I'm looking for a way to tell python that the content needs to get decoded. But str doesn't allow decoding. The result I'd like to see from the print command is ``` logstring more log ``` UPDATE: The "eval" function will produce this result, but this would execute the code, so it's not safe. ``` # note: b'' is inside the str (taken from log) tmp = "b'logstring\r\n\r\nmore log'" tmp = eval(tmp) print(tmp) ``` Is there a better way?
2019/05/29
[ "https://Stackoverflow.com/questions/56364756", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2335020/" ]
You can use regex: ``` import re tmp = "b'logstring\r\n\r\nmore log'" r = re.compile(r"b'(.+)'", re.DOTALL|re.MULTILINE) result = r.sub(r"\1", tmp) print(result) # logstring\r\n\r\nmore log ``` You could use this for the entire file or line by line but you may need to slightly change this code to meet your needs. **- Edit -** If you want to remove duplicate newlines (as your desired output shows) you can do it like this: ``` import re tmp = "b'logstring\r\n\r\nmore log'" binary_regex = re.compile(r"b'(.+)'", re.DOTALL|re.MULTILINE) newline_regex = re.compile(r"(\r\n)+", re.DOTALL|re.MULTILINE) # Make sure to do the compiles outside of any loops you have result = binary_regex.sub(r"\1", tmp) # Remove the b'' result = newline_regex.sub(r"\r\n", result) # Remove duplicate new lines print(result) ``` **Output:** ``` logstring more log ```
It seems that you can lock down the eval function so that it can't run functions and python builtins. You do this by passing a dictionary of allowed global and local functions. By mapping all builtins to None you can block the execution of regular python commands. With that in place, using eval to evaluate the string content is safe. Source [this article](http://lybniz2.sourceforge.net/safeeval.html) ``` # note: b'' is inside the str (taken from log) tmp = "b'logstring\r\n\r\nmore log'" tmp = eval(tmp, {'__builtins__': None}, {}) print(tmp) ``` And in the comments above @juanpa.arrivillaga offered another solution, that also solves the eval security problem: ``` import ast tmp = "b'logstring\r\n\r\nmore log'" tmp = ast.literal_eval(tmp) print(tmp) ```
17,678
6,467,407
I'm using Jython from within Java, so I have a Java setup similar to below:

```
String scriptname="com/blah/myscript.py"

PythonInterpreter interpreter = new PythonInterpreter(null, new PySystemState());
InputStream is = this.getClass().getClassLoader().getResourceAsStream(scriptname);
interpreter.execfile(is);
```

And this will (for instance) run the script below:

```
# myscript.py:
import sys
if __name__=="__main__":
    print "hello"
    print sys.argv
```

How do I pass in 'command line' arguments using this method? (I want to be able to write my Jython scripts so that I can also run them on the command line with 'python script arg1 arg2'.)
2011/06/24
[ "https://Stackoverflow.com/questions/6467407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/184456/" ]
I'm using Jython 2.5.2 and `runScript` didn't exist, so I had to replace it with `execfile`. Aside from that difference, I also needed to set `argv` in the state object before creating the `PythonInterpreter` object: ``` String scriptname = "myscript.py"; PySystemState state = new PySystemState(); state.argv.append (new PyString ("arg1")); state.argv.append (new PyString ("arg2")); PythonInterpreter interpreter = new PythonInterpreter(null, state); InputStream is = Tester.class.getClassLoader().getResourceAsStream(scriptname); interpreter.execfile (is); ``` The `argv` list in the state object initially has a length of 1, with an empty string in it, so the preceding code results in the output: ``` hello ['', 'arg1', 'arg2'] ``` If you need `argv[0]` to be the actual script name, you'd need to create the state like this: ``` PySystemState state = new PySystemState(); state.argv.clear (); state.argv.append (new PyString (scriptname)); state.argv.append (new PyString ("arg1")); state.argv.append (new PyString ("arg2")); ``` Then the output is: ``` hello ['myscript.py', 'arg1', 'arg2'] ```
For those people for whom the above solution does not work, try the one below. It works for me on Jython version 2.7.0.

```
String[] params = {"get_AD_accounts.py","-server", "http://xxxxx:8080","-verbose", "-logLevel", "CRITICAL"};
```

The above replicates the command below, i.e. each argument and its value is a separate element in the params array.

***jython get\_AD\_accounts.py -logLevel CRITICAL -server <http://xxxxxx:8080> -verbose***

```
PythonInterpreter.initialize(System.getProperties(), System.getProperties(), params);
PySystemState state = new PySystemState();
InputStream is = new FileInputStream("C:\\projectfolder\\get_AD_accounts.py");
PythonInterpreter interp = new PythonInterpreter(null, state);
interp.execfile(is);
```
17,679
16,640,624
I am outputting ``` parec -d "name" ``` You don't need to know this command, just know that as soon as you press enter, it outputs binary data representing audio. My goal is to read this with python in real time, ie start it and have it in a variable "data" I can read from with something like ``` data = p.stdout.read() ``` What I tried ``` p = subprocess.Popen(['parec','-d','"name"'],stdout=subprocess.PIPE,shell=True) while True: data = p.stdout.read() ``` But this results in no data being received. ``` parec -d "name" > result.raw ``` is readable with an audio-programme and contains exactly the necessary data. So what is the command for python?
2013/05/19
[ "https://Stackoverflow.com/questions/16640624", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2348735/" ]
There are multiple problems and they are not simple (unless the version of the ascensor script is outdated). The first issue is fairly simple, and illustrates the initial problem - some of the documentation doesn't match the code. In particular, the case doesn't match. For example, you have `childType: 'section'` (lowercase c) but you need `ChildType: 'section'` uppercase C. This is true of all the settings. After fixing this the fun begins! The various arrays you specified follow the documentation, but the ascensor script in the jsfiddle is attempting to parse strings. `"hello1 | hello2 |..."` versus `['hello1', 'hello2'...]` (similar issue for the AscensorMap). Perhaps the ascensor script reference refers to an outdated version, or the plug-in has changed and the docs haven't been updated.
The reverse of the current answer is now true. Using the latest version of Ascensor (1.8.0 (2014-02-23)), you have to specify the property names in lower case. e.g. change `ChildType: 'section'` to `childType: 'section'`. The examples all around the net are unfortunately using older versions.
17,680
57,361,849
I'm doing some dockerized code in Python (3.5) and flask (1.1.1) working against a CouchDB database (2.3.1) using the cloudant python extension (2.12.0) which seems to be the most up to date library to work against CouchDB. I'm trying to fetch and use a view from the database, but it is not working. I can fetch documents, and work with the database normally, but I can't use the view. I've added a print statement for the object that should hold the design document at the program start, and I see that the document shows as having no views (or anything at all) AND the CouchDB log shows NO requests for the design document being made. I also tried to both get the design document and use the view via curl using the same URL and username/password, and both actions work successfully. Here's sample code that fails: ```py from flask import Flask, render_template , request, g from cloudant.client import CouchDB from cloudant.view import View from cloudant.design_document import DesignDocument import requests application = Flask(__name__) application.config.from_pyfile("config.py") couch = CouchDB(application.config['COUCHDB_USER'], application.config['COUCHDB_PASSWORD'], url=application.config['COUCHDB_SERVER'], connect=True, auto_renew=True) database = couch[application.config['COUCHDB_DATABASE']] views = DesignDocument(database, '_design/vistas') print(views) print(views.list_views()) @application.route("/", methods=['GET', 'POST']) def index(): for pelicula in View(views,'titulos_peliculas'): titulos.append({ "id": pelicula['id'], "titulo": pelicula['key'] }) return render_template('menu.html',titulos=titulos) ``` In that code, the print of the design document (views) returns: ``` {'lists': {}, 'indexes': {}, 'views': {}, 'shows': {}, '_id': '_design/vistas'} ``` With empty views as show... And the CouchDB log only shows the login to the database and getting the DB info: ``` couchdb:5984 172.23.0.4 undefined POST /_session 200 ok 69 couchdb:5984 172.23.0.4 vmb_web HEAD //peliculas 200 ok 232 ``` No other queries at all. No errors in the app log either. Even when I call the routed use of the views: ``` [pid: 21|app: 0|req: 1/1] 172.23.0.1 () {52 vars in 1225 bytes} [Mon Aug 5 15:03:24 2019] POST / => generated 1148 bytes in 56 msecs (HTTP/1.1 200) 2 headers in 81 bytes (1 switches on core 0) ``` And, as I said, I can get, and use the document: ``` curl http://vmb_web:password@127.0.0.1:999/peliculas/_design/vistas ``` ``` {"_id":"_design/vistas","_rev":"1-e8108d41a6627ea61b9a89a637f574eb","language":"javascript","views":{"peliculas":{"map":"function(doc) { if (doc.schema == 'pelicula') { emit(doc.titulo, null); for(i=0;i<doc.titulos_alt.length;i++) { emit(doc.titulos_alt[i],null); } for(i=0;i<doc.directores.length;i++) { emit(doc.directores[i].nombre,null); } for(i=0;i<doc.actores.length;i++) { emit(doc.actores[i].nombre,null); } for(i=0;i<doc.escritores.length;i++) { emit(doc.escritores[i].nombre,null); } for(i=0;i<doc.etiquetas.length;i++) { emit(doc.etiquetas[i],null); } } }"},"titulos_peliculas":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, null); } }"},"archivos_peliculas":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, doc.archivo); } }"},"titulo_rev":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, doc._rev); } }"}}} ```
2019/08/05
[ "https://Stackoverflow.com/questions/57361849", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6879212/" ]
I'm answering my own question, in case someone stumbles upon this in the future. I got the answer from Esteban Laver in the GitHub repository for python-cloudant, and it is what @chrisinmtown mentions in a response above: I was failing to call fetch() on the design document before using it.

Another good suggestion was to use the get_view_result helper method of the database object, which takes care of fetching the design document and instantiating the View object from the selected view all at once.
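For anyone who wants to see what that change looks like, here is a minimal sketch against the code in the question (names as in the question; exact keyword arguments may vary by cloudant version):

```
views = DesignDocument(database, '_design/vistas')
views.fetch()  # actually retrieve the design document from the server
print(views.list_views())  # now shows the defined views

# Or skip the DesignDocument entirely and let the library fetch it:
result = database.get_view_result('_design/vistas', 'titulos_peliculas')
for row in result:
    print(row['id'], row['key'])
```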
I believe the code posted above creates a new DesignDocument object, and does not search for an existing DesignDocument. After creating that object, it looks like you need to call its fetch() method and **then** check its views property. HTH. p.s. promoting my comment to an answer, hope that's cool in SO land these days :)
17,681
6,539,472
I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1 . In my Linux Ubuntu machine, I have Python 3.1.2 but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame. ``` Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pygame Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named pygame ``` Python 2.6.5 imports it without fuss, however, ``` Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pygame >>> ``` Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2? Thanks for any pointers. (--,)
2011/06/30
[ "https://Stackoverflow.com/questions/6539472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/777225/" ]
I hate to re-open an old post, but I had the hardest time installing pygame with a version of Python that was not Ubuntu's default build. So I created this tutorial/how-to: [Install python3.1 and pygame1.9.1 in Ubuntu](https://sites.google.com/site/cslappe1/knowledge-base-and-how-to-s/installpython31andpygame191inubuntu)

I hope this helps the next unfortunate soul who tries this.
Just use the command below to install pygame for Python 3. I could install pygame correctly on Ubuntu 16.04 with Python 3.5.2:

```
pip3 install pygame
```
17,682
42,349,191
This is a typical use case for FEM/FVM equation systems, so is perhaps of broader interest. From a triangular mesh à la [![enter image description here](https://i.stack.imgur.com/RS6MJ.png)](https://i.stack.imgur.com/RS6MJ.png) I would like to create a `scipy.sparse.csr_matrix`. The matrix rows/columns represent values at the nodes of the mesh. The matrix has entries on the main diagonal and wherever two nodes are connected by an edge. Here's an MWE that first builds a node->edge->cells relationship and then builds the matrix: ``` import numpy import meshzoo from scipy import sparse nx = 1600 ny = 1000 verts, cells = meshzoo.rectangle(0.0, 1.61, 0.0, 1.0, nx, ny) n = len(verts) nds = cells.T nodes_edge_cells = numpy.stack([nds[[1, 2]], nds[[2, 0]],nds[[0, 1]]], axis=1) # assign values to each edge (per cell) alpha = numpy.random.rand(3, len(cells)) vals = numpy.array([ [alpha**2, -alpha], [-alpha, alpha**2], ]) # Build I, J, V entries for COO matrix I = [] J = [] V = [] # V.append(vals[0][0]) V.append(vals[0][1]) V.append(vals[1][0]) V.append(vals[1][1]) # I.append(nodes_edge_cells[0]) I.append(nodes_edge_cells[0]) I.append(nodes_edge_cells[1]) I.append(nodes_edge_cells[1]) # J.append(nodes_edge_cells[0]) J.append(nodes_edge_cells[1]) J.append(nodes_edge_cells[0]) J.append(nodes_edge_cells[1]) # Create suitable data for coo_matrix I = numpy.concatenate(I).flat J = numpy.concatenate(J).flat V = numpy.concatenate(V).flat matrix = sparse.coo_matrix((V, (I, J)), shape=(n, n)) matrix = matrix.tocsr() ``` With ``` python -m cProfile -o profile.prof main.py snakeviz profile.prof ``` one can create and view a profile of the above: [![enter image description here](https://i.stack.imgur.com/TmNtl.png)](https://i.stack.imgur.com/TmNtl.png) The method `tocsr()` takes the lion share of the runtime here, but this is also true when building `alpha` is more complex. Consequently, I'm looking for ways to speed this up. What I've already found: * Due to the structure of the data, the values on the diagonal of the matrix can be summed up in advance, i.e., ``` V.append(vals[0, 0, 0] + vals[1, 1, 2]) I.append(nodes_edge_cells[0, 0]) # == nodes_edge_cells[1, 2] J.append(nodes_edge_cells[0, 0]) # == nodes_edge_cells[1, 2] ``` This makes `I`, `J`, `V` shorter and thus speeds up `tocsr`. * Right now, edges are "per cell". I could identify equal edges with each other using `numpy.unique`, effectively saving about half of `I`, `J`, `V`. However, I found that this too takes some time. (Not surprising.) One other thought that I had was that that I could replace the diagonal `V`, `I`, `J` by a simple `numpy.add.at` if there was a `csr_matrix`-like data structure where the main diagonal is kept separately. I know that this exists in some other software packages, but couldn't find it in scipy. Correct? Perhaps there's a sensible way to construct CSR directly?
2017/02/20
[ "https://Stackoverflow.com/questions/42349191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/353337/" ]
I would try creating the csr structure directly, especially if you are resorting to `np.unique` since this gives you sorted keys, which is half the job done. I'm assuming you are at the point where you have `i, j` sorted lexicographically and overlapping `v` summed using `np.add.at` on the optional `inverse` output of `np.unique`. Then `v` and `j` are already in csr format. All that's left to do is creating the `indptr` which you simply get by `np.searchsorted(i, np.arange(M+1))` where `M` is the column length. You can pass these directly to the `sparse.csr_matrix` constructor. Ok, let code speak: ``` import numpy as np from scipy import sparse from timeit import timeit def tocsr(I, J, E, N): n = len(I) K = np.empty((n,), dtype=np.int64) K.view(np.int32).reshape(n, 2).T[...] = J, I S = np.argsort(K) KS = K[S] steps = np.flatnonzero(np.r_[1, np.diff(KS)]) ED = np.add.reduceat(E[S], steps) JD, ID = KS[steps].view(np.int32).reshape(-1, 2).T ID = np.searchsorted(ID, np.arange(N+1)) return sparse.csr_matrix((ED, np.array(JD, dtype=int), ID), (N, N)) def viacoo(I, J, E, N): return sparse.coo_matrix((E, (I, J)), (N, N)).tocsr() #testing and timing # correctness N = 1000 A = np.random.random((N, N)) < 0.001 I, J = np.where(A) E = np.random.random((2, len(I))) D = np.zeros((2,) + A.shape) D[:, I, J] = E D2 = tocsr(np.r_[I, I], np.r_[J, J], E.ravel(), N).A print('correct:', np.allclose(D.sum(axis=0), D2)) # speed N = 100000 K = 10 I, J = np.random.randint(0, N, (2, K*N)) E = np.random.random((2 * len(I),)) I, J, E = np.r_[I, I, J, J], np.r_[J, J, I, I], np.r_[E, E] print('N:', N, ' -- nnz (with duplicates):', len(E)) print('direct: ', timeit('f(a,b,c,d)', number=10, globals={'f': tocsr, 'a': I, 'b': J, 'c': E, 'd': N}), 'secs for 10 iterations') print('via coo:', timeit('f(a,b,c,d)', number=10, globals={'f': viacoo, 'a': I, 'b': J, 'c': E, 'd': N}), 'secs for 10 iterations') ``` Prints: ``` correct: True N: 100000 -- nnz (with duplicates): 4000000 direct: 7.702431229001377 secs for 10 iterations via coo: 41.813509466010146 secs for 10 iterations ``` Speedup: 5x
So, in the end this turned out to be the difference between COO's and CSR's `sum_duplicates` (just like @hpaulj suspected). Thanks to the efforts of everyone involved here (particularly @paul-panzer), [a PR](https://github.com/scipy/scipy/pull/7078) is underway to give `tocsr` a tremendous speedup. SciPy's `tocsr` does a `lexsort` on `(I, J)`, so it helps organizing the indices in such a way that `(I, J)` will come out fairly sorted already. For for `nx=4`, `ny=2` in the above example, `I` and `J` are ``` [1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7] [1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7 1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7] ``` First sorting each row of `cells`, then the rows by the first column like ``` cells = numpy.sort(cells, axis=1) cells = cells[cells[:, 0].argsort()] ``` produces ``` [1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6] [1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6 1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6] ``` For the number in the original post, sorting cuts down the runtime from about 40 seconds to 8 seconds. Perhaps an even better ordering can be achieved if the nodes are numbered more appropriately in the first place. I'm thinking of [Cuthill-McKee](https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee_algorithm) and [friends](https://en.wikipedia.org/wiki/Minimum_degree_algorithm).
17,692
69,276,976
I've tried to way I was instructed and moved the code in csv I was given into the same folder as my Jupyter Notebook is located. It still isn't reading it. I'm also trying to convert it into a dataframe and get it to 'describe'. I'll post the code and the errors below. Please help! Thank you in advance! ``` import pandas as pd fish = pd.read_csv('c:\\Users\\M\anaconda3\\Scripts\\Fish') fish2 = pd.DataFrame(fish) fish2.to_csv('fishdata.csv') fish2.describe() ``` ``` OSError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_11624/2289113011.py in <module> ----> 1 fish = pd.read_csv('c:\\Users\\M\anaconda3\\Scripts\\Fish') ~\anaconda3\lib\site-packages\pandas\util\_decorators.py in wrapper(*args, **kwargs) 309 stacklevel=stacklevel, 310 ) --> 311 return func(*args, **kwargs) 312 313 return wrapper ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options) 584 kwds.update(kwds_defaults) 585 --> 586 return _read(filepath_or_buffer, kwds) 587 588 ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py in _read(filepath_or_buffer, kwds) 480 481 # Create the parser. --> 482 parser = TextFileReader(filepath_or_buffer, **kwds) 483 484 if chunksize or iterator: ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py in __init__(self, f, engine, **kwds) 809 self.options["has_index_names"] = kwds["has_index_names"] 810 --> 811 self._engine = self._make_engine(self.engine) 812 813 def close(self): ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py in _make_engine(self, engine) 1038 ) 1039 # error: Too many arguments for "ParserBase" -> 1040 return mapping[engine](self.f, **self.options) # type: ignore[call-arg] 1041 1042 def _failover_to_python(self): ~\anaconda3\lib\site-packages\pandas\io\parsers\c_parser_wrapper.py in __init__(self, src, **kwds) 49 50 # open handles ---> 51 self._open_handles(src, kwds) 52 assert self.handles is not None 53 ~\anaconda3\lib\site-packages\pandas\io\parsers\base_parser.py in _open_handles(self, src, kwds) 220 Let the readers open IOHandles after they are done with their potential raises. 221 """ --> 222 self.handles = get_handle( 223 src, 224 "r", ~\anaconda3\lib\site-packages\pandas\io\common.py in get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text, errors, storage_options) 699 if ioargs.encoding and "b" not in ioargs.mode: 700 # Encoding --> 701 handle = open( 702 handle, 703 ioargs.mode, OSError: [Errno 22] Invalid argument: 'c:\\Users\\M\x07naconda3\\Scripts\\Fish' ```
2021/09/22
[ "https://Stackoverflow.com/questions/69276976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Deno does not currently support "classic" workers. 1. From [Worker() - Web APIs | MDN](https://developer.mozilla.org/en-US/docs/Web/API/Worker/Worker): > > `type`: A [`DOMString`](https://developer.mozilla.org/en-US/docs/Web/API/DOMString) specifying the type of worker to create. The value can be `classic` or `module`. If not specified, the default used is `classic`. > > > 2. From [Workers | Manual | Deno](https://deno.land/manual/runtime/workers): > > Currently Deno supports only `module` type workers; thus it's essential to pass the `type: "module"` option when creating a new worker. > > > For your use case you might be able to use a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs). e.g.: ``` new Worker( `data:text/javascript;base64,${btoa( `console.log("hello world"); self.close();` )}`, { type: "module" } ); ```
The information provided in [mfulton26's answer](https://stackoverflow.com/a/69292184/438273) is right, but you don't need a data URL: you simply need to add `{ type: "module" }` to your worker instantiation options. Deno even supports TypeScript as the source for your worker: `blob-worker.ts`: ```ts const workerModuleSource = ` const version: string = Deno.version.deno; console.log(\`Hello from Deno v\${version}\`); self.close(); `; const blob = new Blob( [workerModuleSource], {type: 'application/typescript'}, ); const objUrl = URL.createObjectURL(blob); const worker = new Worker(objUrl, { deno: true, type: 'module', }); URL.revokeObjectURL(objUrl); ``` ``` $ deno run --unstable blob-worker.ts Hello from Deno v1.14.1 ```
17,693
56,452,581
I have almost the same problem as this one: [How to make a continuous alphabetic list python (from a-z then from aa, ab, ac etc)](https://stackoverflow.com/questions/29351492/how-to-make-a-continuous-alphabetic-list-python-from-a-z-then-from-aa-ab-ac-e) But I am building a GUI list like Excel, where the vertical header should show letters ...aa,ab,ac....dg,dh,di... To do that, I have to assign every place in my list to a certain letter, which is probably impossible with yield. Let me put it this way: I have 100 cells and I want to name them all differently. Cell 1 should be "A", Cell 2 should be "B".... Cell 27 should be "AA" and so on. You probably know this from Excel. I could do it manually, but that would take a lot of time. I tried to play a little with the code below, but without success. I know that there should be a loop somewhere, but I have no idea where. ``` from string import ascii_lowercase import itertools def iter_all_strings(): for size in itertools.count(1): for s in itertools.product(ascii_lowercase, repeat=size): yield "".join(s) for s in iter_all_strings(): print(s) if s == 'bb': break ``` The loop "for s in iter_all_strings():" counts until the break. I would say this is where the loop over my cells should go; there's just no place for it.
2019/06/04
[ "https://Stackoverflow.com/questions/56452581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11329096/" ]
Another alternative, if you want to dive deeper (create up to ~18,000 columns): ``` from string import ascii_lowercase letters = list(ascii_lowercase) num_cols = 100 excel_cols = [] for i in range(0, num_cols - 1): n = i//26 m = n//26 i-=n*26 n-=m*26 col = letters[m-1]+letters[n-1]+letters[i] if m>0 else letters[n-1]+letters[i] if n>0 else letters[i] excel_cols.append(col) ```
Try this code. It works by pretending that all Excel column names have two characters, but the first "character" may be the null string. I get the `product` to accept the null string as a "character" by using a list of characters rather than a string. ``` from string import ascii_lowercase import itertools first_char = [''] + list(ascii_lowercase) def iter_excel_columns(): for char1, char2 in itertools.product(first_char, ascii_lowercase): yield char1 + char2 for s in iter_excel_columns(): print(s) if s == 'bb': break ``` This gives the printout that you apparently want: ``` a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax ay az ba bb ```
17,694
64,834,395
I am on Linux. Node.js worked with no problem until I upgraded my system (sudo apt upgrade). Now when I try to install Node.js, it says python-minimal is not installed. I found out that the cause is the update of Python from 2.7.17 to 2.7.18, after which python-minimal is no longer required, but now I can't install Node.js because it asks for python-minimal. Can anyone help? The problem occurs when I want to install Node.js 15 or 14; when I install Node.js 12 or lower there is no problem. This is what it says after I run (sudo apt install nodejs): ``` sudo apt upgrade nodejs [sudo] password for julian: Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libnode72 : Conflicts: nodejs-legacy nodejs : Depends: python-minimal but it is not installable E: Broken packages ```
2020/11/14
[ "https://Stackoverflow.com/questions/64834395", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14535629/" ]
jq does not have an `eval` function for evaluating arbitrary jq expressions, but it does provide functions that can be used to achieve much the same effect, the key idea being that certain JSON values can be used to specify query operations. In your case, you would have to translate the jq query into a suitable jq operation, such as: ``` jq --argjson a '["Header","Tenant"]' ' getpath(paths|select( .[- ($a|length) :]== $a)) ' test.json ``` Extending jq's JSON-based query language ---------------------------------------- More interestingly, you could write your own `eval`, e.g. ``` jq --argjson a '[[], "Header","Tenant"]' ' def eval($expr): if $expr == [] then . else $expr[0] as $op | if $op == [] then .[] | eval($expr[1:]) else getpath([$op]) | eval($expr[1:]) end end; eval($a) ' test.json ``` With eval.jq as a module ------------------------ If the above def of `eval` were put in a file, say ~/jq/eval.jq, then you could simply write: ``` jq -L ~/jq --argjson a '[[], "Header","Tenant"]' ' include "eval"; eval($a)' test.json ``` Or you could specify the search path in the jq program: ``` jq --argjson a '[[], "Header","Tenant"]' ' include "eval" { "search": "~/jq" }; eval($a)' input.json ``` Or you could use `import` ...
**TLDR;** The following code does the job: ``` $ a=".[].Header.Tenant"; jq -f <(echo "[$a]") test.json [ "Tenant1", "Tenant2" ] ``` One as well can add/modify the filter in the jq call, if needed: ``` $ a=".[].Header.Tenant"; jq -f <(echo "[$a]|length") test.json 2 ``` **Longer explanation** My ultimate goal was to figure out how I can define the lowest common denominator jq filter in a variable and use it when calling jq, plus add additional parameters if necessary. If you have a really complex jq filter spanning multiple lines that you call frequently, you probably want to template it somehow and use that template when calling jq. While *peak* demonstrated how it can be done, I think it is overengineering the simple task. However, using process substitution combined with the jq's `-f` option to read a filter from the file does solve my problem.
17,696
61,081,016
After following the official RTD installation tutorial for Ubuntu 18 I managed to do everything (even webhooks) up to the point of building, for a project called **test**, where I get the following error: > > python3.6 -mvirtualenv /home/myuser/readthedocs.org/user\_builds/test/envs/latest > > > Followed by: > > There must be only one argument: DEST\_DIR (you gave /home/gcsuser/readthedocs.org/user\_builds/test/envs/latest) > Usage: virtualenv.py [OPTIONS] DEST\_DIR > > > [![The output when building](https://i.stack.imgur.com/wSLZi.jpg)](https://i.stack.imgur.com/wSLZi.jpg) I haven't changed the DEST\_DIR (nor do I know where to change it). The debug.log states exactly what's in the image. Here is the full log after clicking the "Build" button: [`Full log in here`](https://pastebin.com/QQz6Rn47) If I run the command "python3.6 -mvirtualenv /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest" on the command line, the result is the following: > > (venv) myuser@lxgcsrtd01:~/readthedocs.org$ python3.6 -mvirtualenv /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest > > > Using real prefix '/usr' > > > Path not in prefix '/home/myuser/readthedocs.org/venv/include/python3.6m' '/usr' > > > New python executable in /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest/bin/python3.6 > > > Not overwriting existing python scrip /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest/bin/python (you must use /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest/bin/python3.6) > > > Installing setuptools, pip, wheel... done. (venv) > > > myuser@lxgcsrtd01:~/readthedocs.org$ > > > It looks like, as the error states, there are two arguments, yet I can't find any space between them, and if I copy-paste the command to the shell it doesn't throw any errors.
2020/04/07
[ "https://Stackoverflow.com/questions/61081016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2236386/" ]
Your running job #3 has only 4 tasks (screenshot #2), thats why you see 4 executors. Spark doesn't need 6 executors to complete 4 tasks. Each executor (screenshot #3) has 5 cores and what looks like 14GB memory ((14GB -300MB) \* 0.6 ~ 7.8GB). See [Spark memory management](https://spark.apache.org/docs/latest/configuration.html#memory-management). Each executor executes a single task, which means it uses only one core out of 5 allocated, hence the low CPU usage. (In Spark, an executor with X cores can process X tasks in parallel. It can NOT process one task on X cores.)
You have only 2 nodes with 16 vCores each, in total of 32 vCores, which you can very well see in your Yarn UI. Now when you are submitting your job you are requesting Yarn to create 6 containers(executors) with 5 vCores each but then on a single node you can have at max of 2 executors considering 5 cores requirement (10 vCores used up to create 2 executors on a single worker node from total of 16vCores available). You will end up getting max of 4 executors anyways. One executor can't span multiple worker nodes.
17,697
21,579,459
I am just starting with Python, coming from a PHP background. I was wondering if there is a more elegant way of assigning a variable the result of an "if ... in" statement. I currently do ``` is_holiday = False if now_date in holidays: is_holiday = True ``` To me this looks like an unnecessary number of lines of code. Or is this the absolute minimum, and does it meet Python style guides?
2014/02/05
[ "https://Stackoverflow.com/questions/21579459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/912588/" ]
``` is_holiday = now_date in holidays ```
Use [Conditional expressions](http://docs.python.org/2/reference/expressions.html#conditional-expressions): `is_holiday = True if now_date in holidays else False` or just `is_holiday = now_date in holidays`.
17,698
17,502,704
I am trying to use the tempfile module (<http://docs.python.org/2.7/library/tempfile.html>). I am looking for a temporary file that I could open several times to get several streams reading it. ``` tmp = ... stream1 = # get a stream for the temp file stream2 = # get another stream for the temp file ``` I have tried several functions (TemporaryFile, NamedTemporaryFile, SpooledTemporaryFile), using the fileno method and so on, but I could not achieve what I am looking for. Any ideas? Should I just write my own class? Thanks UPDATE ------ I get an error trying to open the file with its name... ``` In [2]: t = tempfile.NamedTemporaryFile() In [3]: t.write('abcdef'*1000000) In [4]: t.name Out[4]: 'c:\\users\\mike\\appdata\\local\\temp\\tmpczggbt' In [5]: f = open(t.name) --------------------------------------------------------------------------- IOError Traceback (most recent call last) <ipython-input-6-03b9332531d2> in <module>() ----> 1 f = open(t.name) IOError: [Errno 13] Permission denied: 'c:\\users\\mike\\appdata\\local\\temp\\tmpczggbt' ```
2013/07/06
[ "https://Stackoverflow.com/questions/17502704", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1232891/" ]
File objects (be they temporary or otherwise) cannot be read multiple times without re-positioning the file position back to the start. Your options are: * To reopen the file multiple times, creating multiple file objects for the same file. * To rewind the file object before each read. To reopen the file, use a `NamedTemporaryFile` and use a regular `open()` call to re-open the same filename several times. You probably will want to pass `delete=False` to the constructor, especially on Windows, to be able to do this. To rewind, call `.seek(0)` on the file object.
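For illustration, a minimal sketch of both options (the contents are placeholders; `delete=False` matters on Windows, where an open `NamedTemporaryFile` cannot be opened a second time): ``` import tempfile # Option 1: reopen the same file by name as often as needed tmp = tempfile.NamedTemporaryFile(delete=False) tmp.write(b'abcdef') tmp.flush() stream1 = open(tmp.name, 'rb') stream2 = open(tmp.name, 'rb') # Option 2: a single file object, rewound before each pass tmp2 = tempfile.TemporaryFile() tmp2.write(b'abcdef') tmp2.seek(0) first_pass = tmp2.read() tmp2.seek(0) second_pass = tmp2.read() ``` With `delete=False` the file is not cleaned up automatically; remove it yourself (e.g. `os.remove(tmp.name)`) when you are done.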
You could use [`tempfile.mkstemp()`](http://docs.python.org/2.7/library/tempfile.html#tempfile.mkstemp). From the documentation: > > Creates a temporary file in the most secure manner possible. There are no race conditions in the file’s creation, assuming that the platform properly implements the os.O\_EXCL flag for os.open(). The file is readable and writable only by the creating user ID. If the platform uses permission bits to indicate whether a file is executable, the file is executable by no one. The file descriptor is not inherited by child processes. > > > Unlike TemporaryFile(), the user of mkstemp() is responsible for deleting the temporary file when done with it. > > > You can then use the `open()` builtin function to create and open that file several times. Remember to delete the file when you are done, as this is not done automatically.
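As a minimal sketch of that approach (the path and contents are made up for the example): ``` import os import tempfile fd, path = tempfile.mkstemp() try: with os.fdopen(fd, 'w') as f: f.write('some data') # the file can now be opened any number of times stream1 = open(path) stream2 = open(path) stream1.close() stream2.close() finally: os.remove(path) # mkstemp leaves cleanup to you ```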
17,699
3,186,526
Debian recently changed the default version of Python from 2.5 to 2.6, but I need 2.5. How can I configure Apache and/or my WSGI script to make it use Python 2.5 instead of the default Python?
2010/07/06
[ "https://Stackoverflow.com/questions/3186526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
``` <input type="submit" onclick="this.disabled = true" value="Save"/> ``` or ref [this](https://stackoverflow.com/questions/2545641/how-to-submit-form-only-once-after-multiple-clicking-on-submit)
Using **jQuery**, add onClick handler that returns false: ``` <input type="submit" value="Submit" onClick="$(this).click(function() {return false;});"/> ```
17,700
2,335,762
I want to parse a CSS file and multiply each pixel value by (2/3). I was wondering if this is possible with sed? I know this is incorrect syntax, but I think it illustrates what I want to achieve: ``` sed -e "s|\([0-9]*\)px|int((\1 * 2)/3)|g" file.css ``` So basically I want to take \1, multiply it by (2/3) and cast it to an int. Or maybe it's more feasible with awk? I suppose I could write a Python script, but I would like to know if it can be done by quicker means. Thanks
2010/02/25
[ "https://Stackoverflow.com/questions/2335762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/281403/" ]
use awk ``` $ cat file foo: 3px; bar: 6px $ awk '{for(i=1;i<=NF;i++){if($i~/^[0-9]+px/){o=$i;sub(/^[0-9]+/,"",o);$i=($i+0)*(2/3)o}}}1' file foo: 2px; bar: 4px ```
You can use perl to do it like this: ``` echo -e "100px;\n20px;" | perl -pe 's{ (\d*) (?=px) }{ $1*(2/3) }xe' ```
17,710
70,581,125
I am supposed to write a program on Sololearn, using Python, that takes two integers as input and outputs their sum, but I don't seem to get what they want me to do.
2022/01/04
[ "https://Stackoverflow.com/questions/70581125", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17833033/" ]
``` num1 = int(input('Enter your first number: ')) num2 = int(input('Enter your second number: ')) print('Your sum is', num1 + num2) ``` I would suggest doing more research on the `input` function in python. Also, in the future, please provide the code that you have already written so that it is easier to answer your question.
Try with ```py a = int(input("A: ")) # Convert input string to int b = int(input("B: ")) print("A + B =", a+b) ```
17,715
12,960,522
I am trying to create an enumeration in python. I have seen several solutions (the second answer [here](https://stackoverflow.com/questions/36932/whats-the-best-way-to-implement-an-enum-in-python) by @alec thomas intrigued me most), but I would like to make the enumeration immutable. I found a [python recipe](http://code.activestate.com/recipes/413486-first-class-enums-in-python/) that is immutable, but I want to have a dict-like key/value association. I was attempting to use duck-punching to add properties to the class that would throw an `AttributeError` if you tried to call `fset` or `fdel` on the property. I ran into trouble defining the `fget` function of the property. Here's the code I have so far: ``` def enum(*sequential, **named): # Build property dict enums = dict(zip(sequential, range(len(sequential))), **named) # Define an errorhandler function def err_func(*args, **kwargs): raise AttributeError('Enumeration is immutable!') # Create a base type t = type('enum', (object,), {}) # Add properties to class by duck-punching for attr, val in enums.iteritems(): setattr(t, attr, property(lambda attr: enums[attr], err_func, err_func)) # Return an instance of the new class return t() e = enum('OK', 'CANCEL', 'QUIT') print e print e.OK print e.CANCEL print e.QUIT # Immutable? e.OK = 'ASDF' # Does throw the correct exception print e.OK ``` The output from this is: ``` <__main__.enum object at 0x01FC8F70> Traceback (most recent call last): File "enum.py", line 24, in <module> print e.OK File "enum.py", line 17, in <lambda> setattr(t, attr, property(lambda attr: enums[attr], err_func, err_func)) KeyError: <__main__.enum object at 0x01FC8F70> ``` Perhaps this is not the best way to create an enumeration, but it's short and I wanted to explore more of this whole duck-punching/monkey-patching concept.
2012/10/18
[ "https://Stackoverflow.com/questions/12960522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1255482/" ]
Your immediate problem is that the `getter` of a `property` takes `self` as the only parameter, not `attr`. Therefore, you should use something like `lambda self: val` instead. However, that doesn't work because the `lambda` binds the *name* `val`, which changes from iteration to iteration. So you will need to wrap it somehow: ``` def getter(val): return lambda self: val for attr, val in enums.iteritems(): setattr(t, attr, property(getter(val), err_func, err_func)) ```
The final implementation (thanks to @nneonneo). * Checks for duplicate enum keys * Checks if enum is empty * Does not allow deletion or modification of enum items ``` def enum(*sequential, **named): # Check for duplicate keys names = list(sequential) names.extend(named.keys()) if len(set(names)) != len(names): raise KeyError('Cannot create enumeration with duplicate keys!') # Build property dict enums = dict(zip(sequential, range(len(sequential))), **named) if not enums: raise KeyError('Cannot create empty enumeration') # Function to be called as fset/fdel def err_func(*args, **kwargs): raise AttributeError('Enumeration is immutable!') # function to be called as fget def getter(cls, val): return lambda cls: val # Create a base type t = type('enum', (object,), {}) # Add properties to class by duck-punching for attr, val in enums.iteritems(): setattr(t, attr, property(getter(t, val), err_func, err_func)) # Return an instance of the new class return t() ```
17,717
63,790,601
``` # Read an integer: a = input() #Now swap it... a[0] = a[1] a[1] = a[0] ``` As you can see, I am trying to change the values and swap them. ``` print(a) ``` ...and then I print it out. But I am getting an error, which is as follows: ``` Traceback (most recent call last): File "python", line 4, in <module> TypeError: 'str' object does not support item assignment ``` For example, if my input is `79` I want the result to be `97`. Can you tell me where my mistake is?
2020/09/08
[ "https://Stackoverflow.com/questions/63790601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14235043/" ]
Try this: ``` a = input() a = str(a) result = int(a[-1: : -1]) print(result) ``` Output: ( a = 34 ) ``` 43 ```
Based on your question, here is a simple thing you can do. As the error above shows, a string does not support item assignment, so you need to convert it to a list to modify elements by index. For the swap you need a temporary variable, so I used `temp`. ``` a = list(input()) #Now swap it... print(a) temp = a[0] a[0] = a[1] a[1] = temp print(a) print("".join(a)) ```
17,718
48,272,939
In advance, thank you for looking at my issue community, My python test script will not execute from my Centos 7 Crontab. This script will execute manually if called either in the containing directory or from the root/any other directory with a full path. My Centos Python location is `/bin/python`. This is included at the top of my python script. Crontab itself seems to be working fine as: ``` * * * * * root date >> /home/test.log ``` redirects output to test.log without issue. Please find both my Crontab and test script. Crontab in /etc/crontab ``` SHELL=/bin/bash #PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PATH=/sbin:/bin:/usr/sbin:/usr/bin MAILTO=root * * * * * root python /scripts/test1.py ``` As for the python script in /scripts/test1.py: ``` #!/bin/python import os os.system('date >> testlog.txt') ``` Any input/advice you can offer I would greatly appreciate. Thank you kindly,
2018/01/16
[ "https://Stackoverflow.com/questions/48272939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6820299/" ]
I had the same problem. Pulling images was working but provisioning a container was not working. In this case the solution was to provide Docker with a configuration file named `~/.docker/config.json` with the following contents. ``` { "proxies": { "default": { "httpProxy": "http://proxy.server....com:8080", "httpsProxy": "https://proxy.server.....com:8080" } } } ``` I hope this will solve your problem.
I struggled to make it work but finally found a working solution on my side. I'm behind a corporate proxy and have CNTLM properly configured on Windows, linked in my Docker Desktop settings with the address `127.0.0.1:3128`. My Docker runs under WSL2. The magic tip here is to point your containers' proxies at Docker's internal proxy host, `host.docker.internal`. I didn't find out why I wasn't able to connect through the localhost proxy (`127.0.0.1:3128`) like I did in the Docker Desktop config; I guess Docker runs an internal proxy as a gateway to access the Windows network.
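For reference, this is roughly what that looks like in `~/.docker/config.json`. This is a sketch only, assuming the CNTLM port 3128 mentioned above; adjust the host and port to your own proxy setup: ``` { "proxies": { "default": { "httpProxy": "http://host.docker.internal:3128", "httpsProxy": "http://host.docker.internal:3128" } } } ```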
17,719
33,981,803
Let's say I am trying to get the number of different people's names. The user inputs names until they enter -1; once -1 is entered, the loop breaks. Once the input is done, I am trying to tabulate the output, something like this: names : John Max Joan No of occurrences : 4 1 2 % of occurrences : 20% 10% 30% ``` #!/usr/bin/python names = ["John","Max","Joan"] lst = [] while True: lst = raw_input("What is your name?") if lst == "-1": break input_list = lst.split() print "Names" '[%s]' % ' '.join(map(str, names)) ``` I have no idea how to increment the counts of the names by the number of times they are entered by the user. Let's say the user enters john, john, max, joan joan joan; then I would need to increment john twice, max once and joan three times. I know I can reference different parts of names using [0], for example, which is the first item, but I don't know how to increment all the relevant parts.
2015/11/29
[ "https://Stackoverflow.com/questions/33981803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5466007/" ]
You can use [`collections.Counter`](https://docs.python.org/2/library/collections.html#collections.Counter) to count and accumulate the occurrences of names in the given input: ``` import collections counter = collections.Counter() names = ["John", "Max", "Joan"] while True: lst = raw_input("What is your name?") if lst == "-1": break lst = [name for name in lst.strip().split() if name in names] counter.update(lst) print "names : {}".format(" ".join(counter.keys())) print "No of occurrences : {}".format(" ".join(map(str, counter.values()))) ``` *Please note that I presumed names are separated by whitespace in the input.*
Counting word frequency in a multi-word string: ``` import sys from collections import defaultdict WORDS = """Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.""" d = defaultdict(lambda: 0) for word in WORDS.split(): d[word] += 1 for key in ['in', 'et', 'ut', 'sed']: print(key, ':\t', str(d[key])) ``` output: ``` in : 0 et : 1 ut : 2 sed : 1 ``` Whether one normalizes the words to lowercase depends on the problem domain; in case the questioner wants to do that with the names he receives, that'd be `d[word.lower()] += 1` in the loop, and then he may re-capitalize the first letter on output if he wishes to do that.
17,720
45,176,779
I have a python object that looks like this. I am trying to parse this object and turn it to a human readable string which I need to put in the logs. How can I recursively loop through this considering the object could be nested dictionaries or nested lists or dictionaries inside lists inside dictionaries etc. ``` {"plugins": [ {"Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai"}, {"Chrome PDF Viewer": "internal-pdf-viewer"}, {"Native Client": "internal-nacl-plugin"}, {"Shockwave Flash": "PepperFlashPlayer.plugin"}, {"Widevine Content Decryption Module": "widevinecdmadapter.plugin"} ] } ``` I want to possibly serialize the above to look something like this ``` "plugins: Chrome PDF Viewer": "mhjfbmdgcfjbbpaeojofohoefgiehjai, Chrome PDF Viewer": "internal-pdf-viewer, Native Client": "internal-nacl-plugin, Shockwave Flash": "PepperFlashPlayer.plugin, Widevine Content Decryption Module": "widevinecdmadapter.plugin" ``` My code so far [this works for nested dictionaries but I am not sure how I can alter this to support lists in the above object]: ``` result_str = "" def dictionary_iterator(results): global result_str for key, value in results.items(): if isinstance(value, dict): result_str = result_str + key + ": \n \t" dictionary_iterator(value) else: result_str = result_str + key + ": " + str(value) + "\n" return result_str ``` I have looked over possible answers but could not find a solution.
2017/07/18
[ "https://Stackoverflow.com/questions/45176779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7044618/" ]
The formatting might be a bit off ``` def humanizer(input, result=''): if type(input) == dict: for k, v in input.items(): if type(v) == str: result += '%s:%s\n\t' % (str(k), str(v)) elif type(v) in (dict, list): result += '%s:\n\t' % str(k) result = humanizer(v, result) result += '\n\t' elif type(input) == list: for item in input: if type(item) == str: result += item continue result = humanizer(item, result) + '\n\t' else: result += input + '\n\t' return result ``` Result: ``` plugins: Chrome PDF Viewer:mhjfbmdgcfjbbpaeojofohoefgiehjai Chrome PDF Viewer:internal-pdf-viewer Native Client:internal-nacl-plugin Shockwave Flash:PepperFlashPlayer.plugin Widevine Content Decryption Module:widevinecdmadapter.plugin ```
Maybe the output of [pformat](https://docs.python.org/3/library/pprint.html#pprint.pformat "pformat") would suit you: ``` from pprint import pformat results_str = pformat(results) ```
17,721
10,226,551
I have a list of floating point numbers and I want to generate another list of period returns from my first list. This is a run of the mill implementation (not tested - and OBVIOUSLY no error checking/handling): ``` a = [100,105,100,95,100] def calc_period_returns(values, period): output = [] startpos, endpos = (period, len(values)-1) while True: current = values[startpos] previous = values[startpos-period] ret = 100*((current-previous)/(1.0*previous)) output.append(ret) startpos += period if startpos > endpos: break return output calc_period_returns(a,1) # Expected output: # [5.0, -4.7619047619047619, -5.0, 5.2631578947368416] ``` Is there a more pythonic way of doing this - perhaps using list comprehension and maps?
2012/04/19
[ "https://Stackoverflow.com/questions/10226551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962891/" ]
Here you go: ``` >>> [100.0 * a1 / a2 - 100 for a1, a2 in zip(a[1:], a)] [5.0, -4.7619047619047592, -5.0, 5.2631578947368354] ``` Since you want to compare neighbor elements of a list, you better create a list of pairs you are interested in, like this: ``` >>> a = range(5) >>> a [0, 1, 2, 3, 4] >>> zip(a, a[1:]) [(0, 1), (1, 2), (2, 3), (3, 4)] ``` After that it is just a simple math to extract a percentage change from a pair of numbers.
I don't know how large your list of numbers is going to be, but if you are going to process large amounts of numbers, you should have a look at numpy. The side effect is that calculations look a lot simpler. With numpy, you create an array for your data ``` >>> import numpy as np >>> a = np.array([100,105,100,95,100], dtype=float) ``` and work with arrays as if they were simple numbers ``` >>> np.diff(a) / a[:-1] * 100. [ 5. -4.76190476 -5. 5.26315789] ```
17,724
17,239,077
Im trying to learn python and started with this, I keep getting a syntax error when i try to run it. the cursor jumps to the end of the close " at def start section. Im not sure where the syntax error is coming from as i speech mark all the print ``` #! python3 # J Presents: Rock, paper, Scissors: The Video Game import random import time rock = 1 paper = 2 scissors = 3 names = { rock: "Rock", paper: "Paper", scissors: "Scissors" } rules = {rock: scissors, paper: rock, scissors: paper} Player_score = 0 computer_score = 0 def start (): print "Let's play a game of Rock, Paper, Scissors." while game (): pass scores() def game (): player = move () computer = random.randint(1, 3) result(player, computer) return play_again() def move(): while True: print player = raw_input("Rock = 1\nPaper = 2\nScissors = 3\nMake a Move: ") try: player = int(player) if player in (1,2,3): return player except ValueError: pass Print "Oops! I didn't understand that. Please enter 1, 2 or 3." def result (player, computer): print "1..." time.sleep(1) print "2..." time.sleep(1) print "3!" time.sleep (0.5) print "Computer threw {0}!".format(names[computer]) global player_score, computer_score if player == computer: print "Tie Game." else: if rules[player} == computer: print "Your victory has been assured." player_score +=1 else: print "The computer laughs as you realise you have been defeated." def play_again(): answer = raw_input("Would you like to play again? y/n: ") if answer in ("y", "Y", "yes", "Yes", "Of course!"): return answer else: print "Thank you very much for playing our game. See you next time!" def scores(): global player_score, computer_score print " HIGH SCORES" print "Player: ", player_score print "Computer: ", computer_score if __name__ == '__main__': start() ```
2013/06/21
[ "https://Stackoverflow.com/questions/17239077", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401529/" ]
1) You have an indentation error here : ``` try: player = int(player) if player in (1,2,3): return player except ValueError: #Try Except Block Statement pass Print "Oops! I didn't understand that. Please enter 1, 2 or 3." ``` --- 2) Also : ``` if rules[player} == computer: #Error in this line ``` Should be : ``` if rules[player] == computer: ``` --- 3) Indentation error in if else. ``` if player == computer: print "Tie Game." else: if rules[player} == computer: print "Your victory has been assured." player_score +=1 else: #Incorrect indentation print "The computer laughs as you realise you have been defeated." ``` This should be : ``` if rules[player} == computer: print "Your victory has been assured." player_score +=1 else: print "The computer laughs as you realise you have been defeated." ```
``` if rules[player} == computer: ``` The curly brace should be a bracket.
17,730
39,194,747
I'm coding some Python files with Sublime and I'd like to comment out multiple selected lines, which means putting the character '#' at the beginning of each selected line. Is it possible to create such a shortcut key binding in Sublime? Thanks Vincent
2016/08/28
[ "https://Stackoverflow.com/questions/39194747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6767684/" ]
There are a few ways to do this. Primarily, *two*: (1) Use the CPU/processor stack. There are some variants, each with its own limitations. (2) Or, recode your function(s) to use a "stack frame" struct that simulates a "stack". The actual function ceases to be recursive. This can be virtually limitless, up to whatever the heap will permit. --- For (1) ... (A) If your system permits, you can issue a `syscall` to extend the process's stack size. There may be limits on how much you can do this and collisions with shared library addresses. (B) You can `malloc` a large area. With some [somewhat] intricate inline asm trickery, you can swap this area for the stack [and back again] and call your function with this `malloc` area as the stack. Doable, but not for the faint of heart ... (C) An easier way is to `malloc` a large area. Pass this area to `pthread_attr_setstack`. Then, run your recursive function as a thread using `pthread_create`. Note, you don't really care about multiple threads, it's just an easy way to avoid the "messy" asm trickery. With (A), *assuming* the stack extend syscall permits, the limit could be all of available memory permitted for stack [up to some system-wide or RLIMIT\_\* parameter]. With (B) and (C), you have to "guess" and make the `malloc` large enough before you start. After it has been done, the size is fixed and can *not* be extended further. Actually, that's not quite true. Using the asm trickery repeatedly [when needed], you could simulate a near infinite stack. But, IMO, the overhead of keeping track of these large malloc areas is high enough that I'd opt for (2) below. --- For (2) ... This can literally expand/contract as needed. One of the advantages is that you don't need to guess beforehand at how much memory you'll need. The [pseudo] stack can just keep growing as needed [until `malloc` returns `NULL` :-)] Here is a sample recursive function [treat loosely as pseudo code]: ``` int myfunc(int a,int b,int c,int d) { int ret; // do some stuff ... if (must_recurse) ret = myfunc(a + 5,b + 7,c - 6,d + 8); else ret = 0; return ret; } ``` Here is that function changed to use a struct as a stack frame [again, loose pseudo code]: ``` typedef struct stack_frame frame_t; struct stack_frame { frame_t *prev; int a; int b; int c; int d; }; frame_t *free_pool; // list of unused frames #define GROWCOUNT 1000 frame_t * frame_push(frame_t *prev) { frame_t *cur; // NOTE: we can maintain a free pool ... while (1) { cur = free_pool; if (cur != NULL) { free_pool = cur->prev; break; } // refill free pool from heap ... free_pool = calloc(GROWCOUNT,sizeof(frame_t)); if (free_pool == NULL) { printf("frame_push: no memory\n"); exit(1); } // chain the new frames together, terminating the last one cur = free_pool; for (int count = GROWCOUNT; count > 1; --count, ++cur) cur->prev = cur + 1; cur->prev = NULL; } if (prev != NULL) { *cur = *prev; cur->prev = prev; cur->a += 5; cur->b += 7; cur->c -= 6; cur->d += 8; } else memset(cur,0,sizeof(frame_t)); return cur; } frame_t * frame_pop(frame_t *cur) { frame_t *prev; prev = cur->prev; cur->prev = free_pool; free_pool = cur; return prev; } int myfunc(void) { int ret; frame_t *cur; cur = frame_push(NULL); // set initial conditions in cur... while (1) { // do stuff ... if (must_recurse) { cur = frame_push(cur); must_recurse = 0; continue; } // pop stack cur = frame_pop(cur); if (cur == NULL) break; } return ret; } ```
All functions, objects, variables and user-defined structures use memory spaces that are controlled by the OS and the compiler. So a user-defined stack still lives inside the general memory space that the OS reserves for your process's stack. As a result there is no big difference, but you can define an optimized, highly efficient structure so that this general stack is used much more effectively.
17,732
71,561,891
![This is what I want](https://i.stack.imgur.com/lexMT.png "example of what I want") How do I make a side-by-side, two-column histogram in R (above), which I am able to do in Python ([image taken from here](https://stackoverflow.com/questions/6871201/plot-two-histograms-on-single-chart-with-matplotlib))? All the answers I have found for R give ![this result](https://i.stack.imgur.com/I11Cq.png "what I am getting") [image taken from here](https://stackoverflow.com/questions/3541713/how-to-plot-two-histograms-together-in-r) I tried searching for answers on Stack Overflow and on Google in general, but no one seemed to be able to tell me how to make the histogram I want.
2022/03/21
[ "https://Stackoverflow.com/questions/71561891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18295411/" ]
We remove the `NA` with `na.omit` and get the `first` element - use `[1]` to coerce to `NA` if there are no non-NA elements present ``` library(dplyr) test %>% group_by(name) %>% summarise(across(everything(), ~ first(na.omit(.x))[1])) ``` -output ``` # A tibble: 2 × 4 name test_1 test_2 make_up_test <chr> <int> <int> <dbl> 1 C 2 4 1 2 J 1 3 NA ```
Here is an approach with pivoting: ``` library(tidyr) library(dplyr) test %>% pivot_longer(-name, names_to = "names") %>% drop_na() %>% pivot_wider(names_from = names, values_from = value) %>% relocate(test_2, .after = test_1) ``` ``` name test_1 test_2 make_up_test <chr> <dbl> <dbl> <dbl> 1 J 1 3 NA 2 C 2 4 1 ```
17,733
22,358,540
I know how to read bits inside an int in Python but not how to do so with a char. For an int, this elementary operation works: `a & (2**bit_index)`. But for a single character it gives the following error message: `unsupported operand type(s) for &: 'str' and 'int'` In case this "subtlety" matters, I'm also reading my char from a string object using: ``` for my_char in my_string: ``` I'm stressing this point because it could be that `my_char` is actually a string of length one and not a char; I know really little about Python's handling of types. Thank you.
2014/03/12
[ "https://Stackoverflow.com/questions/22358540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3275464/" ]
You can use a `bytearray` instead of a string. The individual elements are integers, but you can still do basic string manipulation on the whole: ``` >>> arr = bytearray('foo') >>> type(arr[0]) <type 'int'> >>> arr.replace('o', 'u') bytearray(b'fuu') ```
Python doesn't really have a char type; you have a string of length one. You need to convert it to an int before you can apply those operators to it. Depending on what is in `my_string`, this might work: `int(my_char, 10)`
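A quick interactive illustration of the difference (the digit '5' here is just an example): ``` >>> my_char = '5' >>> int(my_char, 10) # numeric value of the digit character 5 >>> ord(my_char) # code point of the character 53 >>> ord(my_char) & (1 << 2) # bit-testing works once you have an int 4 ```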
17,734
50,182,833
I am running a Flask app on Google Cloud App Engine (flex). Running it locally works just fine, but once it deploys I get a `502 Bad Gateway error (nginx)`. Now I would like to figure out what causes this, but I am not able to find any option to view the console logs that my app creates. Since it works just fine on my local environment, my current workflow to solve this issue involves changing my code locally and deploying it to see if it works afterwards, but each deployment takes over 30min only to figure out it still does not work. There must be a way to do this more efficiently. Following the docs <https://cloud.google.com/appengine/docs/flexible/python/debugging-an-instance> I was able to SSH into my instance in debug-mode and launch the Flask app from the Cloud Shell, however it tells me to access it on <http://127.0.0.1:8080/> which I can't access from the cloud server. Hence I can't navigate the webpage in order to reproduce the 502 error and then see the output in the console. How can I figure out what causes the 502 error on the server?
2018/05/04
[ "https://Stackoverflow.com/questions/50182833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6341510/" ]
I had a similar issue and found that App Engine looks for an `app` variable in the `main.py` file. My final `app.yaml` looks like this: app.yaml -------- ``` runtime: python env: flex entrypoint: gunicorn -b :$PORT main:app runtime_config: python_version: 3 ``` and I had a `requirements.txt` that looks like this: requirements.txt ---------------- ``` Flask==1.1.1 gunicorn==20.0.4 ```
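For completeness, a minimal `main.py` sketch that matches the `main:app` entrypoint above (the route and message are placeholders): ``` from flask import Flask app = Flask(__name__) # the "app" object gunicorn's main:app target expects @app.route('/') def index(): return 'Hello from App Engine Flex' ```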
Here are my theories: * localhost (`127.0.0.1`) is being used; should use `0.0.0.0` * Flask internal WSGI server is being used; should use e.g. [Gunicorn](https://gunicorn.org/) **NB** You **may** develop and test these solutions using Cloud Shell. Cloud Shell (now) includes a [web preview](https://cloud.google.com/shell/docs/using-web-preview) feature that permits browsing endpoints (including `:8080`) for servers running on the Cloud Shell instance. Flask ----- Flask includes a development (WSGI) server and tutorials generally include: ```py if __name__ == '__main__': app.run(host='127.0.0.1', port=8080, debug=True) ``` Which, if run as `python somefile.py`, will use Flask's inbuilt (dev) server and expose it on localhost (`127.0.0.1`). This is inaccessible from other machines: ``` * Serving Flask app "main" (lazy loading) * Environment: production WARNING: Do not use the development server in a production environment. Use a production WSGI server instead. * Debug mode: on * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 244-629-469 ``` If instead, `host='0.0.0.0'` is used, then this will work: ``` * Serving Flask app "main" (lazy loading) * Environment: production WARNING: Do not use the development server in a production environment. Use a production WSGI server instead. * Debug mode: on * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 244-629-469 192.168.9.1 - - [08/May/2019 23:59:59] "GET / HTTP/1.1" 200 - 192.168.9.1 - - [08/May/2019 23:59:59] "GET /favicon.ico HTTP/1.1" 404 - ``` E.g. Gunicorn ------------- Flask's inbuilt server should not be used in production, and Flex's documentation describes how gunicorn (one of various alternatives) should be configured: <https://cloud.google.com/appengine/docs/flexible/python/runtime#application_startup> Which, if run as `gunicorn --bind=0.0.0.0:8080 main:app`, gives: ``` [INFO] Starting gunicorn 19.9.0 [INFO] Listening at: http://0.0.0.0:8080 (1) [INFO] Using worker: sync [INFO] Booting worker with pid: 7 ``` App Engine Flex --------------- Using the recommended configuration, app.yaml would include: ``` runtime: python env: flex entrypoint: gunicorn --bind :$PORT main:app ``` Dockerfiles ----------- You can test these locally with Dockerfiles and -- if you wish -- deploy these to Flex as custom runtimes (after revising `app.yaml`): ``` FROM python:3.7-alpine WORKDIR /app ADD . . RUN pip install -r requirements.txt ``` For Flask add: ``` ENTRYPOINT ["python","main.py"] ``` **NB** In the above, the configuration comes from the `app.run(...)` call in somefile.py. And for gunicorn: ``` ENTRYPOINT ["gunicorn","--bind=0.0.0.0:8080","main:app"] ```
17,739
17,029,752
I am scraping 23770 webpages with a pretty simple web scraper using `scrapy`. I am quite new to scrapy and even python, but managed to write a spider that does the job. It is, however, really slow (it takes approx. 28 hours to crawl the 23770 pages). I have looked on the `scrapy` webpage and the mailing lists and `stackoverflow`, but I can't seem to find generic recommendations for writing fast crawlers understandable for beginners. Maybe my problem is not the spider itself, but the way i run it. All suggestions welcome! I have listed my code below, if it's needed. ``` from scrapy.spider import BaseSpider from scrapy.selector import HtmlXPathSelector from scrapy.item import Item, Field import re class Sale(Item): Adresse = Field() Pris = Field() Salgsdato = Field() SalgsType = Field() KvmPris = Field() Rum = Field() Postnummer = Field() Boligtype = Field() Kvm = Field() Bygget = Field() class HouseSpider(BaseSpider): name = 'House' allowed_domains = ["http://boliga.dk/"] start_urls = ['http://www.boliga.dk/salg/resultater?so=1&type=Villa&type=Ejerlejlighed&type=R%%C3%%A6kkehus&kom=&amt=&fraPostnr=&tilPostnr=&iPostnr=&gade=&min=&max=&byggetMin=&byggetMax=&minRooms=&maxRooms=&minSize=&maxSize=&minsaledate=1992&maxsaledate=today&kode=&p=%d' %n for n in xrange(1, 23770, 1)] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select("id('searchresult')/tr") items = [] for site in sites: item = Sale() item['Adresse'] = site.select("td[1]/a[1]/text()").extract() item['Pris'] = site.select("td[2]/text()").extract() item['Salgsdato'] = site.select("td[3]/text()").extract() Temp = site.select("td[4]/text()").extract() Temp = Temp[0] m = re.search('\r\n\t\t\t\t\t(.+?)\r\n\t\t\t\t', Temp) if m: found = m.group(1) item['SalgsType'] = found else: item['SalgsType'] = Temp item['KvmPris'] = site.select("td[5]/text()").extract() item['Rum'] = site.select("td[6]/text()").extract() item['Postnummer'] = site.select("td[7]/text()").extract() item['Boligtype'] = site.select("td[8]/text()").extract() item['Kvm'] = site.select("td[9]/text()").extract() item['Bygget'] = site.select("td[10]/text()").extract() items.append(item) return items ``` Thanks!
2013/06/10
[ "https://Stackoverflow.com/questions/17029752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2183630/" ]
Here's a collection of things to try: * use latest scrapy version (if not using already) * check if non-standard middlewares are used * try to increase `CONCURRENT_REQUESTS_PER_DOMAIN`, `CONCURRENT_REQUESTS` settings ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#concurrent-requests)) * turn off logging `LOG_ENABLED = False` ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#log-enabled)) * try `yield`ing an item in a loop instead of collecting items into the `items` list and returning them * use local cache DNS (see [this thread](https://stackoverflow.com/questions/12427451/how-do-i-improve-scrapys-download-speed)) * check if this site is using download threshold and limits your download speed (see [this thread](https://stackoverflow.com/questions/13505194/scrapy-crawling-speed-is-slow-60-pages-min#comment18491083_13505194)) * log cpu and memory usage during the spider run - see if there are any problems there * try run the same spider under [scrapyd](http://scrapyd.readthedocs.org/en/latest/) service * see if [grequests](https://github.com/kennethreitz/grequests) + [lxml](http://lxml.de/) will perform better (ask if you need any help with implementing this solution) * try running `Scrapy` on `pypy`, see [Running Scrapy on PyPy](https://stackoverflow.com/questions/31029362/running-scrapy-on-pypy) Hope that helps.
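As a concrete sketch of the settings-related tips above, these lines would go in your project's `settings.py` (the values are illustrative; tune them against the target site's tolerance): ``` # settings.py CONCURRENT_REQUESTS = 100 # default is 16 CONCURRENT_REQUESTS_PER_DOMAIN = 100 # default is 8 LOG_ENABLED = False # skip per-request logging overhead ```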
Looking at your code, I'd say most of that time is spent in network requests rather than processing the responses. All of the tips @alecxe provides in his answer apply, but I'd suggest the `HTTPCACHE_ENABLED` setting, since it caches the requests and avoids doing it a second time. It would help on following crawls and even offline development. See more info in the docs: <http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.contrib.downloadermiddleware.httpcache>
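If you want to try it, a minimal way to switch the cache on during development (again in `settings.py`; values are illustrative): ``` HTTPCACHE_ENABLED = True HTTPCACHE_EXPIRATION_SECS = 0 # 0 means cached pages never expire HTTPCACHE_DIR = 'httpcache' # stored under the project's .scrapy directory ```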
17,740
29,449,659
I have a server application written in python/django (a REST API) for accepting a file upload from the client application. I want this uploaded file to be stored in AWS S3. I also want the file to be uploaded from the client as multipart/form-data. How can I achieve this? Any sample code or application will help me understand how it should be done. Please assist. ``` class FileUploadView(APIView): parser_classes = (FileUploadParser,) def put(self, request, filename, format=None): file_obj = request.data['file'] self.handle_uploaded_file(file_obj) return self.get_response("", True, "", {}) def handle_uploaded_file(self, f): destination = open('<path>', 'wb+') for chunk in f.chunks(): destination.write(chunk) destination.close() ``` Thanks in advance
2015/04/04
[ "https://Stackoverflow.com/questions/29449659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/780805/" ]
If you want your uploads to go directly to AWS S3, you can use `django-storages` and set your Django file storage backend to use AWS S3. * [django-storages](https://bitbucket.org/david/django-storages) * [django-storages documentation](http://django-storages.readthedocs.org/en/latest/index.html) This will allow your Django project to handle storage transparently to S3 without your having to manually re-upload your uploaded files to S3. **Storage Settings** You will need to add at least these configurations to your Django settings: ``` # default remote file storage DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage' # aws access keys AWS_ACCESS_KEY_ID = 'YOUR-ACCESS-KEY' AWS_SECRET_ACCESS_KEY = 'YOUR-SECRET-ACCESS-KEY' AWS_BUCKET_NAME = 'your-bucket-name' AWS_STORAGE_BUCKET_NAME = AWS_BUCKET_NAME ``` **Example Code to Store Upload to Remote Storage** This is a modified version of your view with a `handle_uploaded_file` method that uses Django's storage backend to save the uploaded file to the remote destination (using django-storages). Note: Be sure to define the `DEFAULT_FILE_STORAGE` and AWS keys in your `settings` so `django-storage` can access your bucket. ``` from django.core.files.storage import default_storage from django.core.files import File # set file i/o chunk size to maximize throughput FILE_IO_CHUNK_SIZE = 128 * 2**10 class FileUploadView(APIView): parser_classes = (FileUploadParser,) def put(self, request, filename, format=None): file_obj = request.data['file'] self.handle_uploaded_file(file_obj) return self.get_response("", True, "", {}) def handle_uploaded_file(self, f): """ Write uploaded file to destination using default storage. """ # set storage object to use Django's default storage storage = default_storage # set the relative path inside your bucket where you want the upload # to end up fkey = 'sub-path-in-your-bucket-to-store-the-file' # determine mime type -- you may want to parse the upload header # to find out the exact MIME type of the upload file. content_type = 'image/jpeg' # write file to remote server: # * "dest" is a file handle from the storage backend (in this # case, remote storage to AWS S3) # * "media" is a Django File object wrapping the upload storage.headers.update({"Content-Type": content_type}) dest = storage.open(fkey, 'w') media = File(f) for chunk in media.chunks(chunk_size=FILE_IO_CHUNK_SIZE): dest.write(chunk) dest.close() media.close() ``` See more explanation and examples on how to access the remote storage here: * [django-storages: Amazon S3](http://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html)
Take a look at `boto` package which provides AWS APIs: ``` from boto.s3.connection import S3Connection s3 = S3Connection(access_key, secret_key) b = s3.get_bucket('<bucket>') mp = b.initiate_multipart_upload('<object>') for i in range(1, <parts>+1): io = <receive-image-part> # E.g. StringIO mp.upload_part_from_file(io, part_num=i) mp.complete_upload() ```
17,745
48,364,573
New to python and deep learning. I was trying to build an RNN with some data and I don't know where am I going wrong. This is my code: ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline raw = pd.read_excel('Online Retail.xlsx',index_col='InvoiceDate') sales = raw.drop(['InvoiceNo','StockCode','Country','Description'],axis=1) sales.head() sales.index = pd.to_datetime(sales.index) sales.info() train_set = sales.head(50000) test_set = sales.tail(41909) from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() training = np.nan_to_num(train_set) testing = np.nan_to_num(test_set) train_scaled = scaler.fit_transform(training) test_scaled = scaler.fit_transform(testing) def next_batch(training_data,batch_size,steps): rand_start = np.random.randint(0,len(training_data)-steps) y_batch = np.array(training_data[rand_start:rand_start+steps+1].reshape(26,steps+1)) return y_batch[:,:-1].reshape(-1,steps,1),y_batch[:,1:].reshape(-1,steps,1) import tensorflow as tf num_inputs = 1 num_time_steps = 10 num_neurons = 100 num_outputs = 1 learning_rate = 0.03 num_train_iterations = 4000 batch_size = 1 X = tf.placeholder(tf.float32,[None,num_time_steps,num_inputs]) y = tf.placeholder(tf.float32,[None,num_time_steps,num_outputs]) cell = tf.contrib.rnn.OutputProjectionWrapper( tf.contrib.rnn.BasicLSTMCell(num_units=num_neurons,activation=tf.nn.relu),output_size=num_outputs) outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32) loss = tf.reduce_mean(tf.square(outputs - y)) # MSE optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) train = optimizer.minimize(loss) init = tf.global_variables_initializer() saver = tf.train.Saver() with tf.Session(config=tf.ConfigProto()) as sess: sess.run(init) for iteration in range(num_train_iterations): X_batch, y_batch = next_batch(train_scaled,batch_size,num_time_steps) sess.run(train, feed_dict={X: X_batch, y: y_batch}) if iteration % 100 == 0: mse = loss.eval(feed_dict={X: X_batch, y: y_batch}) print(iteration, "\tMSE:", mse) # Save Model for Later saver.save(sess, "./ex_time_series_model") ``` The output: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-36-f2f7c66a33df> in <module>() 4 for iteration in range(num_train_iterations): 5 ----> 6 X_batch, y_batch = next_batch(train_scaled,batch_size,num_time_steps) 7 sess.run(train, feed_dict={X: X_batch, y: y_batch}) 8 <ipython-input-26-f673a469c67d> in next_batch(training_data, batch_size, steps) 1 def next_batch(training_data,batch_size,steps): 2 rand_start = np.random.randint(0,len(training_data)-steps) ----> 3 y_batch = np.array(training_data[rand_start:rand_start+steps+1].reshape(26,steps+1)) 4 return y_batch[:,:-1].reshape(-1,steps,1),y_batch[:,1:].reshape(-1,steps,1) ValueError: cannot reshape array of size 33 into shape (26,11) In [ ]: ```
2018/01/21
[ "https://Stackoverflow.com/questions/48364573", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8766071/" ]
I'm not sure where the number `26` came from, but it doesn't match with your data dimensions. After you dropped four columns, the `training_data` array is `(50000, 3)`, of which you take `(11, 3)` batches. This array obviously can't reshape to `(26, 11)`. What you probably meant is this (in `next_batch` function): ``` y_batch = np.array(training_data[rand_start:rand_start+steps+1].reshape(3,steps+1)) ```
The error says that you are trying to reshape a tensor of size 33 into a tensor of shape 26x11, which you can't. You should reshape a tensor of size 286 into 26x11. Try to debug the `next_batch` function by printing the shape of `y_batch` at each step using `print(y_batch.shape)` and checking whether it has 286 elements. I didn't catch why you fetch each batch randomly; why don't you read the input data sequentially? It would be good if you fixed the indentation when posting your code; it is hard to follow.
17,746
7,008,175
I wrote the following code to get the timezone offset, based on DST, for a specific epoch time: ``` def getTimeZoneFromEpoch(epoch): if time.daylight and time.gmtime(epoch).tm_isdst==1: return -time.altzone/3600.0 else: return -time.timezone/3600.0 ``` But I'm not sure it's correct; in fact, at the moment it is off by 1 hour. Maybe I should swap altzone and timezone in this code, but that's not what I understood from Python's help (time module): ``` timezone -- difference in seconds between UTC and local standard time altzone -- difference in seconds between UTC and local DST time tm_isdst 1 if summer time is in effect, 0 if not, and -1 if unknown ``` Have I misunderstood something?
2011/08/10
[ "https://Stackoverflow.com/questions/7008175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/495838/" ]
I've tested this code to obtain the VM locale's UTC offset, which, by the way, is only really valid at the moment it is measured. I'm not sure whether your code is equivalent or not. ``` from datetime import datetime from time import time def local_ephemeral_UTC_offset(epoch_time=None): u"Returns a datetime.timedelta object representing the local time offset from UTC at the moment" if epoch_time is None: epoch_time = time() return datetime.fromtimestamp(epoch_time) - datetime.utcfromtimestamp(epoch_time) ```
In short, use `time.localtime()` instead of `time.gmtime()`. --- The problem is that you use `gmtime()` , as the result of the following program shows. ``` from time import * def getTimeZoneFromEpoch(epoch): if daylight and gmtime(epoch).tm_isdst==1: return -altzone/3600.0 else: return -timezone/3600.0 print " tm_isdst of tm_isdst of time zone's\n" + \ ' epoch gmtime(epoch) localtime(epoch) offset' for d in ('13/03/2011', # DST start date in USA '14/03/2011', '', '06/11/2011', # DST end date in USA '07/11/2011', '', '27/03/2011', # DST start date in Europe '28/03/2011', '', '30/10/2011', # DST end date in Europe '31/10/2011'): if d: ds = strptime(d,'%d/%m/%Y') epoch = mktime(ds) lt = localtime(epoch) gt = gmtime(epoch) print '%s %s %12s %11s %7s %17s' % (d,ds.tm_isdst,epoch,gt.tm_isdst,lt.tm_isdst,getTimeZoneFromEpoch(epoch)) else: print ``` With my clock set to the "UTC-07:00 Rocky Mountains" time zone, where the DST starts on March 13th 2011 and ends on November 06th 2011 , the result is: ``` tm_isdst of tm_isdst of time zone's epoch gmtime(epoch) localtime(epoch) offset 13/03/2011 -1 1299999600.0 0 0 -7.0 14/03/2011 -1 1300082400.0 0 1 -7.0 06/11/2011 -1 1320559200.0 0 1 -7.0 07/11/2011 -1 1320649200.0 0 0 -7.0 27/03/2011 -1 1301205600.0 0 1 -7.0 28/03/2011 -1 1301292000.0 0 1 -7.0 30/10/2011 -1 1319954400.0 0 1 -7.0 31/10/2011 -1 1320040800.0 0 1 -7.0 ``` With my clock set to the "UTC+01:00 West Continental Europe" time zone, where the DST starts on March 27th 2011 and ends on October 30th 2011 , the result is: ``` tm_isdst of tm_isdst of time zone's epoch gmtime(epoch) localtime(epoch) offset 13/03/2011 -1 1299970800.0 0 0 1.0 14/03/2011 -1 1300057200.0 0 0 1.0 06/11/2011 -1 1320534000.0 0 0 1.0 07/11/2011 -1 1320620400.0 0 0 1.0 27/03/2011 -1 1301180400.0 0 0 1.0 28/03/2011 -1 1301263200.0 0 1 1.0 30/10/2011 -1 1319925600.0 0 1 1.0 31/10/2011 -1 1320015600.0 0 0 1.0 ```
17,747
30,540,825
I have an OS X system where I need to install a module for python 2.6. Both `pip` and `easy_install-2.6` are failing: ``` # /usr/bin/easy_install-2.6 pip Searching for pip Reading http://pypi.python.org/simple/pip/ Download error: unknown url type: https -- Some packages may not be found! Couldn't find index page for 'pip' (maybe misspelled?) Scanning index of all packages (this may take a while) Reading http://pypi.python.org/simple/ Download error: unknown url type: https -- Some packages may not be found! No local packages or download links found for pip error: Could not find suitable distribution for Requirement.parse('pip') ``` Downloading [get\_pip.py](http://pip.readthedocs.org/en/latest/installing.html "get_pip.py") and running it with the stock OS X-supplied python 2.6: ``` # python2.6 ./get_pip.py Traceback (most recent call last): File "./get_pip.py", line 17868, in <module> main() File "./get_pip.py", line 162, in main bootstrap(tmpdir=tmpdir) File "./get_pip.py", line 82, in bootstrap import pip File "/tmp/tmpVJBvaW/pip.zip/pip/__init__.py", line 15, in <module> File "/tmp/tmpVJBvaW/pip.zip/pip/vcs/subversion.py", line 9, in <module> File "/tmp/tmpVJBvaW/pip.zip/pip/index.py", line 30, in <module> File "/tmp/tmpVJBvaW/pip.zip/pip/wheel.py", line 34, in <module> File "/tmp/tmpVJBvaW/pip.zip/pip/_vendor/__init__.py", line 92, in load_module ImportError: No module named 'pip._vendor.distlib.scripts' $ python2.6 --version Python 2.6.9 ``` With python2.7, either method works fine.
2015/05/30
[ "https://Stackoverflow.com/questions/30540825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4146877/" ]
Download the source file [here](https://pypi.python.org/packages/source/p/pip/pip-7.0.1.tar.gz#md5=5627bb807cf3d898a2eba276685537aa). Then do ``` >> cd ~/Downloads >> tar -xzvf pip-7.0.1.tar.gz ``` (replacing `~/Downloads` if necessary). Then ``` >> cd pip-7.0.1 >> sudo python2.6 setup.py install >> cd ``` (the last `cd` is used to leave the build directory). Now you should be able to run ``` >> python2.6 -c 'import pip;print pip.__version__' 7.0.1 ``` By default, pip (when installed from source) should be installed into `/usr/local/bin`. To check: ``` >> /usr/local/bin/pip --version pip 7.0.1 from /Library/Python/2.6/site-packages/pip-7.0.1-py2.6.egg (python 2.6) ``` Now you can install your favorite packages using ``` >> /usr/local/bin/pip install package >> python2.6 -c 'import package' ``` If you have conflicting versions of `pip` in `/usr/local/bin` you can try this ridiculous one liner: ``` >> python -c 'import os;dir="/usr/local/bin";[ os.system("echo %s/%s: && %s/%s --version"%(dir,s,dir,s)) for s in os.listdir("/usr/local/bin") if s.startswith("pip") ]' /usr/local/bin/pip: pip 7.0.1 from /Library/Python/2.6/site-packages/pip-7.0.1-py2.6.egg (python 2.6) /usr/local/bin/pip2: pip 7.0.1 from /Library/Python/2.6/site-packages/pip-7.0.1-py2.6.egg (python 2.6) /usr/local/bin/pip2.6: pip 7.0.1 from /Library/Python/2.6/site-packages/pip-7.0.1-py2.6.egg (python 2.6) ``` to find the one linked to py2.6. (in my case they are all the same)
By default [Homebrew](http://brew.sh/) provides the `pip` command via `brew install python`. So try installing Python using Homebrew. Try not to use `sudo` when working with `brew`. To verify which files are installed with your Python package, try: ``` $ brew list python /usr/local/Cellar/python/2.7.9/bin/pip /usr/local/Cellar/python/2.7.9/bin/pip2 /usr/local/Cellar/python/2.7.9/bin/pip2.7 ... ``` which should contain `pip`. After installation you should symlink your formula's installed files by: ``` brew link python ``` which should create the right symbolic links (such as `/usr/local/bin/pip` pointing to your `Cellar/python/2.?.?/bin/pip`). If you have a permission issue, you may fix it by: ``` sudo chgrp -R admin /usr/local /Library/Caches/Homebrew sudo chmod -R g+w /usr/local /Library/Caches/Homebrew ``` and make sure your user is in the admin group (`id -Gn $USER`). Then re-link it again: ``` brew unlink python && brew link python ``` To do a dry run, unlink and run `brew link -n python` to see which files `brew` would link. After linking is successful, make sure that your PATH system variable includes `/usr/local`; if not, add: ``` export PATH=/usr/local/sbin:/usr/local/bin:$PATH ``` to your `~/.bashrc` file. If successful, your `pip` should work now. --- If you don't want to use Homebrew or you have two Pythons installed on your Mac, you can alternatively install it via: ``` sudo easy_install pip ``` --- Your error: > > Download error: unknown url type: https > > > means that your Python can't handle the HTTPS protocol without SSL support, so try installing the `openssl` package (on Linux, either `libssl-dev` or `openssl-devel`).
17,750
70,714,374
How do I loop over multi-variable data like this in Python? I have latitude and longitude data and I want to pass all these values in and run it 5 times. e.g. **round 1** lat = 13.29, longitude = 100.34, city = 'ABC'; **round 2** lat = 94.09834, longitude = 103.34, city = 'XYZ'; ...; **round 5**. Very new to the Python world. Thank you for every kind comment and suggestion :)
2022/01/14
[ "https://Stackoverflow.com/questions/70714374", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17859678/" ]
I was getting permission issues because I was running a SLES-based Docker container on a CentOS-based host machine. When I used a SLES-based host machine, I could run the container without any permission issues.
In my case I fixed it by upgrading Docker to the latest version. [reference link.](https://travis-ci.community/t/unable-to-access-file-structure-of-docker-container-when-running-in-travis/11229)
17,751
28,848,098
I'm trying to make a recursive function that finds all the combinations of a python list. I want to input ['a','b','c'] in my function and as the function runs I want the trace to look like this: ```none ['a','b','c'] ['['a','a'],['b','a'],['c','a']] ['['a','a','b'],['b','a','b'],['c','a','b']] ['['a','a','b','c'],['b','a','b','c'],['c','a','b','c']] ``` My recursive function looks like this: ``` def combo(lst,new_lst = []): for item in lst: new_lst.append([lst[0],item]) print([lst[0],item]) return combo(new_lst,lst[1:]) ```
2015/03/04
[ "https://Stackoverflow.com/questions/28848098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1995933/" ]
The right answer is that you should use `itertools.combinations`. But if for some reason you don't want to, and want to write a recursive function, you can use the following piece of code. It is an adaptation of the Erlang way of generating combinations, so it may seem a bit weird at first: ``` def combinations(N, iterable): if not N: return [[]] if not iterable: return [] head = [iterable[0]] tail = iterable[1:] new_comb = [ head + list_ for list_ in combinations(N - 1, tail) ] return new_comb + combinations(N, tail) ``` This is a very elegant way of thinking of combinations of size `N`: you take the first element of an iterable (*head*) and combine it with smaller (`N-1`) combinations of the rest of the iterable (*tail*). Then you add same size (`N`) combinations of the *tail* to that. That's how you get all possible combinations. If you need all combinations of all lengths, you would do: ``` for n in range(1, len(iterable) + 1): print(combinations(n, iterable)) ```
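And for reference, the `itertools` route mentioned at the top needs no custom recursion at all (standard library only):

```python
from itertools import combinations

letters = ['a', 'b', 'c']
for n in range(1, len(letters) + 1):
    # combinations() yields tuples of distinct elements, preserving input order
    print(list(combinations(letters, n)))
# [('a',), ('b',), ('c',)]
# [('a', 'b'), ('a', 'c'), ('b', 'c')]
# [('a', 'b', 'c')]
```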
It seems that you want all products of a list; you can use [`itertools.product`](https://docs.python.org/2/library/itertools.html#itertools.product) within the following function to return a list of generators (note the function body must use its own parameter `li`): ``` >>> from itertools import product >>> l = ['a', 'b', 'c'] >>> def pro(li): ... return [product(li,repeat=i) for i in range(2,len(li)+1)] ... >>> for i in pro(l): ... print list(i) ... [('a', 'a'), ('a', 'b'), ('a', 'c'), ('b', 'a'), ('b', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'b'), ('c', 'c')] [('a', 'a', 'a'), ('a', 'a', 'b'), ('a', 'a', 'c'), ('a', 'b', 'a'), ('a', 'b', 'b'), ('a', 'b', 'c'), ('a', 'c', 'a'), ('a', 'c', 'b'), ('a', 'c', 'c'), ('b', 'a', 'a'), ('b', 'a', 'b'), ('b', 'a', 'c'), ('b', 'b', 'a'), ('b', 'b', 'b'), ('b', 'b', 'c'), ('b', 'c', 'a'), ('b', 'c', 'b'), ('b', 'c', 'c'), ('c', 'a', 'a'), ('c', 'a', 'b'), ('c', 'a', 'c'), ('c', 'b', 'a'), ('c', 'b', 'b'), ('c', 'b', 'c'), ('c', 'c', 'a'), ('c', 'c', 'b'), ('c', 'c', 'c')] ```
17,752
11,372,033
I'm getting an error when testing a python script which is installed on my Android Emulator running SDK 2.2 I have installed "Python\_for\_android\_r1.apk" and "sl4a\_r5.apk" in my emulator. It seems that my code is trying to import the following: ``` from urllib import urlencode from urllib2 import urlopen ``` And from what I can tell urllib2 is not found based on the error below. ``` ( FILE "/home/manuel/A;tanaStudio3Workspace/python-for-android/python-build/output/usr/lib/python2.6/urllib2.py, line 124 in urlopen ) ``` Any ideas how I can fix this problem??
2012/07/07
[ "https://Stackoverflow.com/questions/11372033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/953507/" ]
Your urllib module seems to be found. If the module is not found, python will return you an error at the import. Looking at the error, it appears that you are having problems with urlopen. Is the url you are trying to open valid? Line 124 in urllib2 refers to the opener that you are using to get your response.
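As a sanity check, you could strip the problem down to the smallest possible request (a sketch for Python 2; example.com is just a placeholder host):

```python
from urllib import urlencode
from urllib2 import urlopen

params = urlencode({'q': 'test'})
# a malformed scheme or host raises here, before any of your own code runs
response = urlopen('http://example.com/?' + params)
print response.getcode()  # 200 if the request went through
```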
`A;tanaStudio3Workspace`: this is weird. You have no problem importing the module, but the path looks really wrong. I would assume that if you fix the path it will be all right, but for further investigation you need to provide a real traceback.
17,753
14,163,429
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: **views.py** ``` DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` I have tried the following, but it still does not work: **views.py** ``` class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() ``` Currently, my only workaround is to do `MySQLdb.connect()` in each function that uses mysql. Also I noticed that when using django's `manage.py runserver`, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because `list()` is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. Here is the code for the middle-ware ``` def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler) ```
2013/01/04
[ "https://Stackoverflow.com/questions/14163429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/516476/" ]
Sometimes if you see "OperationalError: (2006, 'MySQL server has gone away')", it is because you are issuing a query that is too large. This can happen, for instance, if you're storing your sessions in MySQL, and you're trying to put something really big in the session. To fix the problem, you need to increase the value of the max_allowed_packet setting in MySQL. The default value is 1048576. To see the current value, run the following SQL: ``` select @@max_allowed_packet; ``` To temporarily set a new value, run the following SQL: ``` set global max_allowed_packet=10485760; ``` To fix the problem more permanently, create a /etc/my.cnf file with at least the following: ``` [mysqld] max_allowed_packet = 16M ``` After editing /etc/my.cnf, you'll need to restart MySQL (or restart your machine if you don't know how).
SQLAlchemy now has a great write-up on how you can use pinging to be pessimistic about your connection's freshness: <http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic> From there, ``` from sqlalchemy import exc from sqlalchemy import event from sqlalchemy.pool import Pool @event.listens_for(Pool, "checkout") def ping_connection(dbapi_connection, connection_record, connection_proxy): cursor = dbapi_connection.cursor() try: cursor.execute("SELECT 1") except: # optional - dispose the whole pool # instead of invalidating one at a time # connection_proxy._pool.dispose() # raise DisconnectionError - pool will try # connecting again up to three times before raising. raise exc.DisconnectionError() cursor.close() ``` And a test to make sure the above works: ``` from sqlalchemy import create_engine e = create_engine("mysql://scott:tiger@localhost/test", echo_pool=True) c1 = e.connect() c2 = e.connect() c3 = e.connect() c1.close() c2.close() c3.close() # pool size is now three. print "Restart the server" raw_input() for i in xrange(10): c = e.connect() print c.execute("select 1").fetchall() c.close() ```
17,754
7,629,753
I have been doing a lot of studying of BaseHTTPServer and found that it's not that good for multiple requests. I went through this article <http://metachris.org/2011/01/scaling-python-servers-with-worker-processes-and-socket-duplication/#python> and I wanted to know what the best way is to build an HTTP server for multiple requests. My requirements for the HTTP server are simple:

- support multiple requests (where each request may run a LONG Python script)

So far I have the following options:

- BaseHTTPServer (with threads; not good)
- Mod_Python (Apache integration)
- CherryPy?
- Any other?
2011/10/02
[ "https://Stackoverflow.com/questions/7629753", "https://Stackoverflow.com", "https://Stackoverflow.com/users/558397/" ]
I have had very good luck with the CherryPy web server, one of the oldest and most solid of the pure-Python web servers. Just write your application as a WSGI callable and it should be easy to run under CherryPy's multi-threaded server. <http://www.cherrypy.org/>
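A minimal sketch of that setup, assuming a reasonably recent CherryPy; the WSGI callable here is just a placeholder for your application:

```python
import cherrypy

def app(environ, start_response):
    # a plain WSGI callable; CherryPy's thread-pooled server handles concurrency
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from a worker thread\n']

cherrypy.tree.graft(app, '/')   # mount the WSGI app at the site root
cherrypy.engine.start()         # starts the multi-threaded HTTP server
cherrypy.engine.block()
```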
Indeed, the HTTP servers provided with the standard Python library are meant only for light-duty use; for moderate scaling (hundreds of concurrent connections), mod_wsgi in Apache is a great choice. If your needs are greater than that (tens of thousands of concurrent connections), you'll want to look at an asynchronous framework, such as Twisted or Tornado. The general structure of an asynchronous application is quite different, so if you think you're likely to need to go down that route, you should definitely build your project in one of those frameworks from the start.
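For a feel of the asynchronous style, here is the canonical Tornado hello-world (a sketch only; long-running work would still need to be offloaded so it doesn't stall the event loop):

```python
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        # handlers share a single event loop, so they must not block
        self.write("Hello, world")

app = tornado.web.Application([(r"/", MainHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
```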
17,764
18,267,454
The SQL expression: ```sql select * from order where status=0 and adddate(created_time, interval 1 day)>now(); ``` Python code: ```python from sqlalchemy.sql.expression import func, text from datetime import datetime closed_orders = DBSession.query(Order).filter(func.dateadd(Order.create_time, text('interval 1 day'))>datetime.now()).all() ``` but this is wrong. How do I do it correctly? Thanks. REF: [Using DATEADD in sqlalchemy](https://stackoverflow.com/questions/15572292/using-dateadd-in-sqlalchemy)
2013/08/16
[ "https://Stackoverflow.com/questions/18267454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2688431/" ]
Try this: ``` from sqlalchemy import func import datetime DBSession.query(Order)\ .filter(func.ADDDATE(Order.create_time,1)>datetime.datetime.now()) ```
presto: ``` extract('hour', cast(t_table.open_time,TIMESTAMP)) - 5 == 12 extract('dow', cast(cast(t_table.open_time, TIMESTAMP) - 5,TIMESTAMP)) == 3 ```
17,766
10,618,956
I want to implement a symbol type, which keeps track of the symbols we already have(saved in `_sym_table`), and return them if they exist, or create new ones otherwise. The code: ``` # -*- coding: utf-8 -*- _sym_table = {} class Symbol(object): def __new__(cls, sym): if sym not in _sym_table: return super().__new__(cls) else: return _sym_table[sym] def __init__(self, sym): self.sym = sym _sym_table[sym] = self def __str__(self): return self.sym def __cmp__(self, other): return self is other def __hash__(self): return self.sym.__hash__() ``` But when I call `copy.deepcopy` on a list of such `Symbol` instances, exception is raised: ``` a = Symbol('a') b = Symbol('b') s = [a, b] t = copy.deepcopy(s) ``` Error messages: ``` Traceback (most recent call last): File "xxx.py", line 7, in <module> t = copy.deepcopy(s) File "/usr/lib/python3.2/copy.py", line 147, in deepcopy y = copier(x, memo) File "/usr/lib/python3.2/copy.py", line 209, in _deepcopy_list y.append(deepcopy(a, memo)) File "/usr/lib/python3.2/copy.py", line 174, in deepcopy y = _reconstruct(x, rv, 1, memo) File "/usr/lib/python3.2/copy.py", line 285, in _reconstruct y = callable(*args) File "/usr/lib/python3.2/copyreg.py", line 88, in __newobj__ return cls.__new__(cls, *args) TypeError: __new__() takes exactly 2 arguments (1 given) ``` So my questions are: * How can I make a deep copy on these objects with self-defined `__new__` methods? * And any suggestions about when and how to use `copy.deepcopy`? Thanks a lot!
2012/05/16
[ "https://Stackoverflow.com/questions/10618956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/403367/" ]
One problem is that `deepcopy` and `copy` have no way of knowing which arguments to pass to `__new__`; therefore they only work with classes that don't require constructor arguments. The reason why you can have `__init__` arguments is that `__init__` isn't called when copying an object, but `__new__` must be called to create the new object. So if you want to control copying, you'll have to define the special `__copy__` and `__deepcopy__` methods: ``` def __copy__(self): return self def __deepcopy__(self, memo): return self ``` By the way, [singletons](http://tech.puredanger.com/2007/07/03/pattern-hate-singleton/) are [evil](http://blogs.msdn.com/b/scottdensmore/archive/2004/05/25/140827.aspx) and not really needed in Python.
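A quick check of the effect, assuming those two methods are added to the `Symbol` class from the question:

```python
import copy

a = Symbol('a')
s = [a, Symbol('b')]
t = copy.deepcopy(s)

assert t is not s   # the list itself is copied...
assert t[0] is a    # ...but each symbol is still the same singleton object
```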
Seems to me you want the Symbol instances to be singletons. Deepcopy, however, is supposed to be used when you want an exact copy of an instance, i.e. a different instance that is equal to the original. So the usage here kind of contradicts the purpose of deepcopy. If you want to make it work anyhow, you can define the [__deepcopy__](http://docs.python.org/py3k/library/copy.html#copy.deepcopy) method on *Symbol*.
17,767
70,351,208
I am trying to fit some `experimental data (x and y)` with a `custom function (Srt)` and using `scipy.optimize.curve_fit()`: Reading the data and defining the function, using dummy values (10,10) for Km and Vmax (which are to be determined using the curve fit) works fine, as long as I use `np.asarray()`: ``` from scipy.special import lambertw from scipy.optimize import curve_fit import numpy as np import scipy def Srt(t,s,Km,Vmax): print("t",type(t)) print("t",t) print("last element of t:",t[-1]) print("s",type(s)) print("s",s) print("last element of s:",s[-1]) Smax = s[-1] # Substrate concentration at end of reaction t0 = t[0] # time=0 (beginning of reaction) s0 = s[0] # Substrate concentration at time = 0 (beginning of reaction) E = np.exp(((Smax - s0) - Vmax*(t+t0))/Km) L = lambertw(((Smax - s0)/Km)*E) y = Smax - Km*L return y x=[2.780000e-03,2.778000e-02,5.278000e-02,7.778000e-02,1.027800e-01 ,1.277800e-01,1.527800e-01,1.777800e-01,2.027800e-01,2.277800e-01 ,2.527800e-01,2.777800e-01,3.027800e-01,3.277800e-01,3.527800e-01] y=[0.44236,0.4308,0.42299,0.41427,0.40548,0.39908,0.39039,0.3845,0.37882 ,0.37411,0.36759,0.36434,0.35864,0.35508,0.35138] xdata = np.asarray(x) ydata = np.asarray(y) Srt(xdata, ydata,10,10) ``` If I do not use `np.asarray`, I get a "Type Error": ``` Srt(x, y,10,10) ``` [![TypeError](https://i.stack.imgur.com/SWE3X.png)](https://i.stack.imgur.com/SWE3X.png) When I continue to use curve\_fit to make the fit for Vmax and Km with: ``` parameters, covariance = scipy.optimize.curve_fit(Srt, xdata, ydata) ``` I get into trouble: If I understand the error message correctly, for some reason, the array `ydata` is not an array anymore when it is read in as `s` ?!? [![ydata_S_Error](https://i.stack.imgur.com/3Z6rG.png)](https://i.stack.imgur.com/3Z6rG.png) What do I have to change in my code so that I can work with my function `Srt` and `curve_fit` ? **EDIT: Full output of code:** ``` t <class 'numpy.ndarray'> t [0.00278 0.02778 0.05278 0.07778 0.10278 0.12778 0.15278 0.17778 0.20278 0.22778 0.25278 0.27778 0.30278 0.32778 0.35278] last element of t: 0.35278 s <class 'numpy.ndarray'> s [0.44236 0.4308 0.42299 0.41427 0.40548 0.39908 0.39039 0.3845 0.37882 0.37411 0.36759 0.36434 0.35864 0.35508 0.35138] last element of s: 0.35138 t <class 'numpy.ndarray'> t [0.00278 0.02778 0.05278 0.07778 0.10278 0.12778 0.15278 0.17778 0.20278 0.22778 0.25278 0.27778 0.30278 0.32778 0.35278] last element of t: 0.35278 s <class 'numpy.float64'> s 1.0 --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-23-5ce34d06b849> in <module> 33 #then the problems start 34 ---> 35 parameters, covariance = scipy.optimize.curve_fit(Srt, xdata, ydata) ~\anaconda3\lib\site-packages\scipy\optimize\minpack.py in curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma, check_finite, bounds, method, jac, **kwargs) 761 # Remove full_output from kwargs, otherwise we're passing it in twice. 
762 return_full = kwargs.pop('full_output', False) --> 763 res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs) 764 popt, pcov, infodict, errmsg, ier = res 765 ysize = len(infodict['fvec']) ~\anaconda3\lib\site-packages\scipy\optimize\minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag) 386 if not isinstance(args, tuple): 387 args = (args,) --> 388 shape, dtype = _check_func('leastsq', 'func', func, x0, args, n) 389 m = shape[0] 390 ~\anaconda3\lib\site-packages\scipy\optimize\minpack.py in _check_func(checker, argname, thefunc, x0, args, numinputs, output_shape) 24 def _check_func(checker, argname, thefunc, x0, args, numinputs, 25 output_shape=None): ---> 26 res = atleast_1d(thefunc(*((x0[:numinputs],) + args))) 27 if (output_shape is not None) and (shape(res) != output_shape): 28 if (output_shape[0] != 1): ~\anaconda3\lib\site-packages\scipy\optimize\minpack.py in func_wrapped(params) 461 if transform is None: 462 def func_wrapped(params): --> 463 return func(xdata, *params) - ydata 464 elif transform.ndim == 1: 465 def func_wrapped(params): <ipython-input-23-5ce34d06b849> in Srt(t, s, Km, Vmax) 10 print("s",type(s)) 11 print("s",s) ---> 12 print("last element of s:",s[-1]) 13 Smax = s[-1] # Substrate concentration at end of reaction 14 t0 = t[0] # time=0 (beginning of reaction) IndexError: invalid index to scalar variable. ``` **EDIT 2** FULLY Functional code, thanks to Jonathan Weine. Fit is suboptimal due to "bad" experimental data, I am playing around with my full dataset now :D ``` from scipy.special import lambertw from scipy.optimize import curve_fit import numpy as np import scipy def Srt(t, Smax: float, s0: float, Km: float, Vmax: float): t0 = t[0] # time=0 (beginning of reaction) E = np.exp(((Smax - s0) - Vmax*(t+t0))/Km) # L = lambertw(((Smax - s0)/Km)*E) # this apparently can be complex which causes another Error L = np.abs(lambertw(((Smax - s0)/Km)*E)) y = Smax + Km*L return y y = [0.44236,0.4308,0.42299,0.41427,0.40548,0.39908,0.39039,0.3845,0.37882 ,0.37411,0.36759,0.36434,0.35864,0.35508,0.35138,0.34748,0.34437,0.34143 ,0.3391,0.3348,0.33345,0.31404,0.30212,0.29043,0.28026,0.27331,0.26672 ,0.26187,0.25645,0.25208,0.24736,0.244,0.24056,0.23798,0.23359,0.23138 ,0.22845,0.22566,0.22384,0.22112,0.21894,0.21672,0.21466,0.21316,0.21209 ,0.20941,0.20823,0.20687,0.2056,0.20324,0.20266,0.20095,0.19935,0.19895 ,0.19746,0.19616,0.19486,0.19419,0.19382,0.19301,0.19085,0.19108,0.19024 ,0.18933,0.18839,0.18706,0.18643,0.18623,0.18569,0.18469,0.18381,0.18341 ,0.18331,0.18324,0.18222,0.18106,0.18039,0.18022,0.17906,0.17935,0.17842 ,0.17834,0.1781,0.17731,0.17704,0.1766,0.17654,0.1761,0.17568,0.1744 ,0.17453,0.17393,0.17325,0.17329,0.17302,0.17347,0.17344,0.17233,0.17228 ,0.17208,0.17177,0.1712,0.17076,0.171,0.17043,0.17057,0.17003,0.16965 ,0.16923,0.16944,0.16898,0.16879,0.16809,0.16821,0.16794,0.16831,0.16779 ,0.16805,0.16765,0.16762,0.16695,0.16694,0.1669,0.16642,0.16583,0.166 ,0.16625,0.16575,0.1658,0.16553,0.16565,0.1654,0.16419,0.16487,0.16467 ,0.16452,0.16433,0.16468,0.16423,0.16427,0.16372,0.16388,0.16388,0.16394 ,0.16382,0.1631,0.16353,0.1638,0.16304,0.163,0.16296,0.16295,0.16284 ,0.16275,0.16214,0.16243,0.16211,0.16207,0.16185,0.16187,0.16176,0.16168 ,0.16195,0.16138,0.16177,0.16121,0.16163,0.16121,0.161,0.16114,0.16122 ,0.16096,0.16105,0.16102,0.16068,0.16031,0.16028,0.16051,0.16045,0.16017 ,0.15977,0.15927,0.16007,0.15953,0.15933,0.1596,0.15911,0.15903,0.15884 
,0.15856,0.15889,0.15888,0.15861,0.15849,0.158,0.15822,0.15776,0.15759 ,0.15734,0.15757,0.15718,0.15699,0.15747,0.15692,0.15701,0.15715,0.15675 ,0.15732,0.15687,0.15659,0.15664,0.15635,0.15633,0.15591] #csvFile.iloc[0:500,9] x = [2.780000e-03,2.778000e-02,5.278000e-02,7.778000e-02,1.027800e-01 ,1.277800e-01,1.527800e-01,1.777800e-01,2.027800e-01,2.277800e-01 ,2.527800e-01,2.777800e-01,3.027800e-01,3.277800e-01,3.527800e-01 ,3.777800e-01,4.027800e-01,4.277800e-01,4.527800e-01,4.777800e-01 ,5.027800e-01,7.538900e-01,1.003890e+00,1.253890e+00,1.503890e+00 ,1.753890e+00,2.003890e+00,2.253890e+00,2.503890e+00,2.753890e+00 ,3.003890e+00,3.253890e+00,3.503890e+00,3.753890e+00,4.003890e+00 ,4.253890e+00,4.503890e+00,4.753890e+00,5.003890e+00,5.253890e+00 ,5.503890e+00,5.753890e+00,6.003890e+00,6.253890e+00,6.503890e+00 ,6.753890e+00,7.003890e+00,7.253890e+00,7.503890e+00,7.753890e+00 ,8.003890e+00,8.253890e+00,8.503890e+00,8.753890e+00,9.003890e+00 ,9.253890e+00,9.503890e+00,9.753890e+00,1.000389e+01,1.025389e+01 ,1.050389e+01,1.075389e+01,1.100389e+01,1.125389e+01,1.150389e+01 ,1.175389e+01,1.200389e+01,1.225389e+01,1.250389e+01,1.275389e+01 ,1.300389e+01,1.325389e+01,1.350389e+01,1.375389e+01,1.400389e+01 ,1.425389e+01,1.450389e+01,1.475389e+01,1.500389e+01,1.525389e+01 ,1.550389e+01,1.575389e+01,1.600389e+01,1.625389e+01,1.650389e+01 ,1.675389e+01,1.700389e+01,1.725389e+01,1.750389e+01,1.775389e+01 ,1.800389e+01,1.825389e+01,1.850389e+01,1.875389e+01,1.900389e+01 ,1.925389e+01,1.950389e+01,1.975389e+01,2.000389e+01,2.025389e+01 ,2.050389e+01,2.075389e+01,2.100389e+01,2.125389e+01,2.150389e+01 ,2.175389e+01,2.200389e+01,2.225389e+01,2.250389e+01,2.275389e+01 ,2.300389e+01,2.325389e+01,2.350389e+01,2.375389e+01,2.400389e+01 ,2.425389e+01,2.450389e+01,2.475389e+01,2.500389e+01,2.525389e+01 ,2.550389e+01,2.575389e+01,2.600389e+01,2.625389e+01,2.650389e+01 ,2.675389e+01,2.700389e+01,2.725389e+01,2.750389e+01,2.775389e+01 ,2.800389e+01,2.825389e+01,2.850389e+01,2.875389e+01,2.900389e+01 ,2.925389e+01,2.950389e+01,2.975389e+01,3.000389e+01,3.025389e+01 ,3.050389e+01,3.075389e+01,3.100389e+01,3.125389e+01,3.150389e+01 ,3.175389e+01,3.200389e+01,3.225389e+01,3.250389e+01,3.275389e+01 ,3.300389e+01,3.325389e+01,3.350389e+01,3.375389e+01,3.400389e+01 ,3.425389e+01,3.450389e+01,3.475389e+01,3.500389e+01,3.525389e+01 ,3.550389e+01,3.575389e+01,3.600389e+01,3.625389e+01,3.650389e+01 ,3.675389e+01,3.700389e+01,3.725389e+01,3.750389e+01,3.775389e+01 ,3.800389e+01,3.825389e+01,3.850389e+01,3.875389e+01,3.900389e+01 ,3.925389e+01,3.950389e+01,3.975389e+01,4.000389e+01,4.025389e+01 ,4.050389e+01,4.075389e+01,4.100389e+01,4.125389e+01,4.150389e+01 ,4.175389e+01,4.200389e+01,4.225389e+01,4.250389e+01,4.275389e+01 ,4.300389e+01,4.325389e+01,4.350389e+01,4.375389e+01,4.400389e+01 ,4.425389e+01,4.450389e+01,4.475389e+01,4.500389e+01,4.525389e+01 ,4.550389e+01,4.575389e+01,4.600389e+01,4.625389e+01,4.650389e+01 ,4.675389e+01,4.700389e+01,4.725389e+01,4.750389e+01,4.775389e+01 ,4.800389e+01,4.825389e+01,4.850389e+01,4.875389e+01] # csvFile.iloc[0:500,0] xdata = np.array(x) ydata = np.array(y) parameters, covariance = scipy.optimize.curve_fit(Srt, xdata, ydata) Km = parameters[1] Vmax = parameters[0] fit_y = Srt(xdata, ydata[-1],ydata[0], Km, Vmax) print("Km: ", parameters[1], "Vmax: ", parameters[0]) plt.plot(xdata, fit_y, '-', color="green",linewidth=1) plt.plot(xdata, ydata, 'o', color="red") ```
2021/12/14
[ "https://Stackoverflow.com/questions/70351208", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5466187/" ]
Please have a closer look at the [documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) of the `curve_fit` function, where it states that `ydata` must nominally be the result of `func(xdata...)`. So the `ydata` that you hand to `curve_fit` is never passed as an argument to the call of `Srt` the way you indicated in your manual call. Furthermore, the parameters to be estimated must all have the same (scalar) shape, which means that you have to define `Smax` and `s0` as float inputs. I modified your example such that it actually runs: ```py from scipy.special import lambertw from scipy.optimize import curve_fit import numpy as np import scipy def Srt(t, Smax: float, s0: float, Km: float, Vmax: float): t0 = t[0] # time=0 (beginning of reaction) E = np.exp(((Smax - s0) - Vmax*(t+t0))/Km) # L = lambertw(((Smax - s0)/Km)*E) # this apparently can be complex which causes another Error L = np.abs(lambertw(((Smax - s0)/Km)*E)) y = Smax - Km*L return y x=[2.780000e-03,2.778000e-02,5.278000e-02,7.778000e-02,1.027800e-01 ,1.277800e-01,1.527800e-01,1.777800e-01,2.027800e-01,2.277800e-01 ,2.527800e-01,2.777800e-01,3.027800e-01,3.277800e-01,3.527800e-01] y=[0.44236,0.4308,0.42299,0.41427,0.40548,0.39908,0.39039,0.3845,0.37882 ,0.37411,0.36759,0.36434,0.35864,0.35508,0.35138] xdata = np.array(x) ydata = np.array(y) parameters, covariance = scipy.optimize.curve_fit(Srt, xdata, ydata) ``` **NOTE**: The `np.abs` inside the function does not make sense physically, but the result of `lambertw` apparently can be complex. In that case an error is raised, as there is no safe casting rule, causing `curve_fit` to abort.
Your first error is produced by the `t+t0` expression. If `t` is the list `x`, that's a list "concatenate" expression, which is fine for `[1,2,3]+[4,5]` but not for `[1,2,3]+5`. That's why `x` and `y` have to be arrays. In the second error, what did the ``` print("s",type(s)) print("s",s) ``` show? Apparently `s` is not an array, or even a list.
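The first error is easy to reproduce in isolation, which may make the difference clearer (a tiny demonstration, independent of the fitting code):

```python
t = [0.1, 0.2, 0.3]
print(t + [0.4])    # list concatenation: [0.1, 0.2, 0.3, 0.4]
# t + 0.05          # TypeError: can only concatenate list (not "float") to list

import numpy as np
print(np.asarray(t) + 0.05)  # elementwise arithmetic: [0.15 0.25 0.35]
```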
17,770
13,961,140
I am a beginner to python and am at the moment having trouble using the command line. I have a script test.py (which only contains `print("Hello.")`), and it is located in the map C:\Python27. In my system variables, I have specified python to be C:\Python27 (I have other versions of Python installed on my computer as well). I thought this should be enough to run `python test.py` in the command line, but when I do so I get this: ``` File "<stdin>", line 1 python test.py ^ SyntaxError: invalid syntax ``` What is wrong? Thanks in advance!
2012/12/19
[ "https://Stackoverflow.com/questions/13961140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913042/" ]
Don't type `python test.py` from inside the Python interpreter. Type it at the command prompt, like so: ![cmd.exe](https://i.stack.imgur.com/gc2Q1.png) ![python test.py](https://i.imgur.com/TFUBm.png)
Running from the command line means running from the terminal or DOS shell. You are running it from Python itself.
17,771
61,648,271
**Piece of Code**

```
def wishListCount():
    wishlist_count = len(session['Wishlist'])
    if len(session['Wishlist']) <= 0:
        return 0
    else:
        return wishlist_count

@app.route('/wishlist', methods=['GET', 'POST', 'DELETE'])
def wishlist():
    if request.method == 'POST':
        product_id = int(request.form['product_id'])
        ListItems = [product_id]
        if 'Wishlist' in session:
            if product_id in session['Wishlist']:
                print("This product is already in wishList!")
            else:
                session['Wishlist'] = mergeDict(session['Wishlist'], ListItems)
        else:
            session['Wishlist'] = ListItems
        wishlist_count = wishListCount()
```

**Heroku logs**

```
State changed from starting to up
2020-05-07T00:37:33.000000+00:00 app[api]: Build succeeded
2020-05-07T00:37:39.445026+00:00 heroku[router]: at=info method=GET path="/" host=intelli-supermart.herokuapp.com request_id=bc70627f-fbff-4722-8b7e-f97c18e7e2d5 fwd="203.128.16.105" dyno=web.1 connect=1ms service=102ms status=500 bytes=470 protocol=https
2020-05-07T00:37:39.441994+00:00 app[web.1]: [2020-05-07 00:37:39,440] ERROR in app: Exception on / [GET]
2020-05-07T00:37:39.442004+00:00 app[web.1]: Traceback (most recent call last):
2020-05-07T00:37:39.442005+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
2020-05-07T00:37:39.442005+00:00 app[web.1]: response = self.full_dispatch_request()
2020-05-07T00:37:39.442006+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
2020-05-07T00:37:39.442006+00:00 app[web.1]: rv = self.handle_user_exception(e)
2020-05-07T00:37:39.442007+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
2020-05-07T00:37:39.442007+00:00 app[web.1]: reraise(exc_type, exc_value, tb)
2020-05-07T00:37:39.442007+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
2020-05-07T00:37:39.442008+00:00 app[web.1]: raise value
2020-05-07T00:37:39.442009+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
2020-05-07T00:37:39.442009+00:00 app[web.1]: rv = self.dispatch_request()
2020-05-07T00:37:39.442009+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
2020-05-07T00:37:39.442010+00:00 app[web.1]: return self.view_functions[rule.endpoint](**req.view_args)
2020-05-07T00:37:39.442010+00:00 app[web.1]: File "/app/app.py", line 117, in index
2020-05-07T00:37:39.442011+00:00 app[web.1]: wishlist_count = wishListCount()
2020-05-07T00:37:39.442011+00:00 app[web.1]: File "/app/app.py", line 79, in wishListCount
2020-05-07T00:37:39.442011+00:00 app[web.1]: wishlist_count = len(session['Wishlist'])
2020-05-07T00:37:39.442012+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/werkzeug/local.py", line 377, in <lambda>
2020-05-07T00:37:39.442012+00:00 app[web.1]: __getitem__ = lambda x, i: x._get_current_object()[i]
2020-05-07T00:37:39.442012+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/sessions.py", line 84, in __getitem__
2020-05-07T00:37:39.442013+00:00 app[web.1]: return super(SecureCookieSession, self).__getitem__(key)
2020-05-07T00:37:39.442019+00:00 app[web.1]: KeyError: 'Wishlist'
2020-05-07T00:37:39.445086+00:00 app[web.1]: 10.11.150.203 - - [07/May/2020:00:37:39 +0000] "GET / HTTP/1.1" 500 290 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36"
```
2020/05/07
[ "https://Stackoverflow.com/questions/61648271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13487131/" ]
To answer my own question: it was happening because the Spring Boot version was not ready to handle multistage builds, but after upgrading the service to 2.3.x I can build.
I think it is because the JAR file is not in a supported form; that's why jarmode can't process it. Jarmode is a special mechanism used for extracting layered JARs. You can check out <https://spring.io/blog/2020/01/27/creating-docker-images-with-spring-boot-2-3-0-m1> for details.
17,781
11,121,352
I deleted Python .pyc files from my local repo, and what I thought I did was delete them from the remote GitHub repo as well. I pushed all changes. The files are still in the remote repo but not on my local machine. How do I remove files from the GitHub repo? I tried the following: ``` git rm classes/file.pyc git add . git ``` and even: ``` git rm --cached classes/file.pyc ``` Then when I try to check out the files I get this error: ``` error: pathspec 'classes/redis_ha.pyc' did not match any file(s) known to git. ``` I now don't know what else to do. As of now I have a totally corrupted git repo.
2012/06/20
[ "https://Stackoverflow.com/questions/11121352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1203556/" ]
You should not do `git add`. That's all ``` git rm classes/file.pyc git commit -m"bla bla bla" git push ```
``` git commit -am "A file was deleted" git push ```
17,782
39,545,452
I have a PHP script that should (I think) run a Python script to control the Energenie radio-controlled plug sockets depending on which button is selected. It seems to work in that it echoes back the correct message when the button is pressed, but the Python scripts don't appear to run. I have added the line: www-data ALL=NOPASSWD: /usr/bin/python /home/pi/lampon.py which should give the Apache user privileges to run the Python script, at least for turning on the power socket, but it doesn't work. The script itself does work when run via the Pi command line itself. Any suggestions? (The code for the PHP is below.) ``` <html> <head> <meta name="viewport" content="width=device-width" /> <title>LED Control</title> </head> <body> LED Control: <form method="get" action="energenie.php"> <input type="submit" value="ON" name="on"> <input type="submit" value="OFF" name="off"> </form> <?php if(isset($_GET['on'])){ shell_exec("python /home/pi/lampon.py"); echo "LED is on"; } else if(isset($_GET['off'])){ shell_exec("python /home/pi/lampoff.py"); echo "LED is off"; } ?> </body> </html> ```
2016/09/17
[ "https://Stackoverflow.com/questions/39545452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6842362/" ]
[Parallel.Invoke](https://msdn.microsoft.com/en-us/library/dd992634(v=vs.110).aspx) method: ``` Parallel.Invoke( () => method1(), () => method2(), () => method3(), () => method4() ) ``` Add namespace `System.Threading.Tasks`
You can create a list of `Action` delegate where each delegate is a call to a given method: ``` List<Action> actions = new List<Action> { method1, method2, method3 }; ``` And then use [`Parallel.ForEach`](https://msdn.microsoft.com/en-us/library/dd992001(v=vs.110).aspx) to call them in parallel: ``` Parallel.ForEach(actions, action => action()); ```
17,783
57,854,621
I couldn't find any question related to this subject. Does Python execute a function only after the previously called function is finished, or is there any parallel execution? **For example:** ``` def a(): print('a') def b(): print('b') a() b() ``` So in this example I would like to know whether I can always be sure that `function b` is called **after** `function a` is finished, even if `function a` is a very long script. And what is the definition of this behaviour, so I can look up documentation regarding this matter? Thanks!!
2019/09/09
[ "https://Stackoverflow.com/questions/57854621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9247792/" ]
Defining a function doesn't execute it. Since you defined `a` first, the function object for `a` is created first, and the same goes for the calls: `b()` only starts after `a()` has returned. You can take the execution as a timeline running from top to bottom.
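A small demonstration, using `sleep` as a stand-in for a long-running body:

```python
import time

def a():
    time.sleep(2)            # simulate a very long script
    print('a finished')

def b():
    print('b starts only after a has returned')

a()
b()   # always prints second, no matter how long a() takes
```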
There is no parallel execution of functions in Python by default. The above functions will be executed in the same sequence in which they were called, regardless of the computational workload of either function.
17,784
69,165,968
I'm trying to run a legacy React app locally for the first time. I'm on a new Mac M1 with Big Sur 11.5.2. My node version is 16.9.0, and I made python3 the default (although the app seems to be looking for python2). I also upgraded CommandLineTools to the latest version. But when I do a simple `npm install`, I get lots of warnings, and finally this error: ```none npm ERR! /Users/cd/.node-gyp/16.9.0/include/node/v8-internal.h:489:38: error: no template named 'remove_cv_t' in namespace 'std'; did you mean 'remove_cv'? npm ERR! !std::is_same<Data, std::remove_cv_t<T>>::value>::Perform(data); npm ERR! ~~~~~^~~~~~~~~~~ npm ERR! remove_cv npm ERR! /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1/type_traits:776:50: note: 'remove_cv' declared here npm ERR! template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_cv npm ERR! ^ npm ERR! 1 error generated. npm ERR! make: *** [Release/obj.target/binding/src/binding.o] Error 1 npm ERR! gyp ERR! build error npm ERR! gyp ERR! stack Error: `make` failed with exit code: 2 npm ERR! gyp ERR! stack at ChildProcess.onExit (/Users/cd/develop/storybook/webclient/node_modules/node-gyp/lib/build.js:262:23) npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:394:28) npm ERR! gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12) npm ERR! gyp ERR! System Darwin 20.6.0 npm ERR! gyp ERR! command "/opt/homebrew/Cellar/node/16.9.0/bin/node" "/Users/cd/develop/storybook/webclient/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library=" npm ERR! gyp ERR! cwd /Users/cd/develop/storybook/webclient/node_modules/node-sass npm ERR! gyp ERR! node -v v16.9.0 npm ERR! gyp ERR! node-gyp -v v3.8.0 npm ERR! gyp ERR! not ok npm ERR! Build failed with error code: 1 ``` I'm not proficient enough with node to understand exactly how to troubleshoot this, beyond the steps I've already taken. Clearly something is wrong with node-sass, so I installed it separately with: ``` sudo npm install --unsafe-perm -g node-sass ``` That works, but when I re-run `npm install` to get all the dependencies, I get the same error.
2021/09/13
[ "https://Stackoverflow.com/questions/69165968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1660256/" ]
Upgrading `node-sass` to a version higher than 6.0.1 (mine was 4.0.0) solved this issue. Ref: [error: no template named 'remove\_cv\_t' in namespace 'std'; did you mean 'remove\_cv'?](https://stackoverflow.com/questions/67241196/error-no-template-named-remove-cv-t-in-namespace-std-did-you-mean-remove)
Try this (note the flag is `--save-dev`): ``` rm -rf node_modules package-lock.json npm install --save-dev node-sass npm install ```
17,789
13,295,064
As part of my course at university I am learning python. A task I have been trying to complete is to write a program that will print out random letters and their corresponding positions in "antidisestablishmentarianism". It will then print the remaining letters on a single line. I have been trying to do this in probably a crazy weird roundabout way - populating a list with the chosen values and removing these characters from the original. I realize my program is probably all wrong and broken; I only started learning lists today! ``` import random word = "antidisestablishmentarianism" wordList =["antidisestablishmentarianism"] print("The Word is:",word,"\n") lengthWord = len(word) usedValues=[] for i in range(5): position = random.randrange(0,lengthWord) print("word[",position, "]\t", word [position]) usedValues=[position] for ch in wordList: wordList.remove([usedValues]) print("The remaining letters are",WordList, sep='') ```
2012/11/08
[ "https://Stackoverflow.com/questions/13295064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1809406/" ]
I think part of the problem is that you're creating and manipulating your `wordList` and `usedValues` lists incorrectly. To create a list of characters as `wordList`, use `list(word)`. To add a used index to `usedValues`, use `usedValues.append(position)`. There's also an issue with how you remove the used values from the word list. Here's your code with those errors fixed: ``` import random word = "antidisestablishmentarianism" wordList = list(word) print("The Word is:",word,"\n") lengthWord = len(word) usedValues=[] for i in range(5): position = random.randrange(0,lengthWord) print("word[",position, "]\t", word[position]) usedValues.append(position) for index in usedValues: wordList.pop(index) print("The remaining letters are", wordList) ``` This will mostly work. However, there's still a logic error. If you get the same random position twice in the first loop, you'll report the same character each time, but when you remove them from the list later, you'll end up popping two different letters. Similarly, if you remove a letter from near the start of the word, the indexes you remove later on will be incorrect. You can even get an `IndexError` if one of the last positions selected was near the end of the word. One fix would be to remove the selected values from the list immediately, within the first loop. You'll need to explicitly call `len` each cycle then (since it changes each time through) but other than that everything should work correctly. Or here's how I'd solve the problem. Instead of picking five specific indexes and removing them from the list, I'd `random.shuffle` a list of all the indexes and take the first five. The rest can then be either printed out in their random order, or sorted first to give the impression of the letters being removed from the original word. ``` import random word = "antidisestablishmentarianism" indexes = list(range(len(word))) random.shuffle(indexes) for i in indexes[:5]: print("word[%d] is '%s'" % (i, word[i])) rest = sorted(indexes[5:]) # or just use indexes[5:] to keep random order print("The remaining letters are '%s'" % "".join(word[i] for i in rest)) ```
There are a few problems with your code as it stands. Firstly, this line: ``` wordList =["antidisestablishmentarianism"] ``` doesn't do what you think - it actually creates a list containing the single item `"antidisestablishmentarianism"`. To convert a string into a list of characters, you can use `list()` - and since you've already have the variable `word`, there's no need to type the word in again. On a side note, `wordList` isn't a very good variable name. Apart from the fact that it uses camelCase rather than the more pythonic underscore\_separated style, what you actually want here is a list of the *characters* in the word. So, that line can be replaced with: ``` characters = list(word) ``` Moving on ... this line: ``` lengthWord = len(word) ``` is redundant - you only reference `lengthWord` once in your code, so you might as well just replace that reference with `len(word)` where you use it. Your line: ``` usedValues=[position] ``` also isn't doing what you think: it's *replacing* `usedValues` entirely, with a list containing only the latest position in your loop. To append a value to a list, use `list.append()`: ``` used_positions.append(position) ``` (I've given the variable a more accurate name). Your next problem is this block: ``` for ch in wordList: wordList.remove([usedValues]) ``` First of all, you really want to check each of the positions you've previously stored, not each of the characters in the word. Your use of `list.remove()` is also wrong: you can't give a list of values to remove like that, but anyway `list.remove()` will remove the first instance of a value from a list, and what you want to do is remove the item at a particular position, which is what `list.pop()` is for: ``` for position in sorted(used_positions, reverse=True): characters.pop(position) ``` We're using a copy of `used_positions` sorted in reverse so that when we remove an item, the remaining positions in `used_positions` don't slide out of alignment with what's left of `characters`[\*]. Your final problem is the last line: ``` print("The remaining letters are",WordList, sep='') ``` If you want to print the contents of a list separated by `''`, this isn't the way to do it. Instead, you need `str.join()`: ``` print("The remaining letters are", "".join(characters)) ``` Putting all of those changes into practice, and tidying up a little, we end up with: ``` import random word = "antidisestablishmentarianism" characters = list(word) print("The Word is:", word, "\n") used_positions = [] for i in range(5): position = random.randrange(0, len(word)) print("word[",position, "]\t", word[position]) used_positions.append(position) for position in sorted(used_positions, reverse=True): characters.pop(position) print("The remaining letters are", "".join(characters)) ``` [\*] In fact, this throws up another problem: what if your code chooses the same position twice? I'll leave you to think about that one.
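For what it's worth, one way around the duplicate-position issue raised in both answers is `random.sample`, which draws distinct indexes in one call (a sketch, not part of either answer above):

```python
import random

word = "antidisestablishmentarianism"
positions = random.sample(range(len(word)), 5)   # 5 distinct positions
for p in positions:
    print("word[%d] is '%s'" % (p, word[p]))

chosen = set(positions)
remaining = "".join(ch for i, ch in enumerate(word) if i not in chosen)
print("The remaining letters are", remaining)
```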
17,790
45,415,081
I have Eclipse with PyDev and RSE installed on my local Windows machine. I want to remote-debug a Python application (Odoo 9.0) that is hosted on an Ubuntu 16.04 VPS. I have PyDev installed on the remote machine. I have been able to connect to the remote machine via SSH using a key for authentication, and I can browse the remote file system. Referring to the documentation here: <http://www.pydev.org/manual_adv_remote_debugger.html> and reading the comments in the file located at /usr/local/lib/python2.7/dist-packages/pydevd_file_utils.py, it would seem that I need to map the remote to the local file system. To me this implies that the code must exist on both the remote and local (Eclipse) machines. If this is the case, how do I keep them in sync? I want to be able to develop with my code base on the remote machine. Do I need to copy every change to my local machine? It feels like I'm missing part of the puzzle, and the documentation that I've found is not detailed enough to implement this. Please let me know what steps remain outstanding to implement remote debugging, and any implications for my workflow (such as having to copy all changes to both file systems).
2017/07/31
[ "https://Stackoverflow.com/questions/45415081", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6489912/" ]
If you want to develop the code all remotely (instead of locally), my suggestion is using sshfs, so you'd make all the changes there directly. You should even be able to create a shell script that acts as a remote interpreter in that case too (i.e. the interpreter may be any script, so you could chroot it or even run some Python through ssh). Note I haven't actually tested this, but in theory it should work ;)
I found a way to get remote editing and remote debugging going with Eclipse and PyDev from my Mac to a Debian Linux server (Bitnami setup). To set up remote editing and debugging, read these first: <https://www.pydev.org/manual_adv_remote_debugger.html> <https://sites.google.com/site/programmersnotebook/remote-development-of-python-scripts-on-raspberry-pi-with-eclipse> Notes on my install * Installed pydevd in the server Python environment (I did not need to copy pysrc as in the Raspberry Pi example instructions above). See links above for install steps. * Created a remote project using RSE (Eclipse Remote System Explorer). Using the RSE "Remote Shell" window you can right-click on the source directory and create a local Eclipse project that points at the server files. See links above for install steps. * Edited `pydevd_file_utils.py` in the server pydevd install directory. For me this was `/opt/python/lib/python3.7/site-packages`. If you're not sure where this is, enter the following in your server Python environment: `import pydevd; print(pydevd.__file__)`. Added `PATHS_FROM_ECLIPSE_TO_PYTHON = [('/Users/<myusername>/dev/test/RemoteSystemsTempFiles/<server ref in RSE>/opt/bitnami/apps/odoo/data/addons/13.0/test/test.py','/opt/bitnami/apps/odoo/data/addons/13.0/test/test.py')]`. Read the comments and place it near the example lower down. * You could add the following instead: `PATHS_FROM_ECLIPSE_TO_PYTHON = [(r'/Users/andrewlemay/esp/test/RemoteSystemsTempFiles/34.253.144.28/',r'/')]`, which means it would work for all RSE projects on the server. * Note the RemoteSystemsTempFiles dir is part of the Eclipse RSE path on your local machine. * Add an SSH remote port forwarding tunnel. This forwards the data from the server on port 5678 to client localhost:5678, to allow the server-side remote debugger to send info to the listening client debugger - see command below. With this I did not need an IP address in the settrace() command, or to configure my router to port forward to my local machine. * Info on SSH tunnels here: <https://www.ssh.com/ssh/tunneling/example> **To run** * Set up a secure SSH tunnel to the server with remote port forwarding on 5678 * Run the Python script on the server via a console or the RSE Remote Shell (Eclipse > Window > Other > Remote Systems > Remote Shell) **Run commands** **Client** I'm using a private shared key and I enter the following in a local terminal: `ssh -t -i "/Users/<username>/keys/<serverkeyname>.pem" <serverusername>@<serverIP> -R 5678:localhost:5678 -C -N` The process will block the terminal. End the process with CTRL-C when debugging is done, to close the tunnel. If you don't need a private shared key you can lose the `-t -i "/Users/<username>/keys/<serverkeyname>.pem"` part. Start the PyDev server in Eclipse by clicking the `PyDev:start the pydev server` button (you have to be in the Debug perspective). [PyDev:start the pydev server](https://i.stack.imgur.com/QLdzC.png) You should then get a message in the Console saying `Debug Server at port: 5678` **Server** You can use a server terminal or the Eclipse RSE Remote Shell window: `python3 test.py` The local Eclipse debug server should burst into life and allow debugging, breakpoints, etc. **Test code - test.py** ``` import os import sys import pydevd pydevd.settrace() i = 3 p = 'Hello!' * i print(p) if __name__ == '__main__': pass print("Hello world 4") for k, v in os.environ.items(): print(f'{k}={v}') ``` Hope this is useful to someone...
17,791
49,168,556
For my project I need to extract the CSS selectors for a given element that I will find through parsing. What I do is navigate to a page with Selenium and then, with Python and Beautiful Soup, parse the page and check whether there are any elements that I need the CSS selector of. For example I may try to find any input tags with id "print". `soup.find_all('input', {'id': 'print'})` If I manage to find such an element, I want to extract its CSS selector, something like "input#print". I don't just search by ids but also by a combination of classes and regular expressions. Is there any way to achieve this?
2018/03/08
[ "https://Stackoverflow.com/questions/49168556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7445289/" ]
Try this. ``` from scrapy.selector import Selector from selenium import webdriver link = "https://example.com" xpath_desire = "normalize-space(//input[@id = 'print'])" path1 = "./chromedriver" driver = webdriver.Chrome(executable_path=path1) driver.get(link) temp_test = driver.find_element_by_css_selector("body") elem = temp_test.get_attribute('innerHTML') value = Selector(text=elem).xpath(xpath_desire).extract()[0] print(value) ```
Ok, I am totally new to Python so I am sure that there is a better answer for this, but here's my two cents :) ``` import requests from bs4 import BeautifulSoup url = "https://stackoverflow.com/questions/49168556/extract-css-selector-for-an-element-with-selenium" element = 'a' idName = 'nav-questions' page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') tags = soup.find_all(element, id=idName) if tags: for tag in tags: getClassNames = tag.get('class') classNames = ''.join(str('.' + x) for x in getClassNames) print element + '#' + idName + classNames else: print ':(' ``` This would print something like: ``` a#nav-questions.-link.js-gps-track ```
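Combining the two ideas above, here is a hedged sketch that derives a selector string straight from whatever tag BeautifulSoup finds (the sample HTML is mine, and a tag without a `class` attribute relies on the `get` default):

```python
from bs4 import BeautifulSoup

html = '<input id="print" class="btn primary" type="submit">'
tag = BeautifulSoup(html, 'html.parser').find('input', {'id': 'print'})

selector = tag.name
if tag.get('id'):
    selector += '#' + tag['id']
selector += ''.join('.' + c for c in tag.get('class', []))

print(selector)  # input#print.btn.primary
```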
17,792
51,160,368
Since the start and end times of DST in a timezone can change every year, how does Python tell whether DST is in effect or not?
2018/07/03
[ "https://Stackoverflow.com/questions/51160368", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10028264/" ]
So the answer was to use a function for a constant lookup: `$handler = new StreamHandler('/var/log/php/php.log', constant("Monolog\Logger::" . $level));`
``` <?php class Logger { const MY = 1; } $lookingfor = 'MY'; // approach 1 $value1 = (new ReflectionClass('Logger'))->getConstants()[$lookingfor]; // approach 2 $value2 = constant("Logger::" . $lookingfor); echo "$value1|$value2"; ?> ``` Result: "1|1"
17,793