| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | sequencelengths | 3 | 3 |
| response_j | stringlengths | 17 | 26k |
| response_k | stringlengths | 26 | 26k |
66,977,521
I want to run arbitrary "code" in an argument, like an anonymous function in Python. How can I do this in a single line? Lambdas don't seem to work, since they only take one expression.

```
def call_func(callback):
    callback()

def f():
    pkg_set_status(package_name, status)
    print('ok')

call_func(f)
```

Update: adding some context, since the question is not entirely clear in my opinion. I would like to run something like a multiline lambda or anonymous function, for example:

```
call_func(lambda:
    # my multiline code here
    pkg_set_status(package_name, status)
    print('ok')
)
```

so that the argument itself is the anonymous function body...

Update 2: found the answer here: <https://softwareengineering.stackexchange.com/questions/99243/why-doesnt-python-allow-multi-line-lambdas>. It looks like Python does not support multiline lambdas by design.
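For what it's worth, the closest single-line approximation: when every step is itself an expression (function calls qualify; assignments and other statements do not), a tuple expression can sequence them inside one lambda. A minimal sketch, with hypothetical stand-ins for the question's undefined `pkg_set_status`, `package_name`, and `status`:

```python
def call_func(callback):
    callback()

# Hypothetical stand-ins for names the question leaves undefined.
def pkg_set_status(package_name, status):
    print(package_name, status)

package_name, status = 'mypkg', 'installed'

# A tuple literal evaluates its elements left to right, so two calls can be
# packed into a single lambda: expressions only, no statements allowed.
call_func(lambda: (pkg_set_status(package_name, status), print('ok')))
```

Note this relies on `print` being a function (Python 3); in Python 2 it is a statement and the trick would not apply.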
2021/04/06
[ "https://Stackoverflow.com/questions/66977521", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3274630/" ]
The structure defined by `myList` with the `<ol>` elements is never actually added to the document. If you build the whole string, the `<ol>`, then the `<li>` entries, then `</ol>`, and assign it to `wrapper.innerHTML` once, it should work. For example, something like:

```
var myList = "<ol>";
for (var i = 0; i < properties.length; i++) {
    myList += "<li>" + properties[i].unitNo + ": " + properties[i].owner + "</li>";
}
myList += "</ol>";
wrapper.innerHTML = myList;
```
You need to loop first, then place the concatenated value into the wrapper. If you inspect the parsed code in the browser inspector, you'll see that only the list items make it into the document; the `myList` variable is never added to the DOM. By concatenating into the variable and adding it once the entire string is built, you'll get the ordered-list parent element as intended.

```js
function onloadFunction() {
  var properties = [
    { unitNo: "C-8-1",   owner: "Foo Yoke Wai" },
    { unitNo: "C-3A-3A", owner: "Chia Kim Hooi" },
    { unitNo: "B-18-8",  owner: "Heng Tee See" },
    { unitNo: "A-10-10", owner: "Tang So Ny" },
    { unitNo: "B-19-10", owner: "Tang Xiao Mi" },
  ];

  var wrapper = document.getElementById("wrapper"); // was the unused `idk` variable
  var myList = "<ol>";
  for (var i = 0; i < properties.length; i++) {
    myList += "<li>" + properties[i].unitNo + ": " + properties[i].owner + "</li>";
  }
  myList += "</ol>";
  wrapper.innerHTML = myList;
}
```

```html
<html>
  <head></head>
  <body onload="onloadFunction()">
    <div id="wrapper"></div>
  </body>
</html>
```
66,977,521
I want to run arbitrary "code" in an argument, like an anonymous function in Python. How can I do this in a single line? Lambdas don't seem to work, since they only take one expression.

```
def call_func(callback):
    callback()

def f():
    pkg_set_status(package_name, status)
    print('ok')

call_func(f)
```

Update: adding some context, since the question is not entirely clear in my opinion. I would like to run something like a multiline lambda or anonymous function, for example:

```
call_func(lambda:
    # my multiline code here
    pkg_set_status(package_name, status)
    print('ok')
)
```

so that the argument itself is the anonymous function body...

Update 2: found the answer here: <https://softwareengineering.stackexchange.com/questions/99243/why-doesnt-python-allow-multi-line-lambdas>. It looks like Python does not support multiline lambdas by design.
2021/04/06
[ "https://Stackoverflow.com/questions/66977521", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3274630/" ]
The structure defined by `myList` with the `<ol>` elements is never actually added to the document. If you build the whole string, the `<ol>`, then the `<li>` entries, then `</ol>`, and assign it to `wrapper.innerHTML` once, it should work. For example, something like:

```
var myList = "<ol>";
for (var i = 0; i < properties.length; i++) {
    myList += "<li>" + properties[i].unitNo + ": " + properties[i].owner + "</li>";
}
myList += "</ol>";
wrapper.innerHTML = myList;
```
What you are doing is writing the list items into `wrapper`'s innerHTML directly, without the surrounding list element. You need to concatenate your data into `li` strings inside one `ol` string, and only then add the result to the DOM:

```js
function onloadFunction() {
  var properties = [
    { unitNo: "C-8-1",   owner: "Foo Yoke Wai" },
    { unitNo: "C-3A-3A", owner: "Chia Kim Hooi" },
    { unitNo: "B-18-8",  owner: "Heng Tee See" },
    { unitNo: "A-10-10", owner: "Tang So Ny" },
    { unitNo: "B-19-10", owner: "Tang Xiao Mi" },
  ];

  var wrapper = document.getElementById("wrapper"); // was the unused `idk` variable
  var myList = "<ol>";
  for (var i = 0; i < properties.length; i++) {
    myList += "<li>" + properties[i].unitNo + ": " + properties[i].owner + "</li>";
  }
  myList += "</ol>";
  wrapper.innerHTML = myList;
}
window.onload = onloadFunction;
```

```html
<div id="wrapper"></div>
```
46,966,690
I have data like this:

```
0,tcp,http,SF,181,5450,0.11,0.00,0.00,0.00,,normal.
0,tcp,http,SF,239,486,0.05,0.00,0.00,0.00,normal.
0,tcp,http,SF,235,1337,0.03,0.00,0.00,0.00,normal.
0,tcp,http,SF,219,1337,0.03,0.00,0.00,0.00,normal.
```

The original data was stored in a txt file. I used a list to store it in Python, but the values are strings. I want to change some columns from string to int, like this:

```
'181' to 181
```

Could anyone help me? Thank you!
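A minimal sketch of the conversion: split each line on commas, then cast the columns that should be numeric with `int()` or `float()`. Which positions are numeric is an assumption based on the sample above:

```python
raw = [
    "0,tcp,http,SF,181,5450,0.11,0.00,0.00,0.00,normal.",
    "0,tcp,http,SF,239,486,0.05,0.00,0.00,0.00,normal.",
]

int_cols = {0, 4, 5}       # positions assumed to hold integers
float_cols = {6, 7, 8, 9}  # positions assumed to hold floats

rows = []
for line in raw:
    fields = line.split(',')
    rows.append([
        int(f) if i in int_cols
        else float(f) if i in float_cols
        else f
        for i, f in enumerate(fields)
    ])

print(rows[0])  # [0, 'tcp', 'http', 'SF', 181, 5450, 0.11, 0.0, 0.0, 0.0, 'normal.']
```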
2017/10/27
[ "https://Stackoverflow.com/questions/46966690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7428504/" ]
When accessing data or methods from within the Vue object, use `this.thing`. In your case, that would be `this.strSlug(this.shop.name)`.
This does not work even with `this.`, because that function has not been defined at the time `data` is being initialized. I think you have to do it in the `created()` lifecycle hook.
46,966,690
I have data like this:

```
0,tcp,http,SF,181,5450,0.11,0.00,0.00,0.00,,normal.
0,tcp,http,SF,239,486,0.05,0.00,0.00,0.00,normal.
0,tcp,http,SF,235,1337,0.03,0.00,0.00,0.00,normal.
0,tcp,http,SF,219,1337,0.03,0.00,0.00,0.00,normal.
```

The original data was stored in a txt file. I used a list to store it in Python, but the values are strings. I want to change some columns from string to int, like this:

```
'181' to 181
```

Could anyone help me? Thank you!
2017/10/27
[ "https://Stackoverflow.com/questions/46966690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7428504/" ]
If you have a function in `data` that will be used in the context of another object (an event handler, for example), then `this` will not point to the Vue instance. You will have to preserve the reference in another variable in the scope of `data()`:

```
methods: {
  shuffle() {}
},
data() {
  var self = this;
  return {
    onClick: function() {
      self.shuffle()
    }
  }
}
```
This does not work even with `this.`, because that function has not been defined at the time `data` is being initialized. I think you have to do it in the `created()` lifecycle hook.
61,605,694
I'm implementing a two-link acrobot simulation using pydrake and would like to enforce joint limits to prevent the lower link from colliding with the upper link. I've added the joint limits to the URDF and am parsing this URDF to generate an acrobot MultibodyPlant object. I've used functions to successfully verify that the joint limit is being parsed (`position_lower_limits()`, `position_upper_limits()`), but my simulation is not responding to those limits. I'd really appreciate some help in identifying why this may be. I'm creating and simulating the acrobot MultibodyPlant as follows:

```
plt_is_interactive = SetupMatplotlibBackend()
builder = DiagramBuilder()

# instantiate the pendulum and the scene graph
double_pendulum, scene_graph = AddMultibodyPlantSceneGraph(
    builder,
    time_step=0.01  # discrete update period
)

# parse the urdf
urdf_path = 'double_pendulum_no_shoulder.urdf'
Parser(double_pendulum).AddModelFromFile(urdf_path)
double_pendulum.Finalize()

# Simulation Set-Up
# Set the constant torque
torque_system = builder.AddSystem(ConstantVectorSource([[0.]]))
builder.Connect(torque_system.get_output_port(0), double_pendulum.get_input_port(3))

visualizer = builder.AddSystem(
    PlanarSceneGraphVisualizer(scene_graph, xlim=[-10., 10.], ylim=[-10., 10.], show=plt_is_interactive))
builder.Connect(scene_graph.get_pose_bundle_output_port(), visualizer.get_input_port(0))

diagram = builder.Build()
simulator = Simulator(diagram)
context = simulator.get_mutable_context()

plant_context = diagram.GetMutableSubsystemContext(double_pendulum, context)
double_pendulum.SetPositionsAndVelocities(plant_context, [2, 0, 0, 0])

# Simulate
duration = 10.0 if get_ipython() else 0.1  # sets a shorter duration during testing
context.SetTime(0.0)
AdvanceToAndVisualize(simulator, visualizer, duration)
```

and my URDF contains the following line within the associated joint to attempt to enforce joint limits:

```
<limit lower="-2.35" upper="2.35"/>
```
2020/05/05
[ "https://Stackoverflow.com/questions/61605694", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13471747/" ]
I just tried to reproduce, and was also surprised that it doesn't appear to work. Here is a runnable reproduction that violates the joint lower limit at the elbow, which is set to 0.0. It also prints out the limit from the joint, confirming that the parsing worked. <https://www.dropbox.com/s/2m12ws0g88t5uei/joint_limits_test.py?dl=0> <https://www.dropbox.com/s/fdjb4uv4m2i17i8/double_pendulum.urdf?dl=0>
> ...but my simulation is not responding to those limits

What do you mean exactly? First, you should know that our joint limits are "soft", meaning that they are not constraints but more like stiff springs. `MultibodyPlant` computes the stiffness of these springs automatically for you to ensure the stability of our solvers. It's a function of the time step: the smaller the time step, the stiffer the joint limits are; they converge to the rigid limit quadratically with the time step. That being said, I see your time step is pretty large (10 ms). It might be that your limits are just too soft, and that is what you observe in the simulation results. Given you verified the correct parsing (I will trust you on that one), I'd suggest you first run with smaller time steps to verify this hypothesis. You should observe the quadratic convergence with time step.
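To make that experiment concrete, here is a rough sketch of the suggested time-step sweep. It is untested against any specific Drake release and reuses only the constructs already present in the question, plus the 2.35 rad limit from the URDF snippet, so treat the method names as assumptions to check against your pydrake version:

```python
import numpy as np
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import ConstantVectorSource

for time_step in [1e-2, 1e-3, 1e-4]:
    builder = DiagramBuilder()
    plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=time_step)
    Parser(plant).AddModelFromFile('double_pendulum_no_shoulder.urdf')
    plant.Finalize()
    # Zero actuation, as in the question's setup.
    torque = builder.AddSystem(ConstantVectorSource([0.]))
    builder.Connect(torque.get_output_port(0), plant.get_actuation_input_port())
    diagram = builder.Build()
    simulator = Simulator(diagram)
    plant_context = diagram.GetMutableSubsystemContext(
        plant, simulator.get_mutable_context())
    plant.SetPositionsAndVelocities(plant_context, [2, 0, 0, 0])
    simulator.AdvanceTo(2.0)
    # Crude check over all joints at the end of the run; the limit is
    # +/- 2.35 rad per the URDF snippet above.
    q = plant.GetPositions(plant_context)
    overshoot = max(0.0, float(np.max(np.abs(q))) - 2.35)
    print(f"time_step={time_step:g}  overshoot past 2.35 rad: {overshoot:.5f}")
```

If the soft-limit explanation is right, the printed overshoot should shrink roughly quadratically as the time step decreases.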
61,605,694
I'm implementing a two-link acrobot simulation using pydrake and would like to enforce joint limits to prevent the lower link from colliding with the upper link. I've added the joint limits to the URDF and am parsing this URDF to generate an acrobot MultibodyPlant object. I've used functions to successfully verify that the joint limit is being parsed (`position_lower_limits()`, `position_upper_limits()`), but my simulation is not responding to those limits. I'd really appreciate some help in identifying why this may be. I'm creating and simulating the acrobot MultibodyPlant as follows:

```
plt_is_interactive = SetupMatplotlibBackend()
builder = DiagramBuilder()

# instantiate the pendulum and the scene graph
double_pendulum, scene_graph = AddMultibodyPlantSceneGraph(
    builder,
    time_step=0.01  # discrete update period
)

# parse the urdf
urdf_path = 'double_pendulum_no_shoulder.urdf'
Parser(double_pendulum).AddModelFromFile(urdf_path)
double_pendulum.Finalize()

# Simulation Set-Up
# Set the constant torque
torque_system = builder.AddSystem(ConstantVectorSource([[0.]]))
builder.Connect(torque_system.get_output_port(0), double_pendulum.get_input_port(3))

visualizer = builder.AddSystem(
    PlanarSceneGraphVisualizer(scene_graph, xlim=[-10., 10.], ylim=[-10., 10.], show=plt_is_interactive))
builder.Connect(scene_graph.get_pose_bundle_output_port(), visualizer.get_input_port(0))

diagram = builder.Build()
simulator = Simulator(diagram)
context = simulator.get_mutable_context()

plant_context = diagram.GetMutableSubsystemContext(double_pendulum, context)
double_pendulum.SetPositionsAndVelocities(plant_context, [2, 0, 0, 0])

# Simulate
duration = 10.0 if get_ipython() else 0.1  # sets a shorter duration during testing
context.SetTime(0.0)
AdvanceToAndVisualize(simulator, visualizer, duration)
```

and my URDF contains the following line within the associated joint to attempt to enforce joint limits:

```
<limit lower="-2.35" upper="2.35"/>
```
2020/05/05
[ "https://Stackoverflow.com/questions/61605694", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13471747/" ]
I just tried to reproduce, and was also surprised that it doesn't appear to work. Here is a runnable reproduction that violates the joint lower limit at the elbow, which is set to 0.0. It also prints out the limit from the joint, confirming that the parsing worked. <https://www.dropbox.com/s/2m12ws0g88t5uei/joint_limits_test.py?dl=0> <https://www.dropbox.com/s/fdjb4uv4m2i17i8/double_pendulum.urdf?dl=0>
We were just talking with @Russ Tedrake. As I mentioned, this is a "corner" case that unfortunately `MultibodyPlant` does not handle well; see Drake issue [#12775](https://github.com/RobotLocomotion/drake/issues/12775). The "corner" case here is that the rotational inertia of the upper arm about the elbow's axis is zero, because the upper arm is being modeled as a point mass concentrated at the joint's axis. A quick solution we can handle today is to instead model the upper arm as a solid rod, which has non-zero moments of inertia about the joint's axis. That is, in your URDF you'd need to update the inertia values in `<inertia>` to be those of a rod of length 1.0 and mass 1.0 (in your case) about its center of mass. For your case I verified that the following change works:

```
<origin xyz="0 0 -0.5" rpy="0 0 0" />
<inertia ixx="0.08333333" ixy="0" ixz="0" iyy="0.08333333" iyz="0" izz="0"/>
```

This effectively replaces your point mass by a thin rod of length 1.0 and mass 1.0.
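As a quick sanity check on those numbers (not part of the original answer): the `ixx`/`iyy` entries are just the thin-rod formula for the moment of inertia about the center of mass, perpendicular to the rod's axis:

```python
# Thin uniform rod of mass m and length L: I = m * L**2 / 12 about its
# center of mass for axes perpendicular to the rod; I = 0 along the rod.
m, L = 1.0, 1.0
print(m * L**2 / 12)  # 0.08333333... -> matches ixx and iyy above
```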
57,221,919
I tried to install `docker-ce` on `redhat 8` but it failed. First I tried:

```
# systemctl enable docker
Failed to enable unit: Unit file docker.service does not exist.
```

So I wanted to install `docker-ce` for the daemon:

```
# yum install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum repolist -v
# yum list docker-ce --showduplicates | sort -r
# yum install docker-ce
```

but at this step I got:

```
# yum install docker-ce
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:02:58 ago on Fri 26 Jul 2019 02:11:48 PM UTC.
Error:
 Problem: package docker-ce-3:19.03.1-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```

I created a Red Hat account, but then I hit this problem:

```
# subscription-manager register --force
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: xxxxxxxxxxx
Password:
The system has been registered with ID: 6c07b574-2601-4a84-90d4-a9dfdc499c2f
The registered system name is: ip-172-31-11-95.us-east-2.compute.internal
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 566, in load
    ret = self._repo.load()
  File "/usr/lib64/python3.6/site-packages/libdnf/repo.py", line 503, in load
    return _repo.Repo_load(self)
RuntimeError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 173, in update_check
    self._sync_with_server(uep, consumer_uuid)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 477, in _sync_with_server
    combined_profile = self.current_profile
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 430, in current_profile
    module_profile = get_profile('modulemd').collect()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 347, in get_profile
    profile = PROFILE_MAP[profile_type]()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 54, in __init__
    self.content = self.__generate()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 76, in __generate
    base.fill_sack()
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 400, in fill_sack
    self._add_repo_to_sack(r)
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 135, in _add_repo_to_sack
    repo.load()
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 568, in load
    raise dnf.exceptions.RepoError(str(e))
dnf.exceptions.RepoError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/sbin/subscription-manager", line 11, in <module>
    load_entry_point('subscription-manager==1.23.8', 'console_scripts', 'subscription-manager')()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/scripts/subscription_manager.py", line 85, in main
    return managercli.ManagerCLI().main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 2918, in main
    ret = CLI.main(self)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cli.py", line 183, in main
    return cmd.main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 506, in main
    return_code = self._do_command()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 1368, in _do_command
    profile_mgr.update_check(self.cp, consumer['uuid'], True)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 457, in update_check
    return CacheManager.update_check(self, uep, consumer_uuid, force)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 183, in update_check
    raise Exception(_("Error updating system data on the server, see /var/log/rhsm/rhsm.log "
Exception: Error updating system data on the server, see /var/log/rhsm/rhsm.log for more details.
```
2019/07/26
[ "https://Stackoverflow.com/questions/57221919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11841974/" ]
`yum install docker-ce --nobest` worked for me (Installed: docker-ce-3:18.09.1-3.el7.x86_64 and Skipped: docker-ce-3:19.03.1-3.el7.x86_64).
My guess: due to a missing subscription you cannot download packages from the repositories that the docker-ce package needs. So first register for a developer account with Red Hat, then subscribe your host using the subscription manager (remember, no production usage is allowed then), and then retry the installation. Edit: here are the subscription commands:

```
subscription-manager register --username <yourname> --password <pw>
subscription-manager role --set="Red Hat Enterprise Linux Server"
subscription-manager service-level --set="Self-Support"
subscription-manager usage --set="Development/Test"
subscription-manager attach
```

Edit: sorry, I answered too fast. I just tried on a fresh RHEL 8: docker-ce-18.06.3.ce-3.el7 is the most current version for RHEL 8 and installs fine.
57,221,919
I tried to install `docker-ce` on `redhat 8` but it failed. First I tried:

```
# systemctl enable docker
Failed to enable unit: Unit file docker.service does not exist.
```

So I wanted to install `docker-ce` for the daemon:

```
# yum install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum repolist -v
# yum list docker-ce --showduplicates | sort -r
# yum install docker-ce
```

but at this step I got:

```
# yum install docker-ce
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:02:58 ago on Fri 26 Jul 2019 02:11:48 PM UTC.
Error:
 Problem: package docker-ce-3:19.03.1-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```

I created a Red Hat account, but then I hit this problem:

```
# subscription-manager register --force
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: xxxxxxxxxxx
Password:
The system has been registered with ID: 6c07b574-2601-4a84-90d4-a9dfdc499c2f
The registered system name is: ip-172-31-11-95.us-east-2.compute.internal
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 566, in load
    ret = self._repo.load()
  File "/usr/lib64/python3.6/site-packages/libdnf/repo.py", line 503, in load
    return _repo.Repo_load(self)
RuntimeError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 173, in update_check
    self._sync_with_server(uep, consumer_uuid)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 477, in _sync_with_server
    combined_profile = self.current_profile
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 430, in current_profile
    module_profile = get_profile('modulemd').collect()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 347, in get_profile
    profile = PROFILE_MAP[profile_type]()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 54, in __init__
    self.content = self.__generate()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 76, in __generate
    base.fill_sack()
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 400, in fill_sack
    self._add_repo_to_sack(r)
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 135, in _add_repo_to_sack
    repo.load()
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 568, in load
    raise dnf.exceptions.RepoError(str(e))
dnf.exceptions.RepoError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/sbin/subscription-manager", line 11, in <module>
    load_entry_point('subscription-manager==1.23.8', 'console_scripts', 'subscription-manager')()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/scripts/subscription_manager.py", line 85, in main
    return managercli.ManagerCLI().main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 2918, in main
    ret = CLI.main(self)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cli.py", line 183, in main
    return cmd.main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 506, in main
    return_code = self._do_command()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 1368, in _do_command
    profile_mgr.update_check(self.cp, consumer['uuid'], True)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 457, in update_check
    return CacheManager.update_check(self, uep, consumer_uuid, force)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 183, in update_check
    raise Exception(_("Error updating system data on the server, see /var/log/rhsm/rhsm.log "
Exception: Error updating system data on the server, see /var/log/rhsm/rhsm.log for more details.
```
2019/07/26
[ "https://Stackoverflow.com/questions/57221919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11841974/" ]
Disabling the `container-tools` module (installed by default) resolves the conflict: `dnf module disable container-tools`. However, you first need to install the `container-selinux` dependency, which is provided by the same module:

```
[root@centos8-2gb-hel1-1 ~]# yum install container-selinux
...
[root@centos8-2gb-hel1-1 ~]# yum module disable container-tools
Last metadata expiration check: 0:05:02 ago on Mon Feb 10 16:31:17 2020.
Dependencies resolved.
================================================================================
 Package             Architecture      Version          Repository         Size
================================================================================
Disabling modules:
 container-tools

Transaction Summary
================================================================================

Is this ok [y/N]: y
Complete!
```

The `docker-ce` package can now be installed normally, with the previously installed `container-selinux` version and the new `containerd.io` dependency:

```
[root@centos8-2gb-hel1-1 ~]# yum install docker-ce
Last metadata expiration check: 0:06:26 ago on Mon Feb 10 16:31:17 2020.
Dependencies resolved.
================================================================================
 Package          Arch      Version              Repository              Size
================================================================================
Installing:
 docker-ce        x86_64    3:19.03.5-3.el7      docker-ce-stable        24 M
Installing dependencies:
 libcgroup        x86_64    0.41-19.el8          BaseOS                  70 k
 tar              x86_64    2:1.30-4.el8         BaseOS                 838 k
 containerd.io    x86_64    1.2.10-3.2.el7       docker-ce-stable        23 M
 docker-ce-cli    x86_64    1:19.03.5-3.el7      docker-ce-stable        39 M

Transaction Summary
================================================================================
Install  5 Packages

Total download size: 108 M
Is this ok [y/N]:
```

RHEL/CentOS 8 excluding the `containerd.io` package is related to its `runc` provide conflicting with the `runc` package from the `container-tools` module: <https://bugzilla.redhat.com/show_bug.cgi?id=1756473>

```
[root@centos8-2gb-hel1-1 ~]# yum provides runc
Last metadata expiration check: 0:07:21 ago on Mon Feb 10 16:31:17 2020.
...
containerd.io-1.2.10-3.2.el7.x86_64 : An industry-standard container runtime
Repo         : docker-ce-stable
Matched from:
Provide      : runc

runc-1.0.0-60.rc8.module_el8.1.0+237+63e26edc.x86_64 : CLI for running Open Containers
Repo         : AppStream
Matched from:
Provide      : runc = 1.0.0-60.rc8.module_el8.1.0+237+63e26edc
```
My guess: due to a missing subscription you cannot download packages from the repositories that the docker-ce package needs. So first register for a developer account with Red Hat, then subscribe your host using the subscription manager (remember, no production usage is allowed then), and then retry the installation. Edit: here are the subscription commands:

```
subscription-manager register --username <yourname> --password <pw>
subscription-manager role --set="Red Hat Enterprise Linux Server"
subscription-manager service-level --set="Self-Support"
subscription-manager usage --set="Development/Test"
subscription-manager attach
```

Edit: sorry, I answered too fast. I just tried on a fresh RHEL 8: docker-ce-18.06.3.ce-3.el7 is the most current version for RHEL 8 and installs fine.
57,221,919
I tried to install `docker-ce` on `redhat 8` but it failed. First I tried:

```
# systemctl enable docker
Failed to enable unit: Unit file docker.service does not exist.
```

So I wanted to install `docker-ce` for the daemon:

```
# yum install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum repolist -v
# yum list docker-ce --showduplicates | sort -r
# yum install docker-ce
```

but at this step I got:

```
# yum install docker-ce
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:02:58 ago on Fri 26 Jul 2019 02:11:48 PM UTC.
Error:
 Problem: package docker-ce-3:19.03.1-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```

I created a Red Hat account, but then I hit this problem:

```
# subscription-manager register --force
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: xxxxxxxxxxx
Password:
The system has been registered with ID: 6c07b574-2601-4a84-90d4-a9dfdc499c2f
The registered system name is: ip-172-31-11-95.us-east-2.compute.internal
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 566, in load
    ret = self._repo.load()
  File "/usr/lib64/python3.6/site-packages/libdnf/repo.py", line 503, in load
    return _repo.Repo_load(self)
RuntimeError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 173, in update_check
    self._sync_with_server(uep, consumer_uuid)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 477, in _sync_with_server
    combined_profile = self.current_profile
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 430, in current_profile
    module_profile = get_profile('modulemd').collect()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 347, in get_profile
    profile = PROFILE_MAP[profile_type]()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 54, in __init__
    self.content = self.__generate()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 76, in __generate
    base.fill_sack()
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 400, in fill_sack
    self._add_repo_to_sack(r)
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 135, in _add_repo_to_sack
    repo.load()
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 568, in load
    raise dnf.exceptions.RepoError(str(e))
dnf.exceptions.RepoError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/sbin/subscription-manager", line 11, in <module>
    load_entry_point('subscription-manager==1.23.8', 'console_scripts', 'subscription-manager')()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/scripts/subscription_manager.py", line 85, in main
    return managercli.ManagerCLI().main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 2918, in main
    ret = CLI.main(self)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cli.py", line 183, in main
    return cmd.main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 506, in main
    return_code = self._do_command()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 1368, in _do_command
    profile_mgr.update_check(self.cp, consumer['uuid'], True)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 457, in update_check
    return CacheManager.update_check(self, uep, consumer_uuid, force)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 183, in update_check
    raise Exception(_("Error updating system data on the server, see /var/log/rhsm/rhsm.log "
Exception: Error updating system data on the server, see /var/log/rhsm/rhsm.log for more details.
```
2019/07/26
[ "https://Stackoverflow.com/questions/57221919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11841974/" ]
There is a dependency issue on `CentOS 8`/RHEL 8, which needs the `containerd.io` build from `CentOS 7`. Just install the `containerd.io` package manually, and it will work:

```
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.10-3.2.el7.x86_64.rpm
```
My guess: due to a missing subscription you cannot download packages from the repositories that the docker-ce package needs. So first register for a developer account with Red Hat, then subscribe your host using the subscription manager (remember, no production usage is allowed then), and then retry the installation. Edit: here are the subscription commands:

```
subscription-manager register --username <yourname> --password <pw>
subscription-manager role --set="Red Hat Enterprise Linux Server"
subscription-manager service-level --set="Self-Support"
subscription-manager usage --set="Development/Test"
subscription-manager attach
```

Edit: sorry, I answered too fast. I just tried on a fresh RHEL 8: docker-ce-18.06.3.ce-3.el7 is the most current version for RHEL 8 and installs fine.
57,221,919
I tried to install `docker-ce` on `redhat 8` but it failed. First I tried:

```
# systemctl enable docker
Failed to enable unit: Unit file docker.service does not exist.
```

So I wanted to install `docker-ce` for the daemon:

```
# yum install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum repolist -v
# yum list docker-ce --showduplicates | sort -r
# yum install docker-ce
```

but at this step I got:

```
# yum install docker-ce
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:02:58 ago on Fri 26 Jul 2019 02:11:48 PM UTC.
Error:
 Problem: package docker-ce-3:19.03.1-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```

I created a Red Hat account, but then I hit this problem:

```
# subscription-manager register --force
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: xxxxxxxxxxx
Password:
The system has been registered with ID: 6c07b574-2601-4a84-90d4-a9dfdc499c2f
The registered system name is: ip-172-31-11-95.us-east-2.compute.internal
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 566, in load
    ret = self._repo.load()
  File "/usr/lib64/python3.6/site-packages/libdnf/repo.py", line 503, in load
    return _repo.Repo_load(self)
RuntimeError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 173, in update_check
    self._sync_with_server(uep, consumer_uuid)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 477, in _sync_with_server
    combined_profile = self.current_profile
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 430, in current_profile
    module_profile = get_profile('modulemd').collect()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 347, in get_profile
    profile = PROFILE_MAP[profile_type]()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 54, in __init__
    self.content = self.__generate()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 76, in __generate
    base.fill_sack()
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 400, in fill_sack
    self._add_repo_to_sack(r)
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 135, in _add_repo_to_sack
    repo.load()
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 568, in load
    raise dnf.exceptions.RepoError(str(e))
dnf.exceptions.RepoError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/sbin/subscription-manager", line 11, in <module>
    load_entry_point('subscription-manager==1.23.8', 'console_scripts', 'subscription-manager')()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/scripts/subscription_manager.py", line 85, in main
    return managercli.ManagerCLI().main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 2918, in main
    ret = CLI.main(self)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cli.py", line 183, in main
    return cmd.main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 506, in main
    return_code = self._do_command()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 1368, in _do_command
    profile_mgr.update_check(self.cp, consumer['uuid'], True)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 457, in update_check
    return CacheManager.update_check(self, uep, consumer_uuid, force)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 183, in update_check
    raise Exception(_("Error updating system data on the server, see /var/log/rhsm/rhsm.log "
Exception: Error updating system data on the server, see /var/log/rhsm/rhsm.log for more details.
```
2019/07/26
[ "https://Stackoverflow.com/questions/57221919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11841974/" ]
Disabling the `container-tools` module (installed by default) resolves the conflict: `dnf module disable container-tools`. However, you first need to install the `container-selinux` dependency, which is provided by the same module:

```
[root@centos8-2gb-hel1-1 ~]# yum install container-selinux
...
[root@centos8-2gb-hel1-1 ~]# yum module disable container-tools
Last metadata expiration check: 0:05:02 ago on Mon Feb 10 16:31:17 2020.
Dependencies resolved.
================================================================================
 Package             Architecture      Version          Repository         Size
================================================================================
Disabling modules:
 container-tools

Transaction Summary
================================================================================

Is this ok [y/N]: y
Complete!
```

The `docker-ce` package can now be installed normally, with the previously installed `container-selinux` version and the new `containerd.io` dependency:

```
[root@centos8-2gb-hel1-1 ~]# yum install docker-ce
Last metadata expiration check: 0:06:26 ago on Mon Feb 10 16:31:17 2020.
Dependencies resolved.
================================================================================
 Package          Arch      Version              Repository              Size
================================================================================
Installing:
 docker-ce        x86_64    3:19.03.5-3.el7      docker-ce-stable        24 M
Installing dependencies:
 libcgroup        x86_64    0.41-19.el8          BaseOS                  70 k
 tar              x86_64    2:1.30-4.el8         BaseOS                 838 k
 containerd.io    x86_64    1.2.10-3.2.el7       docker-ce-stable        23 M
 docker-ce-cli    x86_64    1:19.03.5-3.el7      docker-ce-stable        39 M

Transaction Summary
================================================================================
Install  5 Packages

Total download size: 108 M
Is this ok [y/N]:
```

RHEL/CentOS 8 excluding the `containerd.io` package is related to its `runc` provide conflicting with the `runc` package from the `container-tools` module: <https://bugzilla.redhat.com/show_bug.cgi?id=1756473>

```
[root@centos8-2gb-hel1-1 ~]# yum provides runc
Last metadata expiration check: 0:07:21 ago on Mon Feb 10 16:31:17 2020.
...
containerd.io-1.2.10-3.2.el7.x86_64 : An industry-standard container runtime
Repo         : docker-ce-stable
Matched from:
Provide      : runc

runc-1.0.0-60.rc8.module_el8.1.0+237+63e26edc.x86_64 : CLI for running Open Containers
Repo         : AppStream
Matched from:
Provide      : runc = 1.0.0-60.rc8.module_el8.1.0+237+63e26edc
```
`yum install docker-ce --nobest` worked for me (Installed: docker-ce-3:18.09.1-3.el7.x86_64 and Skipped: docker-ce-3:19.03.1-3.el7.x86_64).
57,221,919
I tried to install `docker-ce` on `redhat 8` but it failed. First I tried:

```
# systemctl enable docker
Failed to enable unit: Unit file docker.service does not exist.
```

So I wanted to install `docker-ce` for the daemon:

```
# yum install yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum repolist -v
# yum list docker-ce --showduplicates | sort -r
# yum install docker-ce
```

but at this step I got:

```
# yum install docker-ce
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:02:58 ago on Fri 26 Jul 2019 02:11:48 PM UTC.
Error:
 Problem: package docker-ce-3:19.03.1-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```

I created a Red Hat account, but then I hit this problem:

```
# subscription-manager register --force
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: xxxxxxxxxxx
Password:
The system has been registered with ID: 6c07b574-2601-4a84-90d4-a9dfdc499c2f
The registered system name is: ip-172-31-11-95.us-east-2.compute.internal
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 566, in load
    ret = self._repo.load()
  File "/usr/lib64/python3.6/site-packages/libdnf/repo.py", line 503, in load
    return _repo.Repo_load(self)
RuntimeError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 173, in update_check
    self._sync_with_server(uep, consumer_uuid)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 477, in _sync_with_server
    combined_profile = self.current_profile
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 430, in current_profile
    module_profile = get_profile('modulemd').collect()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 347, in get_profile
    profile = PROFILE_MAP[profile_type]()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 54, in __init__
    self.content = self.__generate()
  File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 76, in __generate
    base.fill_sack()
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 400, in fill_sack
    self._add_repo_to_sack(r)
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 135, in _add_repo_to_sack
    repo.load()
  File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 568, in load
    raise dnf.exceptions.RepoError(str(e))
dnf.exceptions.RepoError: Failed to synchronize cache for repo 'rhui-client-config-server-8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/sbin/subscription-manager", line 11, in <module>
    load_entry_point('subscription-manager==1.23.8', 'console_scripts', 'subscription-manager')()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/scripts/subscription_manager.py", line 85, in main
    return managercli.ManagerCLI().main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 2918, in main
    ret = CLI.main(self)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cli.py", line 183, in main
    return cmd.main()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 506, in main
    return_code = self._do_command()
  File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 1368, in _do_command
    profile_mgr.update_check(self.cp, consumer['uuid'], True)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 457, in update_check
    return CacheManager.update_check(self, uep, consumer_uuid, force)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 183, in update_check
    raise Exception(_("Error updating system data on the server, see /var/log/rhsm/rhsm.log "
Exception: Error updating system data on the server, see /var/log/rhsm/rhsm.log for more details.
```
2019/07/26
[ "https://Stackoverflow.com/questions/57221919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11841974/" ]
Disabling the `container-tools` module (installed by default) resolves the conflict: `dnf module disable container-tools`. However, you first need to install the `container-selinux` dependency, which is provided by the same module:

```
[root@centos8-2gb-hel1-1 ~]# yum install container-selinux
...
[root@centos8-2gb-hel1-1 ~]# yum module disable container-tools
Last metadata expiration check: 0:05:02 ago on Mon Feb 10 16:31:17 2020.
Dependencies resolved.
================================================================================
 Package             Architecture      Version          Repository         Size
================================================================================
Disabling modules:
 container-tools

Transaction Summary
================================================================================

Is this ok [y/N]: y
Complete!
```

The `docker-ce` package can now be installed normally, with the previously installed `container-selinux` version and the new `containerd.io` dependency:

```
[root@centos8-2gb-hel1-1 ~]# yum install docker-ce
Last metadata expiration check: 0:06:26 ago on Mon Feb 10 16:31:17 2020.
Dependencies resolved.
================================================================================
 Package          Arch      Version              Repository              Size
================================================================================
Installing:
 docker-ce        x86_64    3:19.03.5-3.el7      docker-ce-stable        24 M
Installing dependencies:
 libcgroup        x86_64    0.41-19.el8          BaseOS                  70 k
 tar              x86_64    2:1.30-4.el8         BaseOS                 838 k
 containerd.io    x86_64    1.2.10-3.2.el7       docker-ce-stable        23 M
 docker-ce-cli    x86_64    1:19.03.5-3.el7      docker-ce-stable        39 M

Transaction Summary
================================================================================
Install  5 Packages

Total download size: 108 M
Is this ok [y/N]:
```

RHEL/CentOS 8 excluding the `containerd.io` package is related to its `runc` provide conflicting with the `runc` package from the `container-tools` module: <https://bugzilla.redhat.com/show_bug.cgi?id=1756473>

```
[root@centos8-2gb-hel1-1 ~]# yum provides runc
Last metadata expiration check: 0:07:21 ago on Mon Feb 10 16:31:17 2020.
...
containerd.io-1.2.10-3.2.el7.x86_64 : An industry-standard container runtime
Repo         : docker-ce-stable
Matched from:
Provide      : runc

runc-1.0.0-60.rc8.module_el8.1.0+237+63e26edc.x86_64 : CLI for running Open Containers
Repo         : AppStream
Matched from:
Provide      : runc = 1.0.0-60.rc8.module_el8.1.0+237+63e26edc
```
There is a dependency issue on `CentOS 8`/RHEL 8, which needs the `containerd.io` build from `CentOS 7`. Just install the `containerd.io` package manually, and it will work:

```
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.10-3.2.el7.x86_64.rpm
```
28,299,754
```
a = [(24, 13), (23, 13), (22, 13), (21, 13), (20, 13), (19, 13), (19, 14),
     (19, 15), (18, 15), (17, 15), (16, 15), (15, 15), (14, 15), (13, 15),
     (13, 14), (13, 13), (13, 12), (13, 11), (13, 10), (12, 10), (11, 10),
     (10, 10), (9, 10), (8, 10), (7, 10), (7, 9), (7, 8), (7, 7), (7, 6),
     (7, 5), (7, 4), (6, 4), (5, 4), (4, 4)]
```

The above path (a list of unique coordinates) has 6 turns. Could anyone help me code this in Python? For the list `a` above, the output should be `6`. Here is my attempt:

```
length = len(a)-3
print length
for i in range(0,length):
    x1,y1 = a[i]
    x2,y2 = a[i+1]
    x3,y3 = a[i+2]
    if y1 is y2:
        if y1 is y3:
            x_flag = 1
        else:
            x_flag = 0
    if x_flag is 0:
        flag1 += 1
        print 'Turn'
print flag1
```
2015/02/03
[ "https://Stackoverflow.com/questions/28299754", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4296482/" ]
Probably not the prettiest solution, but the straightforward way would be this:

```
a = [(24, 13), (23, 13), (22, 13), (21, 13), (20, 13), (19, 13), (19, 14),
     (19, 15), (18, 15), (17, 15), (16, 15), (15, 15), (14, 15), (13, 15),
     (13, 14), (13, 13), (13, 12), (13, 11), (13, 10), (12, 10), (11, 10),
     (10, 10), (9, 10), (8, 10), (7, 10), (7, 9), (7, 8), (7, 7), (7, 6),
     (7, 5), (7, 4), (6, 4), (5, 4), (4, 4)]

count = 0
direction = -1
for i in range(1, len(a)):
    # 0 if this step moves in x, 1 if it moves in y
    current_dir = 0 if a[i][0] - a[i-1][0] != 0 else 1
    if direction != -1:
        if current_dir != direction:
            # print("changing direction")
            count += 1
    direction = current_dir
print count
```

It assumes each step moves along exactly one axis (i.e. you never move diagonally).
You can convert the tuples to a NumPy array and check whether, over each pair of consecutive legs, you moved along both axes:

```
import numpy as np

arr = np.array(a)
((np.abs(arr[2:] - arr[:-2]) > 0).sum(axis=1) == 2).sum()
```
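For intuition, an equivalent plain-Python version of the same check (assuming, as the NumPy one-liner does, unit-length axis-aligned moves):

```python
# Axis of each unit move: 0 if the step changes x, 1 if it changes y.
axes = [0 if p[0] != q[0] else 1 for p, q in zip(a, a[1:])]
# A turn is any pair of consecutive legs along different axes.
turns = sum(1 for u, v in zip(axes, axes[1:]) if u != v)
print(turns)  # 6 for the list in the question
```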
28,299,754
```
a = [(24, 13), (23, 13), (22, 13), (21, 13), (20, 13), (19, 13), (19, 14),
     (19, 15), (18, 15), (17, 15), (16, 15), (15, 15), (14, 15), (13, 15),
     (13, 14), (13, 13), (13, 12), (13, 11), (13, 10), (12, 10), (11, 10),
     (10, 10), (9, 10), (8, 10), (7, 10), (7, 9), (7, 8), (7, 7), (7, 6),
     (7, 5), (7, 4), (6, 4), (5, 4), (4, 4)]
```

The above path (a list of unique coordinates) has 6 turns. Could anyone help me code this in Python? For the list `a` above, the output should be `6`. Here is my attempt:

```
length = len(a)-3
print length
for i in range(0,length):
    x1,y1 = a[i]
    x2,y2 = a[i+1]
    x3,y3 = a[i+2]
    if y1 is y2:
        if y1 is y3:
            x_flag = 1
        else:
            x_flag = 0
    if x_flag is 0:
        flag1 += 1
        print 'Turn'
print flag1
```
2015/02/03
[ "https://Stackoverflow.com/questions/28299754", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4296482/" ]
Probably not the prettiest solution, but the straightforward way would be this:

```
a = [(24, 13), (23, 13), (22, 13), (21, 13), (20, 13), (19, 13), (19, 14),
     (19, 15), (18, 15), (17, 15), (16, 15), (15, 15), (14, 15), (13, 15),
     (13, 14), (13, 13), (13, 12), (13, 11), (13, 10), (12, 10), (11, 10),
     (10, 10), (9, 10), (8, 10), (7, 10), (7, 9), (7, 8), (7, 7), (7, 6),
     (7, 5), (7, 4), (6, 4), (5, 4), (4, 4)]

count = 0
direction = -1
for i in range(1, len(a)):
    # 0 if this step moves in x, 1 if it moves in y
    current_dir = 0 if a[i][0] - a[i-1][0] != 0 else 1
    if direction != -1:
        if current_dir != direction:
            # print("changing direction")
            count += 1
    direction = current_dir
print count
```

It assumes each step moves along exactly one axis (i.e. you never move diagonally).
Here is what I suggest:

```
x0, y0 = a[0]
previous_move_dir = ''
turns_nb = -1  # start is not a turn
for x1, y1 in a[1:]:
    if x1 == x0 and abs(y1-y0) == 1:
        # move is 1 in Y direction
        move_dir = 'Y'
    elif y1 == y0 and abs(x1-x0) == 1:
        # move is 1 in X direction
        move_dir = 'X'
    else:
        # move is anything else
        raise Exception('Bad coordinates definition')
    if move_dir != previous_move_dir:
        # there is a direction change
        turns_nb += 1
    previous_move_dir = move_dir
    x0, y0 = x1, y1
print turns_nb
```
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS, and when I try to build the Dockerfile below I get the error shown. Here is how I build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms, but none of them worked for me. I am a bit lost about this platform issue on M1, but any help would be appreciated. What am I doing wrong? What might be the root cause of this?

```
Removing intermediate container 70af516d5d6b
 ---> a69229847153
Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb
 ---> Running in 13545862fffe
qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
Removing intermediate container 13545862fffe
```

Dockerfile

```
FROM --platform=linux/amd64 ubuntu:focal
RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial
RUN ln -s /usr/bin/python3.8 /usr/bin/python
RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz
RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb
WORKDIR /workspace
```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
Incidentally, in case it's helpful to someone else who lands here: I have the same issue on an M1 Max MacBook Pro attempting a `docker build` from a company repo that should be a pretty well-traveled path, but I might be the only one (it's a small company) with an ARM64 M1 "Apple Silicon" Mac. ***However, I found the solution (well, a solution) to my situation was exactly the opposite of the OP's, and that was to add*** `--platform=linux/amd64` ***to the FROM line of the Dockerfile.*** Otherwise it was using an ARM64 image to start from without me being the wiser, and later in the Dockerfile the build attempts to install and execute code compiled for x86_64. Starting the build by requesting a `linux/amd64` base image means the base image then has `/lib64/ld-linux-x86-64.so.2`. This probably means everything is being emulated as x86_64 on the ARM64 CPU via `qemu-x86_64`, so if you have the option to start from an ARM64 image, and can compile within the container at build time any software you can't install as ARM64 binaries, it'll probably go faster when you later run the container on an M1-based Mac. I haven't been able to try that myself yet for this case.
This resolved my issue:

```
FROM ubuntu:focal
RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial
RUN ln -s /usr/bin/python3.8 /usr/bin/python
RUN curl -OL https://golang.org/dl/go1.17.linux-arm64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-arm64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-arm64.tar.gz
RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest; ln -s /root/go/bin/jb /usr/bin/jb
WORKDIR /workspace
```
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS, and when I try to build the Dockerfile below I get the error shown. Here is how I build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms, but none of them worked for me. I am a bit lost about this platform issue on M1, but any help would be appreciated. What am I doing wrong? What might be the root cause of this?

```
Removing intermediate container 70af516d5d6b
 ---> a69229847153
Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb
 ---> Running in 13545862fffe
qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
Removing intermediate container 13545862fffe
```

Dockerfile

```
FROM --platform=linux/amd64 ubuntu:focal
RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial
RUN ln -s /usr/bin/python3.8 /usr/bin/python
RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz
RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb
WORKDIR /workspace
```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
This resolved my issue. ``` FROM ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-arm64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-arm64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-arm64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest; ln -s /root/go/bin/jb /usr/bin/jb WORKDIR /workspace ```
Passing the following flag to the C preprocessor as `CPPFLAGS` solved a similar issue on my M1 ``` -DPNG_ARM_NEON_OPT=0 ``` Pass the value as an env var with the key `CPPFLAGS` to the relevant service.
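To sketch what "pass the value as an env var to the relevant service" could look like in a Compose file (the service name `app` here is hypothetical, not from the question):

```yaml
services:
  app:                                   # hypothetical service name
    build: .
    environment:
      CPPFLAGS: "-DPNG_ARM_NEON_OPT=0"   # disable libpng's ARM NEON code paths
```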
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS and when I try to build the Dockerfile below I get the error shown below. Here is the command I am using to build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms but none of them worked for me. I am a bit lost about this platform issue on M1, but any help will be appreciated. What am I doing wrong? What might be the root cause of this? ``` Removing intermediate container 70af516d5d6b ---> a69229847153 Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb ---> Running in 13545862fffe qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory Removing intermediate container 13545862fffe ``` Dockerfile ``` FROM --platform=linux/amd64 ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
This resolved my issue. ``` FROM ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-arm64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-arm64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-arm64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest; ln -s /root/go/bin/jb /usr/bin/jb WORKDIR /workspace ```
Provided the base image includes the target architecture, another option that might work in your case is using Docker's built-in [`TARGETARCH`](https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope) build arg. This works for me on macOS M1. ``` FROM ubuntu:focal ARG TARGETARCH RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-${TARGETARCH}.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-${TARGETARCH}.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-${TARGETARCH}.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
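As a usage note, `TARGETARCH` is filled in automatically by BuildKit for whatever platform is being built; a minimal sketch with buildx, assuming buildx is installed, the builder supports multiple platforms, and the tag is taken from the question:

```bash
# TARGETARCH resolves to amd64 or arm64 for each requested platform
docker buildx build --platform linux/arm64,linux/amd64 -t te-grafana-dashboards-toolchain .
```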
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS and when I try to build the Dockerfile below I get the error shown below. Here is the command I am using to build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms but none of them worked for me. I am a bit lost about this platform issue on M1, but any help will be appreciated. What am I doing wrong? What might be the root cause of this? ``` Removing intermediate container 70af516d5d6b ---> a69229847153 Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb ---> Running in 13545862fffe qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory Removing intermediate container 13545862fffe ``` Dockerfile ``` FROM --platform=linux/amd64 ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
Incidentally, in case it's helpful to another who lands here: I have the same issue on an M1 Max MacBook Pro laptop attempting to do a `docker build` from a company repo that should be a pretty well traveled path, but I might be the only one (it's a small company) who has an ARM64 M1 "Apple Silicon" Mac. ***However, I found the solution (well, a solution) to my situation was exactly the opposite of the OP's, and that was to add*** `--platform=linux/amd64` ***to the FROM line of the Dockerfile.*** Otherwise it was using an ARM64 image to start from without me being the wiser, but later in the Dockerfile the build attempts to install and execute code compiled for x86\_64. Starting the build process by requesting the base image be `linux/amd64` ends up with the base image having `/lib64/ld-linux-x86-64.so.2`. This probably means everything is being emulated as x86\_64 on the ARM64 CPU via `qemu-x86_64`, so if you have the option to start from an ARM64 image and can compile, at build time, any software you can't install as ARM64 binaries, the container will probably run faster when you later run it on the M1-based Mac. I'm not able to try that myself just yet for this case.
Passing the following flag to the C preprocessor as `CPPFLAGS` solved a similar issue on my M1 ``` -DPNG_ARM_NEON_OPT=0 ``` Pass the value as an env var with the key `CPPFLAGS` to the relevant service.
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS and when I try to build the Dockerfile below I get the error shown below. Here is the command I am using to build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms but none of them worked for me. I am a bit lost about this platform issue on M1, but any help will be appreciated. What am I doing wrong? What might be the root cause of this? ``` Removing intermediate container 70af516d5d6b ---> a69229847153 Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb ---> Running in 13545862fffe qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory Removing intermediate container 13545862fffe ``` Dockerfile ``` FROM --platform=linux/amd64 ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
Incidentally, in case it's helpful to another who lands here: I have the same issue on an M1 Max MacBook Pro laptop attempting to do a `docker build` from a company repo that should be a pretty well traveled path, but I might be the only one (it's a small company) who has an ARM64 M1 "Apple Silicon" Mac. ***However, I found the solution (well, a solution) to my situation was exactly the opposite of the OP's, and that was to add*** `--platform=linux/amd64` ***to the FROM line of the Dockerfile.*** Otherwise it was using an ARM64 image to start from without me being the wiser, but later in the Dockerfile the build attempts to install and execute code compiled for x86\_64. Starting the build process by requesting the base image be `linux/amd64` ends up with the base image having `/lib64/ld-linux-x86-64.so.2`. This probably means everything is being emulated as x86\_64 on the ARM64 CPU via `qemu-x86_64`, so if you have the option to start from an ARM64 image and can compile, at build time, any software you can't install as ARM64 binaries, the container will probably run faster when you later run it on the M1-based Mac. I'm not able to try that myself just yet for this case.
Modifying the `Dockerfile` seems to be the most popular answer, but you can also set the `DOCKER_DEFAULT_PLATFORM` environment variable to `linux/amd64`. ```bash export DOCKER_DEFAULT_PLATFORM=linux/amd64 ``` The cause seems to reside in the AArch64 image.
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS and when I try to build the Dockerfile below I get the error shown below. Here is the command I am using to build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms but none of them worked for me. I am a bit lost about this platform issue on M1, but any help will be appreciated. What am I doing wrong? What might be the root cause of this? ``` Removing intermediate container 70af516d5d6b ---> a69229847153 Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb ---> Running in 13545862fffe qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory Removing intermediate container 13545862fffe ``` Dockerfile ``` FROM --platform=linux/amd64 ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
Incidentally, in case it's helpful to another who lands here: I have the same issue on an M1 Max MacBook Pro laptop attempting to do a `docker build` from a company repo that should be a pretty well traveled path, but I might be the only one (it's a small company) who has an ARM64 M1 "Apple Silicon" Mac. ***However, I found the solution (well, a solution) to my situation was exactly the opposite of the OP's, and that was to add*** `--platform=linux/amd64` ***to the FROM line of the Dockerfile.*** Otherwise it was using an ARM64 image to start from without me being the wiser, but later in the Dockerfile the build attempts to install and execute code compiled for x86\_64. Starting the build process by requesting the base image be `linux/amd64` ends up with the base image having `/lib64/ld-linux-x86-64.so.2`. This probably means everything is being emulated as x86\_64 on the ARM64 CPU via `qemu-x86_64`, so if you have the option to start from an ARM64 image and can compile, at build time, any software you can't install as ARM64 binaries, the container will probably run faster when you later run it on the M1-based Mac. I'm not able to try that myself just yet for this case.
Provided the base image includes the target architecture, another option that might work in your case is using Docker's built-in [`TARGETARCH`](https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope) build arg. This works for me on macOS M1. ``` FROM ubuntu:focal ARG TARGETARCH RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-${TARGETARCH}.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-${TARGETARCH}.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-${TARGETARCH}.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS and when I try to build the Dockerfile below I get the error shown below. Here is the command I am using to build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms but none of them worked for me. I am a bit lost about this platform issue on M1, but any help will be appreciated. What am I doing wrong? What might be the root cause of this? ``` Removing intermediate container 70af516d5d6b ---> a69229847153 Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb ---> Running in 13545862fffe qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory Removing intermediate container 13545862fffe ``` Dockerfile ``` FROM --platform=linux/amd64 ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
Modifying the `Dockerfile` seems to be the most popular answer, but you can also set the `DOCKER_DEFAULT_PLATFORM` environment variable to `linux/amd64`. ```bash export DOCKER_DEFAULT_PLATFORM=linux/amd64 ``` The cause seems to reside in the AArch64 image.
Passing the following flag to the C preprocessor as `CPPFLAGS` solved a similar issue on my M1 ``` -DPNG_ARM_NEON_OPT=0 ``` Pass the value as an env var with the key `CPPFLAGS` to the relevant service.
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS and when I try to build the Dockerfile below I get the error shown below. Here is the command I am using to build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms but none of them worked for me. I am a bit lost about this platform issue on M1, but any help will be appreciated. What am I doing wrong? What might be the root cause of this? ``` Removing intermediate container 70af516d5d6b ---> a69229847153 Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb ---> Running in 13545862fffe qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory Removing intermediate container 13545862fffe ``` Dockerfile ``` FROM --platform=linux/amd64 ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
Passing the following flag to the C preprocessor as `CPPFLAGS` solved a similar issue on my M1 ``` -DPNG_ARM_NEON_OPT=0 ``` Pass the value as an env var with the key `CPPFLAGS` to the relevant service.
Provided the base image includes the target architecture, another option that might work in your case is using Docker's built-in [`TARGETARCH`](https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope) build arg. This works for me on macOS M1. ``` FROM ubuntu:focal ARG TARGETARCH RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-${TARGETARCH}.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-${TARGETARCH}.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-${TARGETARCH}.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
71,040,681
I have Rancher Desktop (dockerd) on M1 macOS and when I try to build the Dockerfile below I get the error shown below. Here is the command I am using to build the image: `docker build -t te-grafana-dashboards-toolchain --no-cache .` I tried to change the platforms but none of them worked for me. I am a bit lost about this platform issue on M1, but any help will be appreciated. What am I doing wrong? What might be the root cause of this? ``` Removing intermediate container 70af516d5d6b ---> a69229847153 Step 5/6 : RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb ---> Running in 13545862fffe qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory Removing intermediate container 13545862fffe ``` Dockerfile ``` FROM --platform=linux/amd64 ubuntu:focal RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-amd64.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-amd64.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-amd64.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
2022/02/08
[ "https://Stackoverflow.com/questions/71040681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12530530/" ]
Modifying the `Dockerfile` seems to be the most popular answer, but you can also set the `DOCKER_DEFAULT_PLATFORM` environment variable to `linux/amd64`. ```bash export DOCKER_DEFAULT_PLATFORM=linux/amd64 ``` The cause seems to reside in the AArch64 image.
Provided the base image includes the target architecture, another option that might work in your case is using Docker's built-in [`TARGETARCH`](https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope) build arg. This works for me on macOS M1. ``` FROM ubuntu:focal ARG TARGETARCH RUN apt update; apt install -y curl jq build-essential python3.8 python3-pip docker-compose jsonnet bison mercurial RUN ln -s /usr/bin/python3.8 /usr/bin/python RUN curl -OL https://golang.org/dl/go1.17.linux-${TARGETARCH}.tar.gz; mkdir /etc/golang; tar -xvzf go1.17.linux-${TARGETARCH}.tar.gz -C /etc/golang; ln -s /etc/golang/go/bin/go /usr/bin/go; rm -f go1.17.linux-${TARGETARCH}.tar.gz RUN GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb; ln -s $(go env GOPATH)/bin/jb /usr/bin/jb WORKDIR /workspace ```
10,851,121
I am trying to use Python to extract certain information from HTML code. For example: ``` <a href="#tips">Visit the Useful Tips Section</a> and I would like to get result : Visit the Useful Tips Section <div id="menu" style="background-color:#FFD700;height:200px;width:100px;float:left;"> <b>Menu</b><br /> HTML<br /> CSS<br /> and I would like to get Menu HTML CSS ``` In other words, I wish to get everything between <> and <>. I am trying to write a Python function that takes the HTML code as a string and then extracts information from there. I am stuck at string.split('<').
2012/06/01
[ "https://Stackoverflow.com/questions/10851121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1401233/" ]
You should use a proper HTML parsing library, such as the [HTMLParser](http://docs.python.org/library/htmlparser.html) module.
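A minimal sketch of that approach on Python 2 (the class name is just for illustration; on Python 3 the module is `html.parser`):

```python
from HTMLParser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text that appears between tags."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

parser = TextExtractor()
parser.feed('<a href="#tips">Visit the Useful Tips Section</a>')
print ''.join(parser.chunks)  # Visit the Useful Tips Section
```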
``` import re string = '<a href="#tips">Visit the Useful Tips Section</a>' re.findall('<[^>]*>(.*)<[^>]*>', string) # returns ['Visit the Useful Tips Section'] ```
10,851,121
I am trying to use Python to extract certain information from HTML code. For example: ``` <a href="#tips">Visit the Useful Tips Section</a> and I would like to get result : Visit the Useful Tips Section <div id="menu" style="background-color:#FFD700;height:200px;width:100px;float:left;"> <b>Menu</b><br /> HTML<br /> CSS<br /> and I would like to get Menu HTML CSS ``` In other words, I wish to get everything between <> and <>. I am trying to write a Python function that takes the HTML code as a string and then extracts information from there. I am stuck at string.split('<').
2012/06/01
[ "https://Stackoverflow.com/questions/10851121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1401233/" ]
You should use a proper HTML parsing library, such as the [HTMLParser](http://docs.python.org/library/htmlparser.html) module.
I understand you are trying to strip out the HTML tags and keep only the text. You can define a regular expression that represents the tags, then substitute all matches with the empty string. Example: ``` import re def remove_html_tags(data): p = re.compile(r'<.*?>') return p.sub('', data) ``` References: [Example](http://love-python.blogspot.co.uk/2008/07/strip-html-tags-using-python.html) [Docs about python regular expressions](http://docs.python.org/library/re.html)
10,851,121
I am trying to use Python to extract certain information from HTML code. For example: ``` <a href="#tips">Visit the Useful Tips Section</a> and I would like to get result : Visit the Useful Tips Section <div id="menu" style="background-color:#FFD700;height:200px;width:100px;float:left;"> <b>Menu</b><br /> HTML<br /> CSS<br /> and I would like to get Menu HTML CSS ``` In other words, I wish to get everything between <> and <>. I am trying to write a Python function that takes the HTML code as a string and then extracts information from there. I am stuck at string.split('<').
2012/06/01
[ "https://Stackoverflow.com/questions/10851121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1401233/" ]
You should use a proper HTML parsing library, such as the [HTMLParser](http://docs.python.org/library/htmlparser.html) module.
You can use [`lxml`](http://lxml.de/) html parser. ``` >>> import lxml.html as lh >>> st = ''' load your above html content into a string ''' >>> d = lh.fromstring(st) >>> d.text_content() 'Visit the Useful Tips Section \nand I would like to get result : Visit the Useful Tips Section\n\n\nMenu\nHTML\nCSS\nand I would like to get Menu HTML CSS\n' ``` or you can do ``` >>> for content in d.text_content().split("\n"): ... if content: ... print content ... Visit the Useful Tips Section and I would like to get result : Visit the Useful Tips Section Menu HTML CSS and I would like to get Menu HTML CSS >>> ```
10,851,121
I am trying to use Python to extract certain information from HTML code. For example: ``` <a href="#tips">Visit the Useful Tips Section</a> and I would like to get result : Visit the Useful Tips Section <div id="menu" style="background-color:#FFD700;height:200px;width:100px;float:left;"> <b>Menu</b><br /> HTML<br /> CSS<br /> and I would like to get Menu HTML CSS ``` In other words, I wish to get everything between <> and <>. I am trying to write a Python function that takes the HTML code as a string and then extracts information from there. I am stuck at string.split('<').
2012/06/01
[ "https://Stackoverflow.com/questions/10851121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1401233/" ]
You should use a proper HTML parsing library, such as the [HTMLParser](http://docs.python.org/library/htmlparser.html) module.
I'd use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) - it gets much less cranky with malformed HTML.
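For instance, a short sketch assuming BeautifulSoup 4 is installed (`pip install beautifulsoup4`):

```python
from bs4 import BeautifulSoup

html = '<div id="menu"><b>Menu</b><br />HTML<br />CSS<br /></div>'
soup = BeautifulSoup(html)
print soup.get_text(separator=' ')  # Menu HTML CSS
```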
10,851,121
I am trying to use Python to extract certain information from HTML code. For example: ``` <a href="#tips">Visit the Useful Tips Section</a> and I would like to get result : Visit the Useful Tips Section <div id="menu" style="background-color:#FFD700;height:200px;width:100px;float:left;"> <b>Menu</b><br /> HTML<br /> CSS<br /> and I would like to get Menu HTML CSS ``` In other words, I wish to get everything between <> and <>. I am trying to write a Python function that takes the HTML code as a string and then extracts information from there. I am stuck at string.split('<').
2012/06/01
[ "https://Stackoverflow.com/questions/10851121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1401233/" ]
``` import re string = '<a href="#tips">Visit the Useful Tips Section</a>' re.findall('<[^>]*>(.*)<[^>]*>', string) # returns ['Visit the Useful Tips Section'] ```
I understand you are trying to strip out the HTML tags and keep only the text. You can define a regular expression that represents the tags, then substitute all matches with the empty string. Example: ``` import re def remove_html_tags(data): p = re.compile(r'<.*?>') return p.sub('', data) ``` References: [Example](http://love-python.blogspot.co.uk/2008/07/strip-html-tags-using-python.html) [Docs about python regular expressions](http://docs.python.org/library/re.html)
10,851,121
I am trying to use Python to extract certain information from HTML code. For example: ``` <a href="#tips">Visit the Useful Tips Section</a> and I would like to get result : Visit the Useful Tips Section <div id="menu" style="background-color:#FFD700;height:200px;width:100px;float:left;"> <b>Menu</b><br /> HTML<br /> CSS<br /> and I would like to get Menu HTML CSS ``` In other words, I wish to get everything between <> and <>. I am trying to write a Python function that takes the HTML code as a string and then extracts information from there. I am stuck at string.split('<').
2012/06/01
[ "https://Stackoverflow.com/questions/10851121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1401233/" ]
``` import re string = '<a href="#tips">Visit the Useful Tips Section</a>' re.findall('<[^>]*>(.*)<[^>]*>', string) # returns ['Visit the Useful Tips Section'] ```
I'd use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) - it gets much less cranky with malformed HTML.
10,851,121
I am trying to use Python to extract certain information from HTML code. For example: ``` <a href="#tips">Visit the Useful Tips Section</a> and I would like to get result : Visit the Useful Tips Section <div id="menu" style="background-color:#FFD700;height:200px;width:100px;float:left;"> <b>Menu</b><br /> HTML<br /> CSS<br /> and I would like to get Menu HTML CSS ``` In other words, I wish to get everything between <> and <>. I am trying to write a Python function that takes the HTML code as a string and then extracts information from there. I am stuck at string.split('<').
2012/06/01
[ "https://Stackoverflow.com/questions/10851121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1401233/" ]
You can use [`lxml`](http://lxml.de/) html parser. ``` >>> import lxml.html as lh >>> st = ''' load your above html content into a string ''' >>> d = lh.fromstring(st) >>> d.text_content() 'Visit the Useful Tips Section \nand I would like to get result : Visit the Useful Tips Section\n\n\nMenu\nHTML\nCSS\nand I would like to get Menu HTML CSS\n' ``` or you can do ``` >>> for content in d.text_content().split("\n"): ... if content: ... print content ... Visit the Useful Tips Section and I would like to get result : Visit the Useful Tips Section Menu HTML CSS and I would like to get Menu HTML CSS >>> ```
I understand you are trying to strip out the HTML tags and keep only the text. You can define a regular expression that represents the tags, then substitute all matches with the empty string. Example: ``` import re def remove_html_tags(data): p = re.compile(r'<.*?>') return p.sub('', data) ``` References: [Example](http://love-python.blogspot.co.uk/2008/07/strip-html-tags-using-python.html) [Docs about python regular expressions](http://docs.python.org/library/re.html)
10,851,121
I am trying to use Python to extract certain information from HTML code. For example: ``` <a href="#tips">Visit the Useful Tips Section</a> and I would like to get result : Visit the Useful Tips Section <div id="menu" style="background-color:#FFD700;height:200px;width:100px;float:left;"> <b>Menu</b><br /> HTML<br /> CSS<br /> and I would like to get Menu HTML CSS ``` In other words, I wish to get everything between <> and <>. I am trying to write a Python function that takes the HTML code as a string and then extracts information from there. I am stuck at string.split('<').
2012/06/01
[ "https://Stackoverflow.com/questions/10851121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1401233/" ]
You can use [`lxml`](http://lxml.de/) html parser. ``` >>> import lxml.html as lh >>> st = ''' load your above html content into a string ''' >>> d = lh.fromstring(st) >>> d.text_content() 'Visit the Useful Tips Section \nand I would like to get result : Visit the Useful Tips Section\n\n\nMenu\nHTML\nCSS\nand I would like to get Menu HTML CSS\n' ``` or you can do ``` >>> for content in d.text_content().split("\n"): ... if content: ... print content ... Visit the Useful Tips Section and I would like to get result : Visit the Useful Tips Section Menu HTML CSS and I would like to get Menu HTML CSS >>> ```
I'd use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) - it gets much less cranky with malformed HTML.
2,509,927
I have some Python code (some functions) and I want to use it inside a bigger Matlab program. How can I do this? Any help will be useful.
2010/03/24
[ "https://Stackoverflow.com/questions/2509927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275695/" ]
You can use the [system](http://www.mathworks.com/access/helpdesk/help/techdoc/ref/system.html) command to execute the Python code externally. To link it in more "natively" I think you'll have to go through C. That is, embed your Python code in C code and then expose it with a DLL to Matlab. P.S. On Windows you can also expose Python code to Matlab via COM.
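As a rough sketch of the external route from the Matlab side (the script name and arguments are hypothetical):

```matlab
% Run a Python script externally and capture whatever it prints
[status, result] = system('python my_functions.py arg1 arg2');
disp(result)
```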
There is a library called [PyMat](http://claymore.engineer.gvsu.edu/~steriana/Python/pymat.html). It allows you to call Python code from Matlab.
2,509,927
I have some Python code (some functions) and I want to use it inside a bigger Matlab program. How can I do this? Any help will be useful.
2010/03/24
[ "https://Stackoverflow.com/questions/2509927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275695/" ]
You should probably avoid this. Use one or the other - preferably Python. But if you have to use both, you could try this: <http://github.com/kw/pymex> (scroll down for the readme) Disclaimer: I wrote this. It may be somewhat difficult to get it to compile and work, particularly if you're on Windows (there is a pre-compiled win32 binary in the downloads area that might work). I don't have access to a lot of different machines with Matlab on them, so I haven't got that nailed down yet.
You can use the [system](http://www.mathworks.com/access/helpdesk/help/techdoc/ref/system.html) command to execute the Python code externally. To link it in more "natively" I think you'll have to go through C. That is, embed your Python code in C code and then expose it with a DLL to Matlab. P.S. On Windows you can also expose Python code to Matlab via COM.
2,509,927
I have some Python code (some functions) and I want to use it inside a bigger Matlab program. How can I do this? Any help will be useful.
2010/03/24
[ "https://Stackoverflow.com/questions/2509927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275695/" ]
The only thing I know of is pythoncall, but it is a little out of date and I'm not sure it will work reliably with recent versions of Matlab. <http://www.elisanet.fi/ptvirtan/software/pythoncall/index.html> Otherwise you would have to interact with Matlab through the shell (a bit of a pain, I know). If you are dealing with large amounts of data and are on an OS where you can easily create a ramdisk, saving Matlab files to the ramdisk and passing the filename would be one way to get data from Matlab to Python without too much of a performance penalty.
There is a library called [PyMat](http://claymore.engineer.gvsu.edu/~steriana/Python/pymat.html). It allows you to call Python code from Matlab.
2,509,927
I have some Python code (some functions) and I want to use it inside a bigger Matlab program. How can I do this? Any help will be useful.
2010/03/24
[ "https://Stackoverflow.com/questions/2509927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275695/" ]
You should probably avoid this. Use one or the other - preferably Python. But if you have to use both, you could try this: <http://github.com/kw/pymex> (scroll down for the readme) Disclaimer: I wrote this. It may be somewhat difficult to get it to compile and work, particularly if you're on Windows (there is a pre-compiled win32 binary in the downloads area that might work). I don't have access to a lot of different machines with Matlab on them, so I haven't got that nailed down yet.
There is a library called [PyMat](http://claymore.engineer.gvsu.edu/~steriana/Python/pymat.html). It allows you to call Python code from Matlab.
2,509,927
I have some Python code (some functions) and I want to use it inside a bigger Matlab program. How can I do this? Any help will be useful.
2010/03/24
[ "https://Stackoverflow.com/questions/2509927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275695/" ]
You should probably avoid this. Use one or the other - preferably Python. But if you have to use both, you could try this: <http://github.com/kw/pymex> (scroll down for the readme) Disclaimer: I wrote this. It may be somewhat difficult to get it to compile and work, particularly if you're on Windows (there is a pre-compiled win32 binary in the downloads area that might work). I don't have access to a lot of different machines with Matlab on them, so I haven't got that nailed down yet.
The only thing I know of is pythoncall, but it is a little out of date and I'm not sure it will work reliably with recent versions of Matlab. <http://www.elisanet.fi/ptvirtan/software/pythoncall/index.html> Otherwise you would have to interact with Matlab through the shell (a bit of a pain, I know). If you are dealing with large amounts of data and are on an OS where you can easily create a ramdisk, saving Matlab files to the ramdisk and passing the filename would be one way to get data from Matlab to Python without too much of a performance penalty.
39,136,134
**cat test.py** ``` from importlib import import_module bar = import_module('bar', package='project') ``` **ls project/** ``` __init__.py __init__.pyc bar.py bar.pyc ``` **python test.py** ``` Traceback (most recent call last): File "test.py", line 5, in <module> bar = import_module('bar', package='project') File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) ImportError: No module named bar ``` Listing imported modules (sys.modules) doesn't show any module *project*. I can import *bar* using the python shell. Any ideas?
2016/08/25
[ "https://Stackoverflow.com/questions/39136134", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252307/" ]
It needs a dot in front of bar .. :-( ``` bar = import_module('.bar', package='project') ```
The documentation for [importlib](https://docs.python.org/2/library/importlib.html) says that > > If the name is specified in relative terms, then the package argument must be specified to the package which is to act as the anchor for resolving the package name (e.g. import\_module('..mod', 'pkg.subpkg') will import pkg.mod). > > > Thus the expression can also be written as ``` bar = import_module('..bar', package='project.bar') ```
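Equivalently, if relative resolution isn't needed, the fully qualified name can be imported without the `package` argument at all; a small sketch:

```python
from importlib import import_module

bar = import_module('project.bar')  # absolute name, no package= required
```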
43,600,114
I'm using Python 3.4 and I am trying to make a recursive guessing game. The game should take a min value and a max value and have a "magic" number. The game is going to generate a random number in between the range of x and y, then ask the user to insert y for yes, l for too low, and h for too high. If it is yes, show a 'congratulations' message, then exit. If it is too low, take the min value x and add 1 to it, and that will be the new min value. Same logic for h, too high. Then regenerate the random number. I'm getting stuck on where to call the recursion. ``` def recursive_guess(x,y,): correct = False while not correct: print('I will guess a number y for yes, l for too low, h for too high') guess = random.randrange(x, y) print(guess) anwser = input('y for yes l for low h for high') if anwser == 'y': print('Got it right') correct = True elif anwser == 'l': guess = random.randrange(minVal + 1, maxVal) elif anwser == 'h': guess = random.randomrange(minVal, maxVal-1) ```
2017/04/25
[ "https://Stackoverflow.com/questions/43600114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7916637/" ]
This can be done with a simple function using four `if` statements to avoid adding out-of-bounds values: ``` #include <stdio.h> #define BOARD_SZ 10 int sum_neighborhood(int, int, int [][BOARD_SZ], int, int); int main(void) { int board[BOARD_SZ][BOARD_SZ] = { { 1, 0, 0, 1, 1, 1, 1, 1, 0, 1 }, { 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 }, { 0, 1, 0, 1, 0, 1, 0, 0, 0, 0 }, { 0, 0, 0, 1, 0, 1, 0, 1, 1, 1 }, { 1, 0, 1, 1, 1, 0, 0, 0, 0, 1 }, { 0, 1, 1, 1, 1, 1, 0, 1, 1, 0 }, { 1, 1, 1, 1, 1, 1, 0, 0, 1, 1 }, { 0, 1, 0, 0, 1, 0, 1, 0, 1, 0 }, { 0, 1, 0, 0, 0, 1, 1, 0, 0, 1 }, { 1, 1, 0, 1, 1, 0, 1, 0, 0, 0 } }; printf("Sum at [0][0] is %d\n", sum_neighborhood(0, 0, board, BOARD_SZ, BOARD_SZ)); printf("Sum at [0][1] is %d\n", sum_neighborhood(0, 1, board, BOARD_SZ, BOARD_SZ)); printf("Sum at [5][3] is %d\n", sum_neighborhood(5, 3, board, BOARD_SZ, BOARD_SZ)); return 0; } int sum_neighborhood(int row, int col, int arr[][BOARD_SZ], int rows, int cols) { int sum = arr[row][col]; if (row - 1 >= 0) { sum += arr[row-1][col]; } if (row + 1 < rows) { sum += arr[row+1][col]; } if (col - 1 >= 0) { sum += arr[row][col-1]; } if (col + 1 < cols) { sum += arr[row][col+1]; } return sum; } ``` Program output: ```none Sum at [0][0] is 2 Sum at [0][1] is 2 Sum at [5][3] is 5 ```
``` int add(const int **arr, size_t nrows, size_t ncols) { int sum = 0; for (size_t i = 0; i < nrows; ++i) for (size_t j = 0; j < ncols; ++j) sum += arr[i][j]; return sum; } ``` Use a pair of nested loops to walk the rows and columns.
43,600,114
I'm using Python 3.4 and I am trying to make a recursive guessing game. The game should take a min value and a max value and have a "magic" number. The game is going to generate a random number in between the range of x and y, then ask the user to insert y for yes, l for too low, and h for too high. If it is yes, show a 'congratulations' message, then exit. If it is too low, take the min value x and add 1 to it, and that will be the new min value. Same logic for h, too high. Then regenerate the random number. I'm getting stuck on where to call the recursion. ``` def recursive_guess(x,y,): correct = False while not correct: print('I will guess a number y for yes, l for too low, h for too high') guess = random.randrange(x, y) print(guess) anwser = input('y for yes l for low h for high') if anwser == 'y': print('Got it right') correct = True elif anwser == 'l': guess = random.randrange(minVal + 1, maxVal) elif anwser == 'h': guess = random.randomrange(minVal, maxVal-1) ```
2017/04/25
[ "https://Stackoverflow.com/questions/43600114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7916637/" ]
This can be done with a simple function using four `if` statements to avoid adding out-of-bounds values: ``` #include <stdio.h> #define BOARD_SZ 10 int sum_neighborhood(int, int, int [][BOARD_SZ], int, int); int main(void) { int board[BOARD_SZ][BOARD_SZ] = { { 1, 0, 0, 1, 1, 1, 1, 1, 0, 1 }, { 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 }, { 0, 1, 0, 1, 0, 1, 0, 0, 0, 0 }, { 0, 0, 0, 1, 0, 1, 0, 1, 1, 1 }, { 1, 0, 1, 1, 1, 0, 0, 0, 0, 1 }, { 0, 1, 1, 1, 1, 1, 0, 1, 1, 0 }, { 1, 1, 1, 1, 1, 1, 0, 0, 1, 1 }, { 0, 1, 0, 0, 1, 0, 1, 0, 1, 0 }, { 0, 1, 0, 0, 0, 1, 1, 0, 0, 1 }, { 1, 1, 0, 1, 1, 0, 1, 0, 0, 0 } }; printf("Sum at [0][0] is %d\n", sum_neighborhood(0, 0, board, BOARD_SZ, BOARD_SZ)); printf("Sum at [0][1] is %d\n", sum_neighborhood(0, 1, board, BOARD_SZ, BOARD_SZ)); printf("Sum at [5][3] is %d\n", sum_neighborhood(5, 3, board, BOARD_SZ, BOARD_SZ)); return 0; } int sum_neighborhood(int row, int col, int arr[][BOARD_SZ], int rows, int cols) { int sum = arr[row][col]; if (row - 1 >= 0) { sum += arr[row-1][col]; } if (row + 1 < rows) { sum += arr[row+1][col]; } if (col - 1 >= 0) { sum += arr[row][col-1]; } if (col + 1 < cols) { sum += arr[row][col+1]; } return sum; } ``` Program output: ```none Sum at [0][0] is 2 Sum at [0][1] is 2 Sum at [5][3] is 5 ```
One easy approach is to only consider values with valid indices, but iterate over the entire square around a given index `i`, `j`. Thus: ``` typedef struct { int ** data; ///< data in row order int rows, columns; ///< size of the data } Array; /// arr - pointer to the data /// i, j - row and column to add values around int sum(const Array * arr, int i, int j) { int sumVal = 0; for (int k = i-1; k <= i+1; k++) for (int l = j-1; l <= j+1; l++) /* <= so the column j+1 is included */ if (k >= 0 && k < arr->rows && l >= 0 && l < arr->columns) sumVal += arr->data[k][l]; return sumVal; } ``` Then, to find the first largest block: ``` #include <limits.h> typedef struct { int sum, i, j; } Sum_t; Sum_t largest(const Array * arr) { Sum_t max = {INT_MIN, 0, 0}; for (int i = 0; i < arr->rows; ++i) for (int j = 0; j < arr->columns; ++j) { int const s = sum(arr, i, j); if (s > max.sum) { max.sum = s; max.i = i; max.j = j; } } return max; } ```
11,463,749
I visited <http://www.sublimetext.com> today and was intrigued by the screencast animation the website has on its opening page. It looks like a mashup between video and slideshow. I've seen them at other modern websites as well, but I assumed it was some HTML5 video tag trickery. But when I looked at the source of the sublimetext webpage, I was confused. The animation on this page is created on an HTML5 2D canvas with plain javascript using base PNG images. Each slide is loaded from a PNG file. The animation is achieved by modifying only a handful of pixels from the image. The animation javascript applies these changes on the original PNGs at regular intervals. You will find these deltas stored in \*\_timeline variables in the script. My question is what tool can generate such deltas? How can one record one's desktop screen and create such base PNG + animation deltas? I like this approach because it seems the most efficient format for screencasts, where changes in consecutive frames are minimal. **Update 1** I know there are techniques to achieve this using GIF (Check <https://askubuntu.com/q/107726>), but what cool tool can generate this javascript code that can convert PNG deltas to animation? Googling hasn't helped me find it out. **Update 2** The author of the screencast (and sublimetext) replied to my question on the sublimetext forum. He did it using a custom python script and plans to write a blog post about it sometime <http://www.sublimetext.com/forum/viewtopic.php?p=34252#p34252>
2012/07/13
[ "https://Stackoverflow.com/questions/11463749", "https://Stackoverflow.com", "https://Stackoverflow.com/users/174527/" ]
Jon Skinner, the creator of Sublime Text, wrote about the process [on his website](http://www.sublimetext.com/~jps/animated_gifs_the_hard_way.html). He also published the encoder he wrote [on GitHub](https://github.com/sublimehq/anim_encoder).
Look at one of their images and you'll understand how it works: <http://www.sublimetext.com/anim/command_palette_packed.png> They just cut and insert the appropriate portions of the image into the appropriate positions of the canvas element. If your question is how to make such an image, I don't have an idea; I guess with some software, not by hand, but I don't know of such software...
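The cut-and-insert step described above maps onto the 9-argument form of the canvas `drawImage` call; here is a hedged sketch (the coordinates and the element id are made up for illustration):

```javascript
var img = new Image();
img.onload = function () {
    var ctx = document.getElementById('anim').getContext('2d');
    // Copy a sw x sh patch from (sx, sy) in the packed PNG
    // to (dx, dy) on the canvas: one "delta" of the animation.
    var sx = 0, sy = 0, sw = 32, sh = 32, dx = 100, dy = 40;
    ctx.drawImage(img, sx, sy, sw, sh, dx, dy, sw, sh);
};
img.src = 'command_palette_packed.png';
```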
52,678,981
I'm new to programming in Python and working with scrapy. I am facing an error with the web crawling. I have used similar help pages on this site and even followed a tutorial from beginning to end, to no avail; any help will be appreciated. The error says: Spider error processing http://quotes.toscrape.com/> (referer: None) Traceback (most recent call last): I found some similar code and the similar code runs well, but what I write doesn't. Here is my code: ``` #-*- coding: utf-8 -*- import scrapy class QuotesSpider(scrapy.Spider): name = "quotes" allowed_domains = ["quotes.toscrape.com"] start_urls = ( 'http://quotes.toscrape.com/', ) def parse(self, response): quotes = response.xpath('//*[@class="quote"]') for quote in quotes: text = quote.xpath('.//*[@class="text"]/text()').extract_first() author = quote.xpath('.//*[@itemprop="author"]/text()').extract_first() tags = quote.xpath('.//*[@itemprop="keywords"]/@content').extract_first() print '\n' print text print author print tags print '\n' next_page_url = response.xpath('//*[@class="next"]/a/@href').extract_first() absolute_next_page_url = response.urljoin(next_page_url) yield scrapy.Request(absolute_next_page_url) ``` Here is the command prompt output: ``` C:\Users\cancun\Desktop\Dosyalar\Coding\Phyton\Spider\quotes_spider>scrapy crawl quotes 2018-10-06 14:42:07 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: quotes_spider) 2018-10-06 14:42:07 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 2.7.15 |Anaconda, Inc.| (default, May 1 2018, 18:37:09) [MSC v.1500 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2o 27 Mar 2018), cryptography 2.2.2, Platform Windows-10-10.0.16299 2018-10-06 14:42:07 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'quotes_spider.spiders', 'SPIDER_MODULES': ['quotes_spider.spiders'], 'BOT_NAME': 'quotes_spider'} 2018-10-06 14:42:07 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.logstats.LogStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.corestats.CoreStats'] 2018-10-06 14:42:07 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2018-10-06 14:42:07 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2018-10-06 14:42:07 [scrapy.middleware] INFO: Enabled item pipelines: [] 2018-10-06 14:42:07 [scrapy.core.engine] INFO: Spider opened 2018-10-06 14:42:07 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2018-10-06 14:42:07 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 
127.0.0.1:6024 2018-10-06 14:42:07 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/> (referer: None) 2018-10-06 14:42:07 [scrapy.core.scraper] ERROR: Spider error processing <GET http://quotes.toscrape.com/> (referer: None) Traceback (most recent call last): File "c:\programdata\anaconda2\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback yield next(it) File "c:\programdata\anaconda2\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output for x in result: File "c:\programdata\anaconda2\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr> return (_set_referer(r) for r in result or ()) File "c:\programdata\anaconda2\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr> return (r for r in result or () if _filter(r)) File "c:\programdata\anaconda2\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr> return (r for r in result or () if _filter(r)) File "C:\Users\cancun\Desktop\Dosyalar\Coding\Phyton\Spider\quotes_spider\quotes_spider\spiders\quotes.py", line 20, in parse print text File "c:\programdata\anaconda2\lib\encodings\cp437.py", line 12, in encode return codecs.charmap_encode(input,errors,encoding_map) UnicodeEncodeError: 'charmap' codec can't encode character u'\u201c' in position 0: character maps to <undefined> 2018-10-06 14:42:07 [scrapy.core.engine] INFO: Closing spider (finished) 2018-10-06 14:42:07 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 218, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 2333, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2018, 10, 6, 11, 42, 7, 594000), 'log_count/DEBUG': 2, 'log_count/ERROR': 1, 'log_count/INFO': 7, 'response_received_count': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'spider_exceptions/UnicodeEncodeError': 1, 'start_time': datetime.datetime(2018, 10, 6, 11, 42, 7, 309000)} 2018-10-06 14:42:07 [scrapy.core.engine] INFO: Spider closed (finished) ``` Thanks!
2018/10/06
[ "https://Stackoverflow.com/questions/52678981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10465578/" ]
The problem is that your command prompt uses the [cp437](https://en.wikipedia.org/wiki/Code_page_437) character set, which doesn't support some of the characters that you have just scraped (for example, “ and ”). I don't think it is possible to print those characters to the Windows shell (maybe new PowerShells support it, I don't know). You may create a new file, select a rich character set for it, and write all your output there: ``` import io out_file = io.open('output.txt', 'a', encoding='UTF-8') ``` If you really want to use the default character encoding that your operating system uses, create the file like this: ``` import io out_file = io.open('output.txt', 'a', errors='backslashreplace') ``` With the latter option, you will see something like this: ``` \u201cThe world as we have created it is a process of our thinking.\u201d ```
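Another option that sometimes helps, offered here as an assumption rather than a guarantee: force Python's stdout encoding with the `PYTHONIOENCODING` environment variable before running the crawl (the console font still has to contain the glyphs):

```
set PYTHONIOENCODING=utf-8
scrapy crawl quotes
```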
I changed my code a little and it works the way I expected. Here is the updated code: ``` #-*- coding: utf-8 -*- import scrapy class QuotesSpider(scrapy.Spider): name = "quotes" allowed_domains = ["quotes.toscrape.com"] start_urls = ( 'http://quotes.toscrape.com/', ) def parse(self, response): quotes = response.xpath('//*[@class="quote"]') for quote in quotes: text = quote.xpath('.//*[@class="text"]/text()').extract_first() author = quote.xpath('.//*[@itemprop="author"]/text()').extract_first() tags = quote.xpath('.//*[@itemprop="keywords"]/@content').extract_first() yield{'Text':text, 'Author':author, 'Tags':tags} next_page_url = response.xpath('//*[@class="next"]/a/@href').extract_first() absolute_next_page_url = response.urljoin(next_page_url) yield scrapy.Request(absolute_next_page_url) ``` Thanks for all the support!!!
356,138
I'm having a very peculiar problem in my Python FastCGI code - sys.stdout has a file descriptor of '-1', so I can't write to it. I'm checking this at the first line of my program, so I know it's not any of my code changing it. I've tried `sys.stdout = os.fdopen(1, 'w')`, but anything written there won't get to my browser. The same application works without difficulty under Apache. I'm using the Microsoft-provided FastCGI extension for IIS documented here: <http://learn.iis.net/page.aspx/248/configuring-fastcgi-extension-for-iis60/> I am using these settings in fcgiext.ini: ``` ExePath=C:\Python23\python.exe Arguments=-u C:\app\app_wsgi.py FlushNamedPipe=1 RequestTimeout=45 IdleTimeout=120 ActivityTimeout=30 ``` Can anyone tell what's wrong or tell me where I should look to find out? All suggestions greatly appreciated...
2008/12/10
[ "https://Stackoverflow.com/questions/356138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/44992/" ]
Forgive me if this is a dumb question, but I notice this line in your config file: > > Arguments=-u C:\app\app\_wsgi.py > > > Are you running a WSGI application or a FastCGI app? There *is* a difference. In WSGI, writing to stdout isn't a good idea. Your program should have an application object that can be called with an environment dict and a start\_response function (for more info, see [PEP 333](http://www.python.org/dev/peps/pep-0333/)). At any rate, your application returns its response by handing back an iterable object that contains the response body, not by writing to stdout. Either way, you should also consider using [isapi-wsgi](http://code.google.com/p/isapi-wsgi/). I've never used it myself, but I hear good things about it.
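For reference, a minimal PEP 333-style application object looks roughly like this (Python 2, illustrative only):

```python
def application(environ, start_response):
    # The response body is returned as an iterable, never printed to stdout
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from WSGI\n']
```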
On Windows, it's possible to launch a process without a valid stdin and stdout. For example, if you execute a Python script with pythonw.exe, stdout is invalid, and if you insist on writing to it, it will block after 140 characters or so. Writing to a destination other than stdout looks like the safest solution.
356,138
I'm having a very peculiar problem in my Python FastCGI code - sys.stdout has a file descriptor of '-1', so I can't write to it. I'm checking this at the first line of my program, so I know it's not any of my code changing it. I've tried `sys.stdout = os.fdopen(1, 'w')`, but anything written there won't get to my browser. The same application works without difficulty under Apache. I'm using the Microsoft-provided FastCGI extension for IIS documented here: <http://learn.iis.net/page.aspx/248/configuring-fastcgi-extension-for-iis60/> I am using these settings in fcgiext.ini: ``` ExePath=C:\Python23\python.exe Arguments=-u C:\app\app_wsgi.py FlushNamedPipe=1 RequestTimeout=45 IdleTimeout=120 ActivityTimeout=30 ``` Can anyone tell what's wrong or tell me where I should look to find out? All suggestions greatly appreciated...
2008/12/10
[ "https://Stackoverflow.com/questions/356138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/44992/" ]
Forgive me if this is a dumb question, but I notice this line in your config file: > > Arguments=-u C:\app\app\_wsgi.py > > > Are you running a WSGI application or a FastCGI app? There *is* a difference. In WSGI, writing to stdout isn't a good idea. Your program should have an application object that can be called with an environment dict and a start\_response function (for more info, see [PEP 333](http://www.python.org/dev/peps/pep-0333/)). At any rate, your application returns its response by handing back an iterable object that contains the response body, not by writing to stdout. Either way, you should also consider using [isapi-wsgi](http://code.google.com/p/isapi-wsgi/). I've never used it myself, but I hear good things about it.
Following PEP 333, you can try to log to environ['wsgi.errors'], which is usually the logger of the web server itself when you use FastCGI. Of course this is only available while a request is being handled, not during application startup. You can find an example in the Pylons code: <http://pylonshq.com/docs/en/0.9.7/logging/#logging-to-wsgi-errors>
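A minimal sketch of what that looks like inside a WSGI application (assuming the server provides the standard `wsgi.errors` stream):

```python
def application(environ, start_response):
    # wsgi.errors is a file-like object supplied by the server; writes
    # typically end up in the web server's error log.
    environ['wsgi.errors'].write('debugging output\n')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']
```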
356,138
I'm having a very peculiar problem in my Python FastCGI code - sys.stdout has a file descriptor of '-1', so I can't write to it. I'm checking this at the first line of my program, so I know it's not any of my code changing it. I've tried `sys.stdout = os.fdopen(1, 'w')`, but anything written there won't get to my browser. The same application works without difficulty under Apache. I'm using the Microsoft-provided FastCGI extension for IIS documented here: <http://learn.iis.net/page.aspx/248/configuring-fastcgi-extension-for-iis60/> I am using these settings in fcgiext.ini: ``` ExePath=C:\Python23\python.exe Arguments=-u C:\app\app_wsgi.py FlushNamedPipe=1 RequestTimeout=45 IdleTimeout=120 ActivityTimeout=30 ``` Can anyone tell what's wrong or tell me where I should look to find out? All suggestions greatly appreciated...
2008/12/10
[ "https://Stackoverflow.com/questions/356138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/44992/" ]
Do you have to use FastCGI? If not, you may want to try an ISAPI WSGI method. I have had success using: <http://code.google.com/p/isapi-wsgi/> and have also used PyISAPIe in the past: <http://sourceforge.net/apps/trac/pyisapie>
On Windows, it's possible to launch a process without a valid stdin and stdout. For example, if you execute a Python script with pythonw.exe, the stdout is invalid, and if you insist on writing to it, it will block after 140 characters or something. Writing to a destination other than stdout looks like the safest solution.
356,138
I'm having a very peculiar problem in my Python FastCGI code - sys.stdout has a file descriptor of '-1', so I can't write to it. I'm checking this at the first line of my program, so I know it's not any of my code changing it. I've tried `sys.stdout = os.fdopen(1, 'w')`, but anything written there won't get to my browser. The same application works without difficulty under Apache. I'm using the Microsoft-provided FastCGI extension for IIS documented here: <http://learn.iis.net/page.aspx/248/configuring-fastcgi-extension-for-iis60/> I am using these settings in fcgiext.ini: ``` ExePath=C:\Python23\python.exe Arguments=-u C:\app\app_wsgi.py FlushNamedPipe=1 RequestTimeout=45 IdleTimeout=120 ActivityTimeout=30 ``` Can anyone tell what's wrong or tell me where I should look to find out? All suggestions greatly appreciated...
2008/12/10
[ "https://Stackoverflow.com/questions/356138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/44992/" ]
I believe having stdout closed/invalid is in accordance to the [FastCGI spec](http://www.fastcgi.com/devkit/doc/fcgi-spec.html): > > The Web server leaves a single file > descriptor, FCGI\_LISTENSOCK\_FILENO, > open when the application begins > execution. This descriptor refers to a > listening socket created by the Web > server. > > > FCGI\_LISTENSOCK\_FILENO equals > STDIN\_FILENO. The standard descriptors > STDOUT\_FILENO and STDERR\_FILENO are > closed when the application begins > execution. A reliable method for an > application to determine whether it > was invoked using CGI or FastCGI is to > call > getpeername(FCGI\_LISTENSOCK\_FILENO), > which returns -1 with errno set to > ENOTCONN for a FastCGI application. > > >
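A rough Python sketch of the detection trick the spec describes (untested against a real FastCGI deployment, so treat it as an illustration only):

```python
import errno
import socket

FCGI_LISTENSOCK_FILENO = 0  # same descriptor number as stdin

def invoked_as_fastcgi():
    try:
        s = socket.fromfd(FCGI_LISTENSOCK_FILENO,
                          socket.AF_INET, socket.SOCK_STREAM)
        s.getpeername()
    except socket.error as e:
        # A listening socket has no peer, so FastCGI yields ENOTCONN here.
        return e.errno == errno.ENOTCONN
    return False
```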
On Windows, it's possible to launch a process without a valid stdin and stdout. For example, if you execute a Python script with pythonw.exe, the stdout is invalid, and if you insist on writing to it, it will block after 140 characters or something. Writing to a destination other than stdout looks like the safest solution.
356,138
I'm having a very peculiar problem in my Python FastCGI code - sys.stdout has a file descriptor of '-1', so I can't write to it. I'm checking this at the first line of my program, so I know it's not any of my code changing it. I've tried `sys.stdout = os.fdopen(1, 'w')`, but anything written there won't get to my browser. The same application works without difficulty under Apache. I'm using the Microsoft-provided FastCGI extension for IIS documented here: <http://learn.iis.net/page.aspx/248/configuring-fastcgi-extension-for-iis60/> I am using these settings in fcgiext.ini: ``` ExePath=C:\Python23\python.exe Arguments=-u C:\app\app_wsgi.py FlushNamedPipe=1 RequestTimeout=45 IdleTimeout=120 ActivityTimeout=30 ``` Can anyone tell what's wrong or tell me where I should look to find out? All suggestions greatly appreciated...
2008/12/10
[ "https://Stackoverflow.com/questions/356138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/44992/" ]
Do you have to use FastCGI? If not, you may want to try an ISAPI WSGI method. I have had success using: <http://code.google.com/p/isapi-wsgi/> and have also used PyISAPIe in the past: <http://sourceforge.net/apps/trac/pyisapie>
Following PEP 333, you can try to log to environ['wsgi.errors'], which is usually the logger of the web server itself when you use FastCGI. Of course this is only available while a request is being handled, not during application startup. You can find an example in the Pylons code: <http://pylonshq.com/docs/en/0.9.7/logging/#logging-to-wsgi-errors>
356,138
I'm having a very peculiar problem in my Python FastCGI code - sys.stdout has a file descriptor of '-1', so I can't write to it. I'm checking this at the first line of my program, so I know it's not any of my code changing it. I've tried `sys.stdout = os.fdopen(1, 'w')`, but anything written there won't get to my browser. The same application works without difficulty under Apache. I'm using the Microsoft-provided FastCGI extension for IIS documented here: <http://learn.iis.net/page.aspx/248/configuring-fastcgi-extension-for-iis60/> I am using these settings in fcgiext.ini: ``` ExePath=C:\Python23\python.exe Arguments=-u C:\app\app_wsgi.py FlushNamedPipe=1 RequestTimeout=45 IdleTimeout=120 ActivityTimeout=30 ``` Can anyone tell what's wrong or tell me where I should look to find out? All suggestions greatly appreciated...
2008/12/10
[ "https://Stackoverflow.com/questions/356138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/44992/" ]
I believe having stdout closed/invalid is in accordance to the [FastCGI spec](http://www.fastcgi.com/devkit/doc/fcgi-spec.html): > > The Web server leaves a single file > descriptor, FCGI\_LISTENSOCK\_FILENO, > open when the application begins > execution. This descriptor refers to a > listening socket created by the Web > server. > > > FCGI\_LISTENSOCK\_FILENO equals > STDIN\_FILENO. The standard descriptors > STDOUT\_FILENO and STDERR\_FILENO are > closed when the application begins > execution. A reliable method for an > application to determine whether it > was invoked using CGI or FastCGI is to > call > getpeername(FCGI\_LISTENSOCK\_FILENO), > which returns -1 with errno set to > ENOTCONN for a FastCGI application. > > >
Following PEP 333, you can try to log to environ['wsgi.errors'], which is usually the logger of the web server itself when you use FastCGI. Of course this is only available while a request is being handled, not during application startup. You can find an example in the Pylons code: <http://pylonshq.com/docs/en/0.9.7/logging/#logging-to-wsgi-errors>
41,231,316
I've been trying to write this reduce method and I can't find a nice way to do it in java. I managed in python but it makes use of lots of python stuff and porting that to java seems like a real pain. Is there a more java way to do it? Here's some test code, that should show what I mean if the title wasn't clear. My python test code: ``` def reduce_(duplicated): def get_factors(n): return set(reduce(list.__add__, ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0))) factors = sorted(list(get_factors(len(duplicated)))) for factor in factors: chunks = set([tuple(duplicated[i:i + factor]) for i in xrange(0, len(duplicated), factor)]) if len(chunks) == 1: return list(chunks.pop()) return duplicated def verify(expected, duplicated): try: result = reduce_(duplicated) assert (expected == result) print expected, "passed" except AssertionError: print "expected", expected, "!=", duplicated #should return the same verify([1, 2, 3], [1,2,3]) verify([1,2], [1,2]) verify([1,1,2], [1,1,2]) verify([5,8,8], [5,8,8]) verify([8], [8]) verify([1,8,1], [1,8,1]) verify([5,2,2,5], [5,2,2,5]) verify([5,5,2,2], [5,5,2,2]) # repeated only once verify([1, 2, 3], [1,2,3,1,2,3]) verify([1,2], [1,2,1,2]) verify([1,1,2], [1,1,2,1,1,2]) verify([5,8,8], [5,8,8,5,8,8]) verify([8], [8,8]) verify([1,8,1], [1,8,1,1,8,1]) verify([5,2,2,5], [5,2,2,5,5,2,2,5]) verify([5,5,2,2], [5,5,2,2,5,5,2,2]) # repeated twice verify([1, 2, 3], [1,2,3,1,2,3,1,2,3]) verify([1,2], [1,2,1,2,1,2]) verify([1,1,2], [1,1,2,1,1,2,1,1,2]) verify([5,8,8], [5,8,8,5,8,8,5,8,8]) verify([8], [8,8,8]) verify([1,8,1], [1,8,1,1,8,1,1,8,1]) verify([5,2,2,5], [5,2,2,5,5,2,2,5,5,2,2,5]) verify([5,5,2,2], [5,5,2,2,5,5,2,2,5,5,2,2]) ``` which you can run here: <https://repl.it/EthR/0> And some Java test code for you which you can run here <https://www.compilejava.net/> ``` import java.util.*; public class HelloWorld { public static <T> T[] reduce(T[] duplicated) { return duplicated; // implement me! 
} // arguments are passed using the text field below this editor public static void main(String[] args) { // should return the same verify(new Integer[]{1, 2, 3}, new Integer[]{1,2,3}); verify(new Integer[]{1,2}, new Integer[]{1,2}); verify(new Integer[]{1,1,2}, new Integer[]{1,1,2}); verify(new Integer[]{5,8,8}, new Integer[]{5,8,8}); verify(new Integer[]{8}, new Integer[]{8}); verify(new Integer[]{1,8,1}, new Integer[]{1,8,1}); verify(new Integer[]{5,2,2,5}, new Integer[]{5,2,2,5}); verify(new Integer[]{5,5,2,2}, new Integer[]{5,5,2,2}); // repeated only once verify(new Integer[]{1, 2, 3}, new Integer[]{1,2,3,1,2,3}); verify(new Integer[]{1,2}, new Integer[]{1,2,1,2}); verify(new Integer[]{1,1,2}, new Integer[]{1,1,2,1,1,2}); verify(new Integer[]{5,8,8}, new Integer[]{5,8,8,5,8,8}); verify(new Integer[]{8}, new Integer[]{8,8}); verify(new Integer[]{1,8,1}, new Integer[]{1,8,1,1,8,1}); verify(new Integer[]{5,2,2,5}, new Integer[]{5,2,2,5,5,2,2,5}); verify(new Integer[]{5,5,2,2}, new Integer[]{5,5,2,2,5,5,2,2}); // repeated twice verify(new Integer[]{1, 2, 3}, new Integer[]{1,2,3,1,2,3,1,2,3}); verify(new Integer[]{1,2}, new Integer[]{1,2,1,2,1,2}); verify(new Integer[]{1,1,2}, new Integer[]{1,1,2,1,1,2,1,1,2}); verify(new Integer[]{5,8,8}, new Integer[]{5,8,8,5,8,8,5,8,8}); verify(new Integer[]{8}, new Integer[]{8,8,8}); verify(new Integer[]{1,8,1}, new Integer[]{1,8,1,1,8,1,1,8,1}); verify(new Integer[]{5,2,2,5}, new Integer[]{5,2,2,5,5,2,2,5,5,2,2,5}); verify(new Integer[]{5,5,2,2}, new Integer[]{5,5,2,2,5,5,2,2,5,5,2,2}); } public static <T> void verify(final T[] expected, final T[] duplicated) { if (expected == null || duplicated == null) throw new ComparisonException("Cannot be null"); final T[] result = reduce(duplicated); if (result == null) throw new ComparisonException("Cannot be null"); if (expected.length != result.length) { throw new ComparisonException("lengths do not match in " + Arrays.toString(expected) + " and " + Arrays.toString(result)); } for (int i = 0; i < expected.length; i++) { if (!result[i].equals(expected[i])) { throw new ComparisonException("Elem [" + i + "] does not match in " + Arrays.toString(expected) + " and " + Arrays.toString(result)); } } System.out.println(Arrays.toString(expected) + " passed: " + Arrays.toString(result)); } public static class ComparisonException extends RuntimeException { public ComparisonException(String message){ super(message);} } } ```
2016/12/19
[ "https://Stackoverflow.com/questions/41231316", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4058774/" ]
Not sure about "nice", but it works: ``` public static <T> T[] reduce(T[] duplicated) { int len = duplicated.length; for (int i = 1; i <= len / 2; i++) { if (len % i == 0) { if (checkFactors(i, duplicated)) { return Arrays.copyOf(duplicated, i); } } } return duplicated; } public static <T> boolean checkFactors(int factor, T[] arr) { int len = arr.length; for (int j = 1; j < len / factor; j++) { if (!rangeCompare(j * factor, factor, arr)) { return false; } } return true; } public static <T> boolean rangeCompare(int off, int len, T[] arr) { for (int i = 0; i < len; i++) { if (!arr[i].equals(arr[off + i])) { return false; } } return true; } ```
So you want to test if an array is a smaller array repeated - now if, by your definition, `bigArray.length % smallArray.length != 0` means that it is NOT the smaller array repeated, I can give you a solution. In other words: if the smaller array doesn't fit inside the bigger array an even number of times, does that mean that it is NOT the smaller array repeated?

If that's true, try this:

```
public boolean isArrayAnotherArrayRepeated(ArrayType[] bigArray, ArrayType[] smallArray) {
    if (bigArray.length % smallArray.length != 0) {
        // small doesn't fit into big an even number of times
        return false;
    }
    int numberOfTimesSmallFitsInBig = bigArray.length / smallArray.length;
    for (int i = 0; i < numberOfTimesSmallFitsInBig; i++) {
        for (int h = 0; h < smallArray.length; h++) {
            if (!smallArray[h].equals(bigArray[i * smallArray.length + h])) {
                // note: use equals() here rather than != unless you want to
                // compare references or primitive data types
                return false;
            }
        }
    }
    return true;
}
```

If this solution doesn't fit your needs, comment on my answer and I'll try to think of something else. It's late right now, though, otherwise I'd do so right now ^^

(Note that I've never done anything with Python, so I didn't bother trying to understand your code)
41,231,316
I've been trying to write this reduce method and I can't find a nice way to do it in java. I managed in python but it makes use of lots of python stuff and porting that to java seems like a real pain. Is there a more java way to do it? Here's some test code, that should show what I mean if the title wasn't clear. My python test code: ``` def reduce_(duplicated): def get_factors(n): return set(reduce(list.__add__, ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0))) factors = sorted(list(get_factors(len(duplicated)))) for factor in factors: chunks = set([tuple(duplicated[i:i + factor]) for i in xrange(0, len(duplicated), factor)]) if len(chunks) == 1: return list(chunks.pop()) return duplicated def verify(expected, duplicated): try: result = reduce_(duplicated) assert (expected == result) print expected, "passed" except AssertionError: print "expected", expected, "!=", duplicated #should return the same verify([1, 2, 3], [1,2,3]) verify([1,2], [1,2]) verify([1,1,2], [1,1,2]) verify([5,8,8], [5,8,8]) verify([8], [8]) verify([1,8,1], [1,8,1]) verify([5,2,2,5], [5,2,2,5]) verify([5,5,2,2], [5,5,2,2]) # repeated only once verify([1, 2, 3], [1,2,3,1,2,3]) verify([1,2], [1,2,1,2]) verify([1,1,2], [1,1,2,1,1,2]) verify([5,8,8], [5,8,8,5,8,8]) verify([8], [8,8]) verify([1,8,1], [1,8,1,1,8,1]) verify([5,2,2,5], [5,2,2,5,5,2,2,5]) verify([5,5,2,2], [5,5,2,2,5,5,2,2]) # repeated twice verify([1, 2, 3], [1,2,3,1,2,3,1,2,3]) verify([1,2], [1,2,1,2,1,2]) verify([1,1,2], [1,1,2,1,1,2,1,1,2]) verify([5,8,8], [5,8,8,5,8,8,5,8,8]) verify([8], [8,8,8]) verify([1,8,1], [1,8,1,1,8,1,1,8,1]) verify([5,2,2,5], [5,2,2,5,5,2,2,5,5,2,2,5]) verify([5,5,2,2], [5,5,2,2,5,5,2,2,5,5,2,2]) ``` which you can run here: <https://repl.it/EthR/0> And some Java test code for you which you can run here <https://www.compilejava.net/> ``` import java.util.*; public class HelloWorld { public static <T> T[] reduce(T[] duplicated) { return duplicated; // implement me! 
} // arguments are passed using the text field below this editor public static void main(String[] args) { // should return the same verify(new Integer[]{1, 2, 3}, new Integer[]{1,2,3}); verify(new Integer[]{1,2}, new Integer[]{1,2}); verify(new Integer[]{1,1,2}, new Integer[]{1,1,2}); verify(new Integer[]{5,8,8}, new Integer[]{5,8,8}); verify(new Integer[]{8}, new Integer[]{8}); verify(new Integer[]{1,8,1}, new Integer[]{1,8,1}); verify(new Integer[]{5,2,2,5}, new Integer[]{5,2,2,5}); verify(new Integer[]{5,5,2,2}, new Integer[]{5,5,2,2}); // repeated only once verify(new Integer[]{1, 2, 3}, new Integer[]{1,2,3,1,2,3}); verify(new Integer[]{1,2}, new Integer[]{1,2,1,2}); verify(new Integer[]{1,1,2}, new Integer[]{1,1,2,1,1,2}); verify(new Integer[]{5,8,8}, new Integer[]{5,8,8,5,8,8}); verify(new Integer[]{8}, new Integer[]{8,8}); verify(new Integer[]{1,8,1}, new Integer[]{1,8,1,1,8,1}); verify(new Integer[]{5,2,2,5}, new Integer[]{5,2,2,5,5,2,2,5}); verify(new Integer[]{5,5,2,2}, new Integer[]{5,5,2,2,5,5,2,2}); // repeated twice verify(new Integer[]{1, 2, 3}, new Integer[]{1,2,3,1,2,3,1,2,3}); verify(new Integer[]{1,2}, new Integer[]{1,2,1,2,1,2}); verify(new Integer[]{1,1,2}, new Integer[]{1,1,2,1,1,2,1,1,2}); verify(new Integer[]{5,8,8}, new Integer[]{5,8,8,5,8,8,5,8,8}); verify(new Integer[]{8}, new Integer[]{8,8,8}); verify(new Integer[]{1,8,1}, new Integer[]{1,8,1,1,8,1,1,8,1}); verify(new Integer[]{5,2,2,5}, new Integer[]{5,2,2,5,5,2,2,5,5,2,2,5}); verify(new Integer[]{5,5,2,2}, new Integer[]{5,5,2,2,5,5,2,2,5,5,2,2}); } public static <T> void verify(final T[] expected, final T[] duplicated) { if (expected == null || duplicated == null) throw new ComparisonException("Cannot be null"); final T[] result = reduce(duplicated); if (result == null) throw new ComparisonException("Cannot be null"); if (expected.length != result.length) { throw new ComparisonException("lengths do not match in " + Arrays.toString(expected) + " and " + Arrays.toString(result)); } for (int i = 0; i < expected.length; i++) { if (!result[i].equals(expected[i])) { throw new ComparisonException("Elem [" + i + "] does not match in " + Arrays.toString(expected) + " and " + Arrays.toString(result)); } } System.out.println(Arrays.toString(expected) + " passed: " + Arrays.toString(result)); } public static class ComparisonException extends RuntimeException { public ComparisonException(String message){ super(message);} } } ```
2016/12/19
[ "https://Stackoverflow.com/questions/41231316", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4058774/" ]
Not sure about "nice", but it works: ``` public static <T> T[] reduce(T[] duplicated) { int len = duplicated.length; for (int i = 1; i <= len / 2; i++) { if (len % i == 0) { if (checkFactors(i, duplicated)) { return Arrays.copyOf(duplicated, i); } } } return duplicated; } public static <T> boolean checkFactors(int factor, T[] arr) { int len = arr.length; for (int j = 1; j < len / factor; j++) { if (!rangeCompare(j * factor, factor, arr)) { return false; } } return true; } public static <T> boolean rangeCompare(int off, int len, T[] arr) { for (int i = 0; i < len; i++) { if (!arr[i].equals(arr[off + i])) { return false; } } return true; } ```
How about this? The return value is 0 = not a duplicate, 1 = same, 2 = repeated once, 3 = repeated twice, etc.

```
public int FindDuplicates(Integer[] a, Integer[] b) {
    int bigIndex = 0;
    int smallIndex = 0;
    int duplicates = 0;

    if (b.length % a.length == 0) {
        while (bigIndex < b.length) {
            // use equals(), not ==, to compare Integer objects by value
            if (a[smallIndex].equals(b[bigIndex])) {
                bigIndex++;
                smallIndex++;
                if (smallIndex == a.length) {
                    smallIndex = 0;
                    duplicates++;
                }
            } else {
                duplicates = 0;
                break;
            }
        }
    }
    return duplicates;
}
```

It's also O(n) because we only have to go through the largest array once to determine the number of duplicates :)
46,028,830
I've been fooling around with `__slots__` and searching about them a little, but I'm still confused about some specifics: I'm aware that `__slots__` generates some kind of descriptors:

```py
>>> class C:
...     __slots__ = ('x',)
...
>>> C.x
<member 'x' of 'C' objects>
>>> C.x.__get__
<method-wrapper '__get__' of member_descriptor object at 0x7f001de183a8>
>>> C.x.__get__
<method-wrapper '__get__' of member_descriptor object at 0x7f001de183a8>
```

But I was wondering: where are the values actually stored? Because the common recipe/idiom I've seen so far for descriptors is:

```py
>>> class Descr:
...
...     def __init__(self, attrname):
...         self.attrname = attrname
...
...     def __get__(self, obj, cls):
...         return obj.__dict__[self.attrname]
...
...     def __set__(self, obj, val):
...         obj.__dict__[self.attrname] = val
...
... class C:
...
...     def __init__(self, x):
...         self.x = x
```

When used in combination with `__slots__`, there are two problems:

1. Clash with names:

```py
>>> class C:
...
...     __slots__ = ('x',)
...
...     def __init__(self, x):
...         self.x = x
...
...     x = Descr('x')
...
Traceback (most recent call last)
...
ValueError: 'x' in __slots__ conflicts with class variable
```

So a workaround is to name the actual attribute '\_x'.

2. No `__dict__` (unless you explicitly add it to `__slots__`):

```py
>>> class C:
...
...     __slots__ = ('_x',)
...
...     def __init__(self, x):
...         self._x = x
...
...     x = Descr('_x')
...
>>> c = C(0)
>>> c.x
Traceback (most recent call last)
...
AttributeError: 'C' object has no attribute '__dict__'
```

So you have to use `getattr()` and `setattr()` instead. You can end up with a generic descriptor which can work with `__dict__` and `__slots__`:

```py
class WorksWithDictAndSlotsDescriptor:

    def __init__(self, attr_name):
        self.attr_name = attr_name

    def __get__(self, instance, owner):
        try:
            return instance.__dict__[self.attr_name]
        except AttributeError:
            return getattr(instance, self.attr_name)

    def __set__(self, instance, value):
        try:
            instance.__dict__[self.attr_name] = value
        except AttributeError:
            setattr(instance, self.attr_name, value)
```

(Which won't work as expected if there is both `__slots__` and `__dict__`.)

But recently I've found a way to hijack the `__get__` and `__set__` method with a wrapper:

```py
def slot_wrapper(cls, slotname, slot, descriptor):
    '''Wrapper replacing a slot descriptor with another one'''
    class InnerDescr(descriptor):

        def __get__(self, obj, cls):
            print("Hijacking __get__ method of a member-descriptor")
            return slot.__get__(obj, cls)

        def __set__(self, obj, val):
            print("Hijacking __set__ method of a member-descriptor")
            slot.__set__(obj, val)

    return InnerDescr(slotname, cls)
```

(The use case is to add type checking and data validation, plus enforcing encapsulation.)

So after the creation of a class (or before, using a metaclass) you can keep the same name for the slot and your descriptor.

Works well, but feels a little dirty... I think it may be better to implement my own slots to keep using one name for the descriptor. But I don't know how.

So here are some questions:

1. Where are the values actually stored (since there is no dict)? I was thinking it's something implemented in C and not directly accessible with Python code.
2. How can I implement a pure Python equivalent without losing performance optimization?
3. Is it preferable to stick with my wrapper?
2017/09/04
[ "https://Stackoverflow.com/questions/46028830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7770274/" ]
> Where are the values actually stored (since there is no dict)? I was thinking it's something implemented in C and not directly accessible with Python code.

Memory is allocated for the `PyObject *` pointers directly in the object itself. You can see the handling in [`Objects/typeobject.c`](https://github.com/python/cpython/blob/3.6/Objects/typeobject.c#L2621-L2652). The generated descriptors will access the memory reserved for their slot in an object of the appropriate type.

> How can I implement a pure Python equivalent without losing performance optimization?

You cannot. The closest you can get is something like extending `tuple`.

> Is it preferable to stick with my wrapper?

No. Don't name your slots the same thing as an attribute you want to be handled by some other descriptor. Doing so is like naming two non-slot descriptors the same thing; you're expressing two contradictory intentions for how you want to handle the attribute with that name.
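To illustrate the "extending `tuple`" idea, here is a minimal pure-Python sketch that stores values in tuple cells instead of a `__dict__` (class and attribute names are made up; unlike real slots, these cells are read-only):

```python
class Point(tuple):
    __slots__ = ()  # no per-instance __dict__ at all

    def __new__(cls, x, y):
        return super().__new__(cls, (x, y))

    # Each "slot" is just a descriptor reading a fixed tuple cell.
    x = property(lambda self: self[0])
    y = property(lambda self: self[1])
```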
Looking at your overall goal, I've been working on performant solutions for custom setters on member descriptors for some time now, and this is the best I've come up with so far: (tested with Anaconda 2.3.0 (Python 3.4.3) on wine, and Python 3.5.2 on linux)

**Note: This solution does not attempt to be pythonic, nor does it serve as a direct answer to the question, but rather an alternative implementation to the desired result.**

```
class A(object):
    __slots__ = (
        'attr',
    )

attrget = A.__dict__['attr'].__get__
attrset = A.__dict__['attr'].__set__

def setter( obj, val ):
    if type( val ) is int: attrset( obj, val )
    else: raise TypeError( 'int type expected, got %s'%type( val ) )

setattr( A, 'attr', property( attrget, setter ) )
# ^ this is safer than assigning via A.__dict__['attr'] as a mapping_proxy is read-only
```

**Fun Fact:** For `i = A()`, while `i.attr` is less efficient (more CPU spikes), it's actually about ~20ns faster (relative to my machine) compared to the basic member\_descriptor on average. The same also applies to `i.attr = 0` without the custom setter.

(feel free to test for yourself, timeit should work similarly (except it includes the time for the for loop). (note that my tests didn't change the value), and make sure to run multiple tests)

here's the test results for Python 3.5.2 on linux:

```
10000 iterations; threshold of min + 250ns:
________code___|_______min______|_______max______|_______avg______|_efficiency
⡇⢠⠀⠀⠀⠀⠀⠀⠀⠀⡄⠀⢰⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡀⠀⠀⠀⠀⠀⠀
⣿⣾⣴⣄⣤⣄⣄⣰⣦⣤⣧⣰⣼⣤⣤⣆⡀⡀⢀⣄⣠⣀⣴⣶⣦⣤⣤⣦⣶⣄⣄⣠⣄⣴⣤⣶⣸⣦⣤⣤⣴⣴⣴⣷⣶⣴⣦⣤⣶⣆⣤⣤⣦⣶⣤⣴⣠⣷⣤⣶⣾⣷⣤⣆
  i.mdsc = 1 | 564.964ns | 17341.983ns | 638.568ns | 88.473%
⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⡀⠀⢠⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠀⠀⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⣷⣶⣶⣴⣤⣤⣦⣶⣶⣦⣧⣼⣼⣴⣄⣦⡄⣄⣀⣄⣴⡄⣼⣾⣶⣦⣴⣧⣶⣄⣄⣴⣦⣾⣴⣴⣤⣦⣆⣶⣴⣤⣴⣷⣿⣼⣾⣦⣷⣦⣧⣾⣦⣿⣤⣴⣤⣿⣤⣧⣾⣷⣶⣧
  i.prop = 1 | 538.013ns | 8267.001ns | 624.045ns | 86.214%


10000 iterations; threshold of min + 175ns:
____code___|_______min______|_______max______|_______avg______|_efficiency
⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡆
⣇⣴⣠⣤⣠⣄⣤⣄⣀⣀⣀⡀⣀⣀⣀⣄⣠⣠⣄⣦⣤⣤⣄⣤⣤⣠⣤⣧⣤⣤⣠⣤⣤⣤⣤⣤⣤⣼⣤⣤⣤⣶⣤⣶⣦⣤⣀⣄⣤⣤⣤⣤⣤⣤⣤⣤⣤⣶⣦⣷⣤⣶⣄⣧
  i.mdsc | 364.962ns | 27579.023ns | 411.621ns | 88.665%
⡇⢠⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠀⠀⠀⠀⠀
⣷⣾⣦⣦⣴⣠⣿⣦⣠⣠⣄⣀⣄⡄⣠⣴⣠⣤⣴⣴⣦⣼⣤⣦⣤⣤⣤⣧⣴⣶⣦⣶⣶⣶⣶⣶⣦⣶⣶⣶⣷⣿⣷⣿⣷⣾⣶⣶⣶⣾⣾⣾⣶⣶⣴⣶⣴⣾⣷⣿⣿⣷⣶⣶
  i.prop | 341.039ns | 2000.015ns | 400.054ns | 85.248%
```

Finally, if you downvote this answer, please explain why. (please do not downvote if your tests do not compare to mine)

^ my results are just an example to show a small boost in performance, and should not be taken at face value.
65,767,823
I am new to using the terminal on a Mac. When I type any python3 command, it only checks the user's folder on my machine. How can I change the directory to open a folder in the Users section and check for the .py file there?
2021/01/18
[ "https://Stackoverflow.com/questions/65767823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15026906/" ]
Access the desired path using the `cd` command:

```
cd path/to/access
```

Then you can run the python3 command to run your script.
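For example (the path here is hypothetical):

```
cd /Users/yourname/scripts
python3 myscript.py
```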
If you know the name of the folder you want to check, you can change the current working directory from within Python using `os.chdir`: <https://docs.python.org/3/library/os.html#os.chdir>

In that case it doesn't matter where you run your Python script from.
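A minimal sketch (the folder name is just an example):

```python
import os

os.chdir('/Users/yourname/scripts')  # switch the working directory
print(os.getcwd())                   # confirm where we are now
```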
65,767,823
I am new to using the terminal on a Mac. When I type any python3 command, it only checks the user's folder on my machine. How can I change the directory to open a folder in the Users section and check for the .py file there?
2021/01/18
[ "https://Stackoverflow.com/questions/65767823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15026906/" ]
Access the desired path using the `cd` command:

```
cd path/to/access
```

Then you can run the python3 command to run your script.
You can use the **os** module, which ships with Python. To change the current directory inside the script, you can simply write:

```
import os
os.chdir("newDir/")
```

If you want to get all the directories in the user's current location, you can use this method:

```
os.listdir()
```

Hope that helps you out.
65,767,823
I am new to using the terminal on a Mac. When I type any python3 command, it only checks the user's folder on my machine. How can I change the directory to open a folder in the Users section and check for the .py file there?
2021/01/18
[ "https://Stackoverflow.com/questions/65767823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15026906/" ]
Access the desired path using the `cd` command:

```
cd path/to/access
```

Then you can run the python3 command to run your script.
You have to use the `cd` command to change to your folder first.
62,292,262
I am making a simple program in Python, and I want to import, into one file, a class that I made in another file. My code is the following:

```
#file cPoint.py
import math

class Point:
    def __init__(self,x,y):
        self.x=x
        self.y=y

    def printC(self):
        print ("(",self.x,",",self.y,")")
```

and my other file is:

```
#file pointTest
import cPoint

def main():
    p=Point(3,4)
    p.printC()

if __name__=="__main__":
    main()
```

The problem is that when I run my program from the file pointTest (by the way, both of them are in the same directory), I get the following error:

```
File "E:\python\pointTest.py", line 15, in <module>
  main()

File "E:\python\pointTest.py", line 11, in main
  p=Point(3,4)

NameError: name 'Point' is not defined
```

When I test my program all in the same file it works, and also when I call the class from the command prompt by using:

```
import cPoint
p=Point(3,4)
p.printC()

( 3 , 4 )
```

What am I missing? I am using WinPython in Anaconda.
2020/06/09
[ "https://Stackoverflow.com/questions/62292262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1960266/" ]
Create another row above your column title with the following formula. This will isolate the number, which can then be sorted according to its numerical value (tested, see screenshot below): ``` =LEFT(RIGHT(A3;LEN(A3)-16);LEN(A3)-21) ``` [![enter image description here](https://i.stack.imgur.com/VQm16.png)](https://i.stack.imgur.com/VQm16.png)
Try putting a zero in front of the single digits. Pad with more zeros if you have higher numbers. 56g\_flux\_data39(01)1992, 56g\_flux\_data39(02)1992, 56g\_flux\_data39(03)1992, all the way to 56g\_flux\_data39(11)1992
61,169,193
I have Python code that prints Arabic text to the VS Code terminal, but the output in the terminal appears in an "unreadable" format.

code:

```
print('مرحبا')
```

output:

```
┘à╪▒╪¡╪¿╪º
```

How can I solve this issue?
2020/04/12
[ "https://Stackoverflow.com/questions/61169193", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3261292/" ]
To have a `String` variable formatted as {"Head1","Head2","Head3";1,2,3;4,5,6;7,8,9}, you can do as follows:

```
Dim sValue As String, r As Range
For Each r In ThisWorkbook.Names("MyRange").RefersToRange.Rows
    sValue = sValue & Join(Application.Transpose(Application.Transpose(r.Value)), ",") & ";"
Next
sValue = "{" & Left(sValue, Len(sValue) - 1) & "}"
```

While storing a named range in an array is much simpler (and, I'd say, easier to use):

```
Dim sValue As Variant
sValue = ThisWorkbook.Names("MyRange").RefersToRange.Value
```
I think you are looking for

```
sValue = ThisWorkbook.Names("MyRange").Name
```

To find all names on the ActiveSheet:

```
Sub test_names()

Dim wsName As String
wsName = ActiveSheet.Name

Dim nameRange As Variant

For Each nameRange In ThisWorkbook.Names
    Set rngName = Range(nameRange)
    wsParentName = rngName.Parent.CodeName
    If (wsParentName = wsName) Then
        Debug.Print "Found range " & nameRange.Name
    End If
Next nameRange

End Sub
```
16,569,784
I'm using a *private* view in my Django project for an AJAX request.

```python
def HereIsSomeJSON(request, label):
    if not request.method == "POST":
        raise PermissionDenied

    # Here is the job of my AJAX, basically feeding a JSON
    json = {...}

    return HttpResponse(json, "application/json")
```

Using JavaScript, I make the AJAX request with jQuery like so:

```js
function FeedMeWithJSON() {
    // Django needs its Cross-Site Request Forgery token to welcome POST data
    oPostDatas = {
        'csrfmiddlewaretoken': '{{ csrf_token }}'
    };

    jQuery.post("/url/to/HereIsSomeJSON", oPostDatas, function(oData, sStatus) {
        // Here is the job to be done with the fetched JSON
    }, "json");
}
```

Everything works just fine thanks to the `request.method` verification I do in the view. The user isn't able to access my view manually (by entering my AJAX URL in his browser). However, as I will need more AJAX views, I was wondering if I was doing the right thing. So I thought of creating a custom Django decorator which I could use above every one of my AJAX views.

Is this a good way of protecting my *private* views? And if so, how do I do it?

Thanks,

Edit
====

---

Apparently, this was not clear enough.

I am using a Django view to do an AJAX request. But I don't want the user to be able to type in the URL to read database contents. I know one could always use `curl` or something similar to send POST data and thus bypass this check, even though he would have to send the right `{% csrf_token %}`. Plus, in *the near future* the *login* feature will be implemented and I will add the `@login_required` decorator.

Thanks,
2013/05/15
[ "https://Stackoverflow.com/questions/16569784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1824222/" ]
Your approach of requiring POST for your ajax views is basically OK, and there is an [existing decorator](https://docs.djangoproject.com/en/dev/topics/http/decorators/) to handle it: ``` from django.views.decorators.http import require_POST @require_POST def my_view(request): # I can assume now that only POST requests make it this far # ... pass ``` Also, there is an easier way to handle adding CSRF tokens to your jQuery AJAX calls, documented [here](https://docs.djangoproject.com/en/dev/ref/contrib/csrf/#ajax). The basic idea is that you read the CSRF token from the cookie, and use the `beforeSend` option in `$.ajaxSetup` to add it to your all of your `$.ajax` calls (including shortcut syntax like `$.post`). Since this code isn't relying on template variables, it doesn't have to be in an inline `<script>` tag. ``` // using jQuery function getCookie(name) { var cookieValue = null; if (document.cookie && document.cookie != '') { var cookies = document.cookie.split(';'); for (var i = 0; i < cookies.length; i++) { var cookie = jQuery.trim(cookies[i]); // Does this cookie string begin with the name we want? if (cookie.substring(0, name.length + 1) == (name + '=')) { cookieValue = decodeURIComponent(cookie.substring(name.length + 1)); break; } } } return cookieValue; } var csrftoken = getCookie('csrftoken'); function csrfSafeMethod(method) { // these HTTP methods do not require CSRF protection return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method)); } $.ajaxSetup({ crossDomain: false, // obviates need for sameOrigin test beforeSend: function(xhr, settings) { if (!csrfSafeMethod(settings.type)) { xhr.setRequestHeader("X-CSRFToken", csrftoken); } } }); ```
As [@armonge](https://stackoverflow.com/users/407762/armonge) said, just checking for the POST method doesn't provide any protection. Suppose you need some checks: that the user is registered, and maybe has some permissions or required properties. For this purpose, a decorator like this can be used (it probably needs to be customized):

```
def apply_permissions(view_func):
    def _wrapped_view(request, *args, **kwargs):
        # it is possible to add some other checks, that return booleans
        # or do it in a separate `if` statement
        # for example, check for some user permissions or properties
        permissions = [
            request.is_ajax(),
            request.method == "POST",
            request.user.is_authenticated()
        ]
        if not all(permissions):
            raise PermissionDenied
        return view_func(request, *args, **kwargs)
    return _wrapped_view

@apply_permissions
def HereIsSomeJSON(request, label):
    # your code here without any permissions checks
    # Here is the job of my AJAX, basically feeding a JSON
```
16,569,784
I'm using a *private* view in my Django project for an AJAX request.

```python
def HereIsSomeJSON(request, label):
    if not request.method == "POST":
        raise PermissionDenied

    # Here is the job of my AJAX, basically feeding a JSON
    json = {...}

    return HttpResponse(json, "application/json")
```

Using JavaScript, I make the AJAX request with jQuery like so:

```js
function FeedMeWithJSON() {
    // Django needs its Cross-Site Request Forgery token to welcome POST data
    oPostDatas = {
        'csrfmiddlewaretoken': '{{ csrf_token }}'
    };

    jQuery.post("/url/to/HereIsSomeJSON", oPostDatas, function(oData, sStatus) {
        // Here is the job to be done with the fetched JSON
    }, "json");
}
```

Everything works just fine thanks to the `request.method` verification I do in the view. The user isn't able to access my view manually (by entering my AJAX URL in his browser). However, as I will need more AJAX views, I was wondering if I was doing the right thing. So I thought of creating a custom Django decorator which I could use above every one of my AJAX views.

Is this a good way of protecting my *private* views? And if so, how do I do it?

Thanks,

Edit
====

---

Apparently, this was not clear enough.

I am using a Django view to do an AJAX request. But I don't want the user to be able to type in the URL to read database contents. I know one could always use `curl` or something similar to send POST data and thus bypass this check, even though he would have to send the right `{% csrf_token %}`. Plus, in *the near future* the *login* feature will be implemented and I will add the `@login_required` decorator.

Thanks,
2013/05/15
[ "https://Stackoverflow.com/questions/16569784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1824222/" ]
Your approach of requiring POST for your ajax views is basically OK, and there is an [existing decorator](https://docs.djangoproject.com/en/dev/topics/http/decorators/) to handle it: ``` from django.views.decorators.http import require_POST @require_POST def my_view(request): # I can assume now that only POST requests make it this far # ... pass ``` Also, there is an easier way to handle adding CSRF tokens to your jQuery AJAX calls, documented [here](https://docs.djangoproject.com/en/dev/ref/contrib/csrf/#ajax). The basic idea is that you read the CSRF token from the cookie, and use the `beforeSend` option in `$.ajaxSetup` to add it to your all of your `$.ajax` calls (including shortcut syntax like `$.post`). Since this code isn't relying on template variables, it doesn't have to be in an inline `<script>` tag. ``` // using jQuery function getCookie(name) { var cookieValue = null; if (document.cookie && document.cookie != '') { var cookies = document.cookie.split(';'); for (var i = 0; i < cookies.length; i++) { var cookie = jQuery.trim(cookies[i]); // Does this cookie string begin with the name we want? if (cookie.substring(0, name.length + 1) == (name + '=')) { cookieValue = decodeURIComponent(cookie.substring(name.length + 1)); break; } } } return cookieValue; } var csrftoken = getCookie('csrftoken'); function csrfSafeMethod(method) { // these HTTP methods do not require CSRF protection return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method)); } $.ajaxSetup({ crossDomain: false, // obviates need for sameOrigin test beforeSend: function(xhr, settings) { if (!csrfSafeMethod(settings.type)) { xhr.setRequestHeader("X-CSRFToken", csrftoken); } } }); ```
I figured out that requiring POST is not enough; one could possibly cheat using `curl`, for example. That is why a custom decorator is - in my opinion - the way to go.

```py
from django.http import HttpResponseBadRequest

def require_AJAX(function):
    """Return a bad request instance if the view is not using AJAX

    function -- the view
    """
    def wrap(request, *args, **kwargs):
        if not request.is_ajax():
            return HttpResponseBadRequest()
        return function(request, *args, **kwargs)
    wrap.__doc__ = function.__doc__
    wrap.__name__ = function.__name__
    return wrap
```

This decorator actually does less than @stalk's, but this is what I needed.
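For completeness, applying it would look something like this (the payload here is a made-up example, not the question's actual data):

```py
from django.http import HttpResponse

@require_AJAX
def HereIsSomeJSON(request, label):
    payload = '{"label": "%s"}' % label  # hypothetical JSON payload
    return HttpResponse(payload, "application/json")
```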
5,523,279
It seems that I have some trouble understanding the semantics of the SUID bit; perhaps someone can help me clarify the situation. My understanding of the semantics of the SUID bit is as follows: when I set the SUID bit on a file, then the file will be executed as the owner of the file and not as the caller of the file. So to test this behavior I wrote the following Python script:

```
#!/usr/bin/python3 -O

import os

def main():
    print('Real UserID: %d' % os.getuid())
    print('Effective UserID: %d' % os.geteuid())

if __name__ == '__main__':
    main()
```

After that I created a user named "testuser" with the corresponding group "testuser" and adjusted the file permissions (chown testuser file, chgrp testuser file, chmod u+s,g+x file). Next I added my main user to the "testuser" group so that I can execute the file as a member of the group. After all that, the file permissions looked like this:

```
-rwsr-xr-- 1 testuser testuser 168 2011-04-02 13:35 procred.py*
```

So when I am logged in as the testuser, the script produces the output:

```
Real UserID: 1001
Effective UserID: 1001
```

...and when I run the script as my main user, the script outputs:

```
Real UserID: 1000
Effective UserID: 1000
```

Now, as I understand it, the script should have run as the user with uid 1001 (the owner of the file) in the latter execution. Am I getting the whole concept wrong, or where is my mistake?
2011/04/02
[ "https://Stackoverflow.com/questions/5523279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/350403/" ]
Setting the SUID bit on a \*.py file does not help in any way here since the script is executed by the Python interpreter which must be set SUID in this case. Using 'sudo' is your better friend here.
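For example, a run under sudo might look like this (using the question's usernames; it assumes a suitable sudoers entry exists):

```
sudo -u testuser /path/to/procred.py
```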
Setting SUID does not work for scripts, because the kernel sees the #! (shebang - magic number 0x23 0x21 - man magic) and drops the privileges before calling the interpreter /usr/bin/python with the script as input. A way around is setting the python interpreter SUID root and add functionality to change privileges to the user owning the script before executing the script, in case SUID bit is set. Doing this in a naive way will impose security problems. How to do it in a smart way can be found here: <http://perldoc.perl.org/perlsec.html> Additional Links: * <http://www.in-ulm.de/~mascheck/various/shebang/> * <https://unix.stackexchange.com/questions/364/allow-setuid-on-shell-scripts>
5,523,279
It seems that I have some trouble understanding the semantics of the SUID bit; perhaps someone can help me clarify the situation. My understanding of the semantics of the SUID bit is as follows: when I set the SUID bit on a file, then the file will be executed as the owner of the file and not as the caller of the file. So to test this behavior I wrote the following Python script:

```
#!/usr/bin/python3 -O

import os

def main():
    print('Real UserID: %d' % os.getuid())
    print('Effective UserID: %d' % os.geteuid())

if __name__ == '__main__':
    main()
```

After that I created a user named "testuser" with the corresponding group "testuser" and adjusted the file permissions (chown testuser file, chgrp testuser file, chmod u+s,g+x file). Next I added my main user to the "testuser" group so that I can execute the file as a member of the group. After all that, the file permissions looked like this:

```
-rwsr-xr-- 1 testuser testuser 168 2011-04-02 13:35 procred.py*
```

So when I am logged in as the testuser, the script produces the output:

```
Real UserID: 1001
Effective UserID: 1001
```

...and when I run the script as my main user, the script outputs:

```
Real UserID: 1000
Effective UserID: 1000
```

Now, as I understand it, the script should have run as the user with uid 1001 (the owner of the file) in the latter execution. Am I getting the whole concept wrong, or where is my mistake?
2011/04/02
[ "https://Stackoverflow.com/questions/5523279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/350403/" ]
Setting the SUID bit on a \*.py file does not help in any way here since the script is executed by the Python interpreter which must be set SUID in this case. Using 'sudo' is your better friend here.
I found this [link](http://selliott.org/python/suid-python.c) on the web. You can set SUID on this wrapper and use it instead, but I personally prefer the sudo solution. ;)
5,523,279
It seems that I have some trouble understanding the semantics of the SUID bit; perhaps someone can help me clarify the situation. My understanding of the semantics of the SUID bit is as follows: when I set the SUID bit on a file, then the file will be executed as the owner of the file and not as the caller of the file. So to test this behavior I wrote the following Python script:

```
#!/usr/bin/python3 -O

import os

def main():
    print('Real UserID: %d' % os.getuid())
    print('Effective UserID: %d' % os.geteuid())

if __name__ == '__main__':
    main()
```

After that I created a user named "testuser" with the corresponding group "testuser" and adjusted the file permissions (chown testuser file, chgrp testuser file, chmod u+s,g+x file). Next I added my main user to the "testuser" group so that I can execute the file as a member of the group. After all that, the file permissions looked like this:

```
-rwsr-xr-- 1 testuser testuser 168 2011-04-02 13:35 procred.py*
```

So when I am logged in as the testuser, the script produces the output:

```
Real UserID: 1001
Effective UserID: 1001
```

...and when I run the script as my main user, the script outputs:

```
Real UserID: 1000
Effective UserID: 1000
```

Now, as I understand it, the script should have run as the user with uid 1001 (the owner of the file) in the latter execution. Am I getting the whole concept wrong, or where is my mistake?
2011/04/02
[ "https://Stackoverflow.com/questions/5523279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/350403/" ]
Setting SUID does not work for scripts, because the kernel sees the #! (shebang - magic number 0x23 0x21 - man magic) and drops the privileges before calling the interpreter /usr/bin/python with the script as input. A way around is setting the python interpreter SUID root and add functionality to change privileges to the user owning the script before executing the script, in case SUID bit is set. Doing this in a naive way will impose security problems. How to do it in a smart way can be found here: <http://perldoc.perl.org/perlsec.html> Additional Links: * <http://www.in-ulm.de/~mascheck/various/shebang/> * <https://unix.stackexchange.com/questions/364/allow-setuid-on-shell-scripts>
I found this [link](http://selliott.org/python/suid-python.c) on the web. You can set SUID on this wrapper and use it instead, but I personally prefer the sudo solution. ;)
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and not able to get out of it. I have a data frame like:

```
Image     RGB                                          max_1    max_2    max_3
file1     [[224,43,234][22,5,224][234,254,220]]        234      224      254
file2     [[22,143,113][221,124,224][234,254,123]]     143      224      254
file3     [[44,45,2][2,5,4][34,254,220]]               45       5        254
file4     [[224,243,34][22,5,24][24,25,20]]            243      24       25
file5     [[210,13,34][22,5,224][234,254,220]]         210      224      254
```

I tried `np.max()`, but it gave me unexpected results: for the first row it gave me `254` (the maximum over the whole array), and so on. I'm expecting the output as the columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
You can do something like this perhaps: ``` file1 = [[224,43,234],[22,5,224],[234,254,220]] for idx, inner_list in enumerate(file1): print('max_'+str(idx+1)+' : '+str(max(inner_list))) ```
You said you have a data frame, so I assume it's a `pandas` `DataFrame` object. In which case, you can use a list comprehension to take the max from each sub-list in the list, and assign each element to a new column (this loop isn't elegant, but it will work):

```
df['max_colors'] = df['RGB'].apply(lambda x: [np.max(color) for color in x])

for i in range(3):
    df['max_' + str(i + 1)] = df['max_colors'].apply(lambda x: x[i])  # columns max_1..max_3
```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and not able to get out of it. I have a data frame like:

```
Image     RGB                                          max_1    max_2    max_3
file1     [[224,43,234][22,5,224][234,254,220]]        234      224      254
file2     [[22,143,113][221,124,224][234,254,123]]     143      224      254
file3     [[44,45,2][2,5,4][34,254,220]]               45       5        254
file4     [[224,243,34][22,5,24][24,25,20]]            243      24       25
file5     [[210,13,34][22,5,224][234,254,220]]         210      224      254
```

I tried `np.max()`, but it gave me unexpected results: for the first row it gave me `254` (the maximum over the whole array), and so on. I'm expecting the output as the columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
I'll assume that you want the max values of R, G and B respectively. If you want this, then here is one way to do it:

```
a = np.array([
[224,43,234],
[22,5,224],
[234,254,220]])

r_max = a.T[0].max()
g_max = a.T[1].max()
b_max = a.T[2].max()
```
You can do something like this perhaps: ``` file1 = [[224,43,234],[22,5,224],[234,254,220]] for idx, inner_list in enumerate(file1): print('max_'+str(idx+1)+' : '+str(max(inner_list))) ```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and not able to get out of it. I have a data frame like:

```
Image     RGB                                          max_1    max_2    max_3
file1     [[224,43,234][22,5,224][234,254,220]]        234      224      254
file2     [[22,143,113][221,124,224][234,254,123]]     143      224      254
file3     [[44,45,2][2,5,4][34,254,220]]               45       5        254
file4     [[224,243,34][22,5,24][24,25,20]]            243      24       25
file5     [[210,13,34][22,5,224][234,254,220]]         210      224      254
```

I tried `np.max()`, but it gave me unexpected results: for the first row it gave me `254` (the maximum over the whole array), and so on. I'm expecting the output as the columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
Using a list comprehension:

```
a = np.array([[224,43,234],
              [22,5,224],
              [234,254,220]])

print([x.max() for x in a])
```

**OUTPUT**:

```
[234, 224, 254]
```
Solved the problem like this. Thanks for all the answers.

```
df['max_1'] = 0
df['max_2'] = 0
df['max_3'] = 0

for i in range(5):
    df['max_1'][i] = df['RGB'][i][0].max()
    df['max_2'][i] = df['RGB'][i][1].max()
    df['max_3'][i] = df['RGB'][i][2].max()
```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and not able to get out of it. I have a data frame like:

```
Image     RGB                                          max_1    max_2    max_3
file1     [[224,43,234][22,5,224][234,254,220]]        234      224      254
file2     [[22,143,113][221,124,224][234,254,123]]     143      224      254
file3     [[44,45,2][2,5,4][34,254,220]]               45       5        254
file4     [[224,243,34][22,5,24][24,25,20]]            243      24       25
file5     [[210,13,34][22,5,224][234,254,220]]         210      224      254
```

I tried `np.max()`, but it gave me unexpected results: for the first row it gave me `254` (the maximum over the whole array), and so on. I'm expecting the output as the columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
Another way: ``` import numpy as np a=np.array([[1,2,3],[11,12,13],[21,22,23]]) print(a) maxInRows = np.amax(a, axis=1) print('Max value of every Row: ', maxInRows) ```
You said you have a data frame, so I assume it's a `pandas` `DataFrame` object. In which case, you can use a list comprehension to take the max from each sub-list in the list, and assign each element to a new column (this loop isn't elegant, but it will work):

```
df['max_colors'] = df['RGB'].apply(lambda x: [np.max(color) for color in x])

for i in range(3):
    df['max_' + str(i + 1)] = df['max_colors'].apply(lambda x: x[i])  # columns max_1..max_3
```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and not able to get out of it. I have a data frame like:

```
Image     RGB                                          max_1    max_2    max_3
file1     [[224,43,234][22,5,224][234,254,220]]        234      224      254
file2     [[22,143,113][221,124,224][234,254,123]]     143      224      254
file3     [[44,45,2][2,5,4][34,254,220]]               45       5        254
file4     [[224,243,34][22,5,24][24,25,20]]            243      24       25
file5     [[210,13,34][22,5,224][234,254,220]]         210      224      254
```

I tried `np.max()`, but it gave me unexpected results: for the first row it gave me `254` (the maximum over the whole array), and so on. I'm expecting the output as the columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
I'll assume that you want the max values of R, G and B respectively. If you want this, then here is one way to do it:

```
a = np.array([
[224,43,234],
[22,5,224],
[234,254,220]])

r_max = a.T[0].max()
g_max = a.T[1].max()
b_max = a.T[2].max()
```
Another way: ``` import numpy as np a=np.array([[1,2,3],[11,12,13],[21,22,23]]) print(a) maxInRows = np.amax(a, axis=1) print('Max value of every Row: ', maxInRows) ```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and not able to get out of it. I have a data frame like:

```
Image     RGB                                          max_1    max_2    max_3
file1     [[224,43,234][22,5,224][234,254,220]]        234      224      254
file2     [[22,143,113][221,124,224][234,254,123]]     143      224      254
file3     [[44,45,2][2,5,4][34,254,220]]               45       5        254
file4     [[224,243,34][22,5,24][24,25,20]]            243      24       25
file5     [[210,13,34][22,5,224][234,254,220]]         210      224      254
```

I tried `np.max()`, but it gave me unexpected results: for the first row it gave me `254` (the maximum over the whole array), and so on. I'm expecting the output as the columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
I'll assume that you want the max values of R, G and B respectively. If you want this, then here is one way to do it:

```
a = np.array([
[224,43,234],
[22,5,224],
[234,254,220]])

r_max = a.T[0].max()
g_max = a.T[1].max()
b_max = a.T[2].max()
```
You said you have a data frame, so I assume it's a `pandas` `DataFrame` object. In which case, you can use a list comprehension to take the max from each sub-list in the list, and assign each element to a new column (this loop isn't elegant, but it will work):

```
df['max_colors'] = df['RGB'].apply(lambda x: [np.max(color) for color in x])

for i in range(3):
    df['max_' + str(i + 1)] = df['max_colors'].apply(lambda x: x[i])  # columns max_1..max_3
```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and not able to get out of it. I have a data frame like:

```
Image     RGB                                          max_1    max_2    max_3
file1     [[224,43,234][22,5,224][234,254,220]]        234      224      254
file2     [[22,143,113][221,124,224][234,254,123]]     143      224      254
file3     [[44,45,2][2,5,4][34,254,220]]               45       5        254
file4     [[224,243,34][22,5,24][24,25,20]]            243      24       25
file5     [[210,13,34][22,5,224][234,254,220]]         210      224      254
```

I tried `np.max()`, but it gave me unexpected results: for the first row it gave me `254` (the maximum over the whole array), and so on. I'm expecting the output as the columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
Another way: ``` import numpy as np a=np.array([[1,2,3],[11,12,13],[21,22,23]]) print(a) maxInRows = np.amax(a, axis=1) print('Max value of every Row: ', maxInRows) ```
Solved the problem like this. Thanks for all the answers.

```
df['max_1'] = 0
df['max_2'] = 0
df['max_3'] = 0

for i in range(5):
    df['max_1'][i] = df['RGB'][i][0].max()
    df['max_2'][i] = df['RGB'][i][1].max()
    df['max_3'][i] = df['RGB'][i][2].max()
```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and not able to get out of it. I have a data frame like:

```
Image     RGB                                          max_1    max_2    max_3
file1     [[224,43,234][22,5,224][234,254,220]]        234      224      254
file2     [[22,143,113][221,124,224][234,254,123]]     143      224      254
file3     [[44,45,2][2,5,4][34,254,220]]               45       5        254
file4     [[224,243,34][22,5,24][24,25,20]]            243      24       25
file5     [[210,13,34][22,5,224][234,254,220]]         210      224      254
```

I tried `np.max()`, but it gave me unexpected results: for the first row it gave me `254` (the maximum over the whole array), and so on. I'm expecting the output as the columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
I'll assume that you want the max values of R, G and B respectively. If you want this, then here is one way to do it:

```
a = np.array([
[224,43,234],
[22,5,224],
[234,254,220]])

r_max = a.T[0].max()
g_max = a.T[1].max()
b_max = a.T[2].max()
```
Solved the problem like this. Thanks for all the answers. ``` df['max_1'] = 0 df['max_2'] = 0 df['max_3'] = 0 for i in range(5): df['max_1'][i] = df['RGB'][i][0].max() df['max_2'][i] = df['RGB'][i][1].max() df['max_3'][i] = df['RGB'][i][2].max() ```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and can't get past it. I have a data frame like: ``` Image RGB max_1 max_2 max_3 file1 [[224,43,234][22,5,224][234,254,220]] 234 224 254 file2 [[22,143,113][221,124,224][234,254,123]] 143 224 254 file3 [[44,45,2][2,5,4][34,254,220]] 45 5 254 file4 [[224,243,34][22,5,24][24,25,20]] 243 24 25 file5 [[210,13,34][22,5,224][234,254,220]] 210 224 254 ``` I tried `np.max()` but it gave me unexpected results: for the first row it gave me the output `254`, and so on. I'm expecting the output as columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
Using `list-comprehension`: ``` a = np.array([[224,43,234], [22,5,224], [234,254,220]]) print([x.max() for x in a]) ``` **OUTPUT**: ``` [234, 224, 254] ```
Another way: ``` import numpy as np a=np.array([[1,2,3],[11,12,13],[21,22,23]]) print(a) maxInRows = np.amax(a, axis=1) print('Max value of every Row: ', maxInRows) ```
55,683,072
Actually, I'm very new to Python and working on an image problem statement. I'm stuck on a problem and can't get past it. I have a data frame like: ``` Image RGB max_1 max_2 max_3 file1 [[224,43,234][22,5,224][234,254,220]] 234 224 254 file2 [[22,143,113][221,124,224][234,254,123]] 143 224 254 file3 [[44,45,2][2,5,4][34,254,220]] 45 5 254 file4 [[224,243,34][22,5,24][24,25,20]] 243 24 25 file5 [[210,13,34][22,5,224][234,254,220]] 210 224 254 ``` I tried `np.max()` but it gave me unexpected results: for the first row it gave me the output `254`, and so on. I'm expecting the output as columns max\_1, max\_2, and max\_3.
2019/04/15
[ "https://Stackoverflow.com/questions/55683072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7721819/" ]
Using `list-comprehension`: ``` a = np.array([[224,43,234], [22,5,224], [234,254,220]]) print([x.max() for x in a]) ``` **OUTPUT**: ``` [234, 224, 254] ```
You can do something like this perhaps: ``` file1 = [[224,43,234],[22,5,224],[234,254,220]] for idx, inner_list in enumerate(file1): print('max_'+str(idx+1)+' : '+str(max(inner_list))) ```
59,333,904
(I'm very new to both Python and Stack Overflow.) ``` def A(): def B(): print("I'm B") A.B = B A() A.B() ``` output: ``` "I'm B" ``` This works well. What I want is to put that in a class like this (doesn't work, I just tried): ``` class Student: def A(self): def B(): print("I'm B") self.A.B = B ``` I have no idea how to write the class or how to call the sub-function in the class.
2019/12/14
[ "https://Stackoverflow.com/questions/59333904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12386013/" ]
You don't need to reference *self* because the inner function *B* is defined there. It should be like this: ``` class Student: def A(self): def B(): print("I'm B") B() ```
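A quick usage sketch of the class above (my addition, just to show the call pattern) — `B` lives and dies inside `A`, so from outside you only ever call `A`:

```python
class Student:
    def A(self):
        def B():
            print("I'm B")
        B()  # B only exists inside A's scope, so it must be called here

s = Student()
s.A()  # prints: I'm B
```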
I never use classes, but could you do it this way? ``` class A: def __call__(self): # lets you call instances like a function def B(): print("i am B") B() call_A = A() # create an instance call_A() # call it like a function ```
59,333,904
(I'm very new to both Python and Stack Overflow.) ``` def A(): def B(): print("I'm B") A.B = B A() A.B() ``` output: ``` "I'm B" ``` This works well. What I want is to put that in a class like this (doesn't work, I just tried): ``` class Student: def A(self): def B(): print("I'm B") self.A.B = B ``` I have no idea how to write the class or how to call the sub-function in the class.
2019/12/14
[ "https://Stackoverflow.com/questions/59333904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12386013/" ]
Python functions are `first-class objects`. [What is first class function in Python](https://stackoverflow.com/questions/27392402/what-is-first-class-function-in-python) So the first piece of code is perfectly valid. It just adds a property `B` to the function, which can later be called using `A.B()`. But for the second piece of code, this is invalid, as `self.A` returns a reference to `A` method of class `Student` ``` <bound method Student.A of <__main__.Student object at 0x7f5335d80828>> ``` `self.A` does not have an attribute `B`, so it returns an error ``` AttributeError: 'method' object has no attribute 'B' ``` Now a quick fix would be to assign it to `self.B` ``` class Student: def A(self): def B(): print("I'm B") self.B = B a = Student() a.A() a.B() ``` Although the above code works, it is a very bad way, as you will have to always call `A` for every object instantiated before calling `B`.
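If the goal is simply a `B` that is reachable without calling `A` first, a cleaner alternative (my sketch, not part of the answer above) is to define `B` at class level, e.g. as a static method:

```python
class Student:
    @staticmethod
    def B():
        print("I'm B")

    def A(self):
        self.B()  # A can still delegate to B

s = Student()
s.B()  # works immediately; no prior call to A() required
s.A()  # also prints: I'm B
```

This sidesteps the "must call `A` on every instance first" problem entirely.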
You don't need to reference *self* because the inner function *B* is defined there. It should be like this: ``` class Student: def A(self): def B(): print("I'm B") B() ```
59,333,904
(I'm very new to both Python and Stack Overflow.) ``` def A(): def B(): print("I'm B") A.B = B A() A.B() ``` output: ``` "I'm B" ``` This works well. What I want is to put that in a class like this (doesn't work, I just tried): ``` class Student: def A(self): def B(): print("I'm B") self.A.B = B ``` I have no idea how to write the class or how to call the sub-function in the class.
2019/12/14
[ "https://Stackoverflow.com/questions/59333904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12386013/" ]
Python functions are `first-class objects`. [What is first class function in Python](https://stackoverflow.com/questions/27392402/what-is-first-class-function-in-python) So the first piece of code is perfectly valid. It just adds a property `B` to the function, which can later be called using `A.B()`. But for the second piece of code, this is invalid, as `self.A` returns a reference to `A` method of class `Student` ``` <bound method Student.A of <__main__.Student object at 0x7f5335d80828>> ``` `self.A` does not have an attribute `B`, so it returns an error ``` AttributeError: 'method' object has no attribute 'B' ``` Now a quick fix would be to assign it to `self.B` ``` class Student: def A(self): def B(): print("I'm B") self.B = B a = Student() a.A() a.B() ``` Although the above code works, it is a very bad way, as you will have to always call `A` for every object instantiated before calling `B`.
I never use classes, but could you do it this way? ``` class A: def __call__(self): # lets you call instances like a function def B(): print("i am B") B() call_A = A() # create an instance call_A() # call it like a function ```
21,175,923
The problem I am tackling is to find the first occurrence of a node in the inorder traversal of a BST. The code I have is given below ``` def Inorder_search_recursive(node,key): if not node: return None InOrder_search_recursive(node.lChild) if node.value==key: return node InOrder_search_recursive(node.rChild) ``` This code always returns None; what's wrong with it? I think I've returned the node when I find a node with value k. Why can't Python pass this node back? Thanks in advance
2014/01/17
[ "https://Stackoverflow.com/questions/21175923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1824922/" ]
When you call yourself recursively, like this: ``` InOrder_search_recursive(node.lChild) ``` That's just a normal function call, like any other. It just calls the function and gets back a result. It doesn't automatically `return` the value from that function, or do anything else. So, you do the left-subtree search, ignore the results, then go on to check `node.value == key`, and, if that fails, you do the right-subtree search, again ignore the results, and fall off the end of the function, meaning you return `None`. To make this work, you need to `return` the value you got back. But, of course, only if it's `not None`. Also, you forgot to pass the `key` argument down to the recursive call, so you're just going to get a `TypeError`. (I'm guessing your real code doesn't have this problem, but since you didn't show us your real code, or a working example, I can't be sure…) So: ``` def Inorder_search_recursive(node, key): if not node: return None result = Inorder_search_recursive(node.lChild, key) if result is not None: return result if node.value==key: return node return Inorder_search_recursive(node.rChild, key) ``` (You don't need the `not None` check for the right-side search, because if it returns `None`, we have nothing else to try and are just going to return `None` anyway. Note also that the recursive calls must spell the function name exactly as defined — the question's mixed `InOrder_`/`Inorder_` capitalization would raise a `NameError`.)
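To see the corrected function in action, here is a minimal usage sketch. The question never shows a node class, so this `Node` with `value`/`lChild`/`rChild` attributes is assumed from the code above:

```python
class Node:
    def __init__(self, value, lChild=None, rChild=None):
        self.value = value
        self.lChild = lChild
        self.rChild = rChild

#      5
#     / \
#    3   8
root = Node(5, Node(3), Node(8))

found = Inorder_search_recursive(root, 8)  # uses the corrected function above
print(found.value if found else None)      # 8
print(Inorder_search_recursive(root, 7))   # None -- key not present
```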
Since your problem is `to find the first occurrence node in its inorder traversal`, you should 1) traverse the tree in-order and 2) stop when you find the first occurrence. ``` def search(node, key): if node is None: return None # Search the left subtree and return early if key is found n = search(node.lChild, key) if n is not None: return n # Check middle and return early if key is found if node.value == key: return node # Search right subtree return search(node.rChild, key) ```
21,175,923
The problem I am tackling is to find the first occurrence of a node in the inorder traversal of a BST. The code I have is given below ``` def Inorder_search_recursive(node,key): if not node: return None InOrder_search_recursive(node.lChild) if node.value==key: return node InOrder_search_recursive(node.rChild) ``` This code always returns None; what's wrong with it? I think I've returned the node when I find a node with value k. Why can't Python pass this node back? Thanks in advance
2014/01/17
[ "https://Stackoverflow.com/questions/21175923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1824922/" ]
When you call yourself recursively, like this: ``` InOrder_search_recursive(node.lChild) ``` That's just a normal function call, like any other. It just calls the function and gets back a result. It doesn't automatically `return` the value from that function, or do anything else. So, you do the left-subtree search, ignore the results, then go on to check `node.value == key`, and, if that fails, you do the right-subtree search, again ignore the results, and fall off the end of the function, meaning you return `None`. To make this work, you need to `return` the value you got back. But, of course, only if it's `not None`. Also, you forgot to pass the `key` argument down to the recursive call, so you're just going to get a `TypeError`. (I'm guessing your real code doesn't have this problem, but since you didn't show us your real code, or a working example, I can't be sure…) So: ``` def Inorder_search_recursive(node, key): if not node: return None result = Inorder_search_recursive(node.lChild, key) if result is not None: return result if node.value==key: return node return Inorder_search_recursive(node.rChild, key) ``` (You don't need the `not None` check for the right-side search, because if it returns `None`, we have nothing else to try and are just going to return `None` anyway. Note also that the recursive calls must spell the function name exactly as defined — the question's mixed `InOrder_`/`Inorder_` capitalization would raise a `NameError`.)
My other answer gives the novice-friendly solution, but if you want a more powerful and concise answer: ``` def Inorder_search_recursive_all(node, key): if not node: return yield from Inorder_search_recursive_all(node.lChild, key) if node.value==key: yield node yield from Inorder_search_recursive_all(node.rChild, key) ``` This generates *all* matches in the tree, in order. And it gives them to you as an iterator, so if you just want the first, you can stop as soon as you find one, with no wasted work: ``` def Inorder_search_recursive(node, key): return next(Inorder_search_recursive_all(node, key), None) ``` The tutorial section on [Iterators](http://docs.python.org/3/tutorial/classes.html#iterators) and the following section on Generators explain most of how this works. The only missing bit is an explanation of `yield from`, which is explained in [PEP 380](http://www.python.org/dev/peps/pep-0380/).
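A short usage note (my sketch, reusing the hypothetical `Node` class from earlier): because the `_all` version is a generator, you can take every match, just the first, or stop anywhere in between without traversing more of the tree than needed:

```python
# Duplicate value 5 appears twice; the inorder sequence is 3, 5, 5, 8.
root = Node(5, Node(3), Node(8, Node(5)))

all_fives = list(Inorder_search_recursive_all(root, 5))         # every match, in order
first_five = next(Inorder_search_recursive_all(root, 5), None)  # stops at the first one
print(len(all_fives), first_five.value)                         # 2 5
```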
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
You didn't show a full output. It probably looks like ``` $ gunicorn elcarweb.wsgi [2015-10-27 21:01:47 +0000] [5429] [INFO] Starting gunicorn 19.2.1 [2015-10-27 21:01:47 +0000] [5429] [INFO] Listening at: http://127.0.0.1:8000 (5429) [2015-10-27 21:01:47 +0000] [5429] [INFO] Using worker: sync [2015-10-27 21:01:47 +0000] [5434] [INFO] Booting worker with pid: 5434 [2015-10-27 21:01:47 +0000] [5434] [ERROR] Exception in worker process: Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 503, in spawn_worker worker.init_process() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/workers/base.py", line 116, in init_process self.wsgi = self.app.wsgi() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/util.py", line 355, in import_app __import__(module) ImportError: No module named elcarweb.wsgi Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 503, in spawn_worker worker.init_process() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/workers/base.py", line 116, in init_process self.wsgi = self.app.wsgi() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/util.py", line 355, in import_app __import__(module) ImportError: No module named elcarweb.wsgi [2015-10-27 21:01:47 +0000] [5434] [INFO] Worker exiting (pid: 5434) Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/bin/gunicorn", line 11, in <module> sys.exit(run()) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 189, in run super(Application, self).run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 72, in run Arbiter(self).run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 170, in run self.manage_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 473, in manage_workers self.spawn_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 210, in handle_chld self.reap_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 455, in reap_workers raise HaltServer(reason, 
self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` The last traceback is almost the same, but before it you see the required information: ``` ImportError: No module named elcarweb.wsgi ``` If you see this error, then the solution is to run gunicorn with the `--chdir=/your/app/dir` parameter.
For anyone facing the same issue, the problem is usually something in Django itself. Activate your venv and run `./manage.py runserver`. This will usually give you a more detailed error message.
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
Run gunicorn with `--preload` to see the error log, like this: ``` gunicorn app:application --preload -b 0.0.0.0:5000 ``` This will usually give you a more detailed error message.
I got the very same error. [Gunicorn](http://gunicorn.org/) was working when executed as root, but failed this way when executed as a non-privileged user. In my case I installed the 'dataset' Python module with 'pip install dataset' and it screwed up the permissions on `/usr/lib/python2.7/site-packages/dataset` so it was readable only by the root account. Another module installed as a dependency in the same way was 'normality'. Fix: ``` chmod -R a+rX /usr/lib/python2.7/site-packages/dataset* chmod -R a+rX /usr/lib/python2.7/site-packages/normality* ``` Another issue was with the yaml package auto-installed by pip, but I have not found exactly what. The fix was to reinstall from the Fedora package: ``` dnf -y install python-yaml ```
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
Run gunicorn with `--preload` to see the error log, like this: ``` gunicorn app:application --preload -b 0.0.0.0:5000 ``` This will usually give you a more detailed error message.
I also had a similar error in Ubuntu 16.04, Django 1.11, using systemd. My problem was that I had split my settings out into separate files; i.e., instead of having all settings in project/settings.py, I have a few settings files like project/settings/dev.py. I had to update DJANGO\_SETTINGS\_MODULE in the project/wsgi.py file from: ``` os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings") ``` to ``` os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.dev") ``` then gunicorn worked.
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
For anyone facing the same issue, the problem is usually something in Django itself. Activate your venv and run `./manage.py runserver`. This will usually give you a more detailed error message.
I got the very same error. [Gunicorn](http://gunicorn.org/) was working when executed as root, but failed this way when executed as a non-privileged user. In my case I installed the 'dataset' Python module with 'pip install dataset' and it screwed up the permissions on `/usr/lib/python2.7/site-packages/dataset` so it was readable only by the root account. Another module installed as a dependency in the same way was 'normality'. Fix: ``` chmod -R a+rX /usr/lib/python2.7/site-packages/dataset* chmod -R a+rX /usr/lib/python2.7/site-packages/normality* ``` Another issue was with the yaml package auto-installed by pip, but I have not found exactly what. The fix was to reinstall from the Fedora package: ``` dnf -y install python-yaml ```
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
You didn't show a full output. It probably looks like ``` $ gunicorn elcarweb.wsgi [2015-10-27 21:01:47 +0000] [5429] [INFO] Starting gunicorn 19.2.1 [2015-10-27 21:01:47 +0000] [5429] [INFO] Listening at: http://127.0.0.1:8000 (5429) [2015-10-27 21:01:47 +0000] [5429] [INFO] Using worker: sync [2015-10-27 21:01:47 +0000] [5434] [INFO] Booting worker with pid: 5434 [2015-10-27 21:01:47 +0000] [5434] [ERROR] Exception in worker process: Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 503, in spawn_worker worker.init_process() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/workers/base.py", line 116, in init_process self.wsgi = self.app.wsgi() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/util.py", line 355, in import_app __import__(module) ImportError: No module named elcarweb.wsgi Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 503, in spawn_worker worker.init_process() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/workers/base.py", line 116, in init_process self.wsgi = self.app.wsgi() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/util.py", line 355, in import_app __import__(module) ImportError: No module named elcarweb.wsgi [2015-10-27 21:01:47 +0000] [5434] [INFO] Worker exiting (pid: 5434) Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/bin/gunicorn", line 11, in <module> sys.exit(run()) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 189, in run super(Application, self).run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 72, in run Arbiter(self).run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 170, in run self.manage_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 473, in manage_workers self.spawn_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 210, in handle_chld self.reap_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 455, in reap_workers raise HaltServer(reason, 
self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` The last traceback is almost the same, but before it you see the required information: ``` ImportError: No module named elcarweb.wsgi ``` If you see this error, then the solution is to run gunicorn with the `--chdir=/your/app/dir` parameter.
I also had a similar error in Ubuntu 16.04, Django 1.11, using systemd. My problem was that I had split my settings out into separate files; i.e., instead of having all settings in project/settings.py, I have a few settings files like project/settings/dev.py. I had to update DJANGO\_SETTINGS\_MODULE in the project/wsgi.py file from: ``` os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings") ``` to ``` os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.dev") ``` then gunicorn worked.
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
You didn't show a full output. It probably looks like ``` $ gunicorn elcarweb.wsgi [2015-10-27 21:01:47 +0000] [5429] [INFO] Starting gunicorn 19.2.1 [2015-10-27 21:01:47 +0000] [5429] [INFO] Listening at: http://127.0.0.1:8000 (5429) [2015-10-27 21:01:47 +0000] [5429] [INFO] Using worker: sync [2015-10-27 21:01:47 +0000] [5434] [INFO] Booting worker with pid: 5434 [2015-10-27 21:01:47 +0000] [5434] [ERROR] Exception in worker process: Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 503, in spawn_worker worker.init_process() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/workers/base.py", line 116, in init_process self.wsgi = self.app.wsgi() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/util.py", line 355, in import_app __import__(module) ImportError: No module named elcarweb.wsgi Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 503, in spawn_worker worker.init_process() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/workers/base.py", line 116, in init_process self.wsgi = self.app.wsgi() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/util.py", line 355, in import_app __import__(module) ImportError: No module named elcarweb.wsgi [2015-10-27 21:01:47 +0000] [5434] [INFO] Worker exiting (pid: 5434) Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/bin/gunicorn", line 11, in <module> sys.exit(run()) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 189, in run super(Application, self).run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 72, in run Arbiter(self).run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 170, in run self.manage_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 473, in manage_workers self.spawn_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 210, in handle_chld self.reap_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 455, in reap_workers raise HaltServer(reason, 
self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` The last traceback is almost the same, but before it you see the required information: ``` ImportError: No module named elcarweb.wsgi ``` If you see this error, then the solution is to run gunicorn with the `--chdir=/your/app/dir` parameter.
I got the very same error. [Gunicorn](http://gunicorn.org/) was working when executed as root, but failed this way when executed as a non-privileged user. In my case I installed the 'dataset' Python module with 'pip install dataset' and it screwed up the permissions on `/usr/lib/python2.7/site-packages/dataset` so it was readable only by the root account. Another module installed as a dependency in the same way was 'normality'. Fix: ``` chmod -R a+rX /usr/lib/python2.7/site-packages/dataset* chmod -R a+rX /usr/lib/python2.7/site-packages/normality* ``` Another issue was with the yaml package auto-installed by pip, but I have not found exactly what. The fix was to reinstall from the Fedora package: ``` dnf -y install python-yaml ```
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
Run gunicorn with `--preload` to see the error log, like this: ``` gunicorn app:application --preload -b 0.0.0.0:5000 ``` This will usually give you a more detailed error message.
For anyone facing the same issue, the problem is usually something in Django itself. Activate your venv and run `./manage.py runserver`. This will usually give you a more detailed error message.
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
Actually the problem here was the wsgi file itself. Previously, before Django 1.3, the wsgi file was named with an extension of `.wsgi`, but in recent versions it is created with an extension of `.py`; that is, the wsgi file must be a Python module. So the file should be `hello_wsgi.py` and the command should be ``` gunicorn hello:application -b xx.xxx.xxx.xx:8000 ```
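For reference, a minimal sketch of what such a `hello_wsgi.py` module typically contains for a Django project of that era — the settings path `hello.settings` is an assumption, not something given in the question:

```python
# hello_wsgi.py -- a plain Python module exposing the WSGI callable
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "hello.settings")  # assumed path

from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()  # gunicorn resolves the name after the colon
```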
You didn't show a full output. It probably looks like ``` $ gunicorn elcarweb.wsgi [2015-10-27 21:01:47 +0000] [5429] [INFO] Starting gunicorn 19.2.1 [2015-10-27 21:01:47 +0000] [5429] [INFO] Listening at: http://127.0.0.1:8000 (5429) [2015-10-27 21:01:47 +0000] [5429] [INFO] Using worker: sync [2015-10-27 21:01:47 +0000] [5434] [INFO] Booting worker with pid: 5434 [2015-10-27 21:01:47 +0000] [5434] [ERROR] Exception in worker process: Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 503, in spawn_worker worker.init_process() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/workers/base.py", line 116, in init_process self.wsgi = self.app.wsgi() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/util.py", line 355, in import_app __import__(module) ImportError: No module named elcarweb.wsgi Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 503, in spawn_worker worker.init_process() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/workers/base.py", line 116, in init_process self.wsgi = self.app.wsgi() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/util.py", line 355, in import_app __import__(module) ImportError: No module named elcarweb.wsgi [2015-10-27 21:01:47 +0000] [5434] [INFO] Worker exiting (pid: 5434) Traceback (most recent call last): File "/home/tomek/Elcar/elcarweb/env/bin/gunicorn", line 11, in <module> sys.exit(run()) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 189, in run super(Application, self).run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/app/base.py", line 72, in run Arbiter(self).run() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 170, in run self.manage_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 473, in manage_workers self.spawn_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 210, in handle_chld self.reap_workers() File "/home/tomek/Elcar/elcarweb/env/lib/python2.7/site-packages/gunicorn/arbiter.py", line 455, in reap_workers raise HaltServer(reason, 
self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` The last traceback is almost the same, but before it you see the required information: ``` ImportError: No module named elcarweb.wsgi ``` If you see this error, then the solution is to run gunicorn with the `--chdir=/your/app/dir` parameter.
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
Actually the problem here was the wsgi file itself. Previously, before Django 1.3, the wsgi file was named with an extension of `.wsgi`, but in recent versions it is created with an extension of `.py`; that is, the wsgi file must be a Python module. So the file should be `hello_wsgi.py` and the command should be ``` gunicorn hello:application -b xx.xxx.xxx.xx:8000 ```
I also had a similar error in Ubuntu 16.04, Django 1.11, using systemd. My problem was that I had split my settings out into separate files; i.e., instead of having all settings in project/settings.py, I have a few settings files like project/settings/dev.py. I had to update DJANGO\_SETTINGS\_MODULE in the project/wsgi.py file from: ``` os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings") ``` to ``` os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.dev") ``` then gunicorn worked.
24,488,891
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command `python manage.py runserver`. I installed gunicorn using pip (`pip install gunicorn`) and the Django version is `1.5.3`. When I run the command below inside the virtualenv, `gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000`, I get this error: ``` Traceback (most recent call last): File "/root/Envs/proj/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` So why is the above error encountered, and what's the fix?
2014/06/30
[ "https://Stackoverflow.com/questions/24488891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1342109/" ]
For anyone facing the same issue, the problem is usually something in Django itself. Activate your venv and run `./manage.py runserver`. This will usually give you a more detailed error message.
I also had a similar error in Ubuntu 16.04, Django 1.11, using systemd. My problem was that I had split my settings out into separate files; i.e., instead of having all settings in project/settings.py, I have a few settings files like project/settings/dev.py. I had to update DJANGO\_SETTINGS\_MODULE in the project/wsgi.py file from: ``` os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings") ``` to ``` os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.dev") ``` then gunicorn worked.
2,337,216
Besides wrapping all your code in `try` `except`, is there any way of achieving the same thing as running your script like `python -mpdb script`? I'd like to be able to see what went wrong when an exception gets raised.
2010/02/25
[ "https://Stackoverflow.com/questions/2337216", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31610/" ]
If you do not want to modify the source then you could run it from [ipython](http://ipython.scipy.org/moin/) - an enhanced interactive Python shell. E.g., run `ipython`, then execute `%pdb on` to enable post-mortem debugging. `%run scriptname` will then run the script and automatically enter the debugger on any uncaught exceptions. Alternatively `%run -d scriptname` will start the script in the debugger.
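A sketch of what that IPython session looks like — `%pdb` and `%run` are real IPython magics, but the script name here is hypothetical:

```
In [1]: %pdb on
Automatic pdb calling has been turned ON

In [2]: %run myscript.py
# ...an uncaught exception in myscript.py now drops you straight
# into the pdb post-mortem prompt at the failing frame...
```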
`import pdb; pdb.set_trace()` Source: <http://docs.python.org/library/pdb.html>
2,337,216
Besides wrapping all your code in `try` `except`, is there any way of achieving the same thing as running your script like `python -mpdb script`? I'd like to be able to see what went wrong when an exception gets raised.
2010/02/25
[ "https://Stackoverflow.com/questions/2337216", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31610/" ]
``` python -i script ``` will leave you in the interactive shell when an exception gets raised; then ``` import pdb pdb.pm() ``` will put you into the post-mortem debugger so you can do all the usual debugging things. This should work as long as your script does not call sys.exit. (Which scripts should never do, because it breaks this very useful technique, as well as making them harder to write tests for.)
`import pdb; pdb.set_trace()` Source: <http://docs.python.org/library/pdb.html>
2,337,216
Besides wrapping all your code in `try` `except`, is there any way of achieving the same thing as running your script like `python -mpdb script`? I'd like to be able to see what went wrong when an exception gets raised.
2010/02/25
[ "https://Stackoverflow.com/questions/2337216", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31610/" ]
If you do not want to modify the source then you could run it from [ipython](http://ipython.scipy.org/moin/) - an enhanced interactive Python shell. E.g., run `ipython`, then execute `%pdb on` to enable post-mortem debugging. `%run scriptname` will then run the script and automatically enter the debugger on any uncaught exceptions. Alternatively `%run -d scriptname` will start the script in the debugger.
``` python -i script ``` will leave you in the interactive shell when an exception gets raised; then ``` import pdb pdb.pm() ``` will put you into the post-mortem debugger so you can do all the usual debugging things. This should work as long as your script does not call sys.exit. (Which scripts should never do, because it breaks this very useful technique, as well as making them harder to write tests for.)
19,298,147
I always get this error for this code: ``` Traceback (most recent call last): File "E:/ankosh/trial13.py", line 14, in <module> if grades_mix[index_no]=="HM1": IndexError: list index out of range ``` I would really appreciate the help. ``` file_pointer=open("C:/python27/Doc/student_grades.txt", "r") read_grades=file_pointer.readline() my_list=[] while 0==0: grades_mix=read_grades.split() name_str=grades_mix[0] contained_list=[name_str,0,0,0,0,0.0] index_no=1 count_num=0 sum_float=0.0 avg_float=0.0 while 0==0: if grades_mix[index_no]=="HM1": index_no+=1 grade_num=int(grades_mix[index_no]) count_num+=1 sum_float+=grade_num contained_list[1]=grade_num elif grades_mix[index_no]=="HM2": index_no+=1 grade_num=int(grades_mix[index_no]) count_num+=1 sum_float+=grade_num contained_list[2]=grade_num elif grades_mix[index_no]=="HM3": index_no+=1 grade_num=int(grades_mix[index_no]) count_num+=1 sum_float+=grade_num contained_list[3]=grade_num elif grades_mix[index_no]=="HM4": index_no+=1 grade_num=int(grades_mix[index_no]) count_num+=1 sum_float+=grade_num contained_list[4]=grade_num index_no+=1 if count_num>0: avg_float=sum_float/count_num contained_list[5]=avg_float index_num=0 while index_num<len(my_list): if my_list[0]>name_str: break index_no+=1 my_list.insert(index_num, contained_list) read_grades=file_pointer.readline() file_pointer.close() print format ("Name","<10")+" | "+format("HM1"," >5")+" | "+format("HM2"," >5")+" | "+format("HM3"," >5")+" | "+format("HM4"," >5")+" | "+format("avg_float"," <10")+" | " for index_no in range(0, len(my_list)): print format(my_list[index_num][0], "<10") + " | " + \ format(my_list[index_num][1], " >5") + " | " + \ format(my_list[index_num][2], " >5") + " | " + \ format(my_list[index_num][3], " >5") + " | " + \ format(my_list[index_num][4], " >5") + " | " + \ format(my_list[index_num][5], " >10.2f") + " | " ```
2013/10/10
[ "https://Stackoverflow.com/questions/19298147", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2833577/" ]
You have, in abbreviated form:

```
while 0==0:
    if grades_mix[index_no]=="HM1":
        index_no+=1
```

with no break statement to ever get out of the loop (incidentally, `while True:` is the idiomatic way to spell an intentionally infinite loop). Sooner or later you're going to go out of range.
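A minimal sketch of one possible fix, reusing the poster's variable names and abbreviating the loop body; it assumes every keyword token is followed by its grade:

```python
while True:
    if index_no >= len(grades_mix):
        break  # no more tokens on this line, stop scanning it
    if grades_mix[index_no] == "HM1":
        index_no += 1
        grade_num = int(grades_mix[index_no])
        # ... update count_num, sum_float, contained_list as before ...
    index_no += 1
```

The outer reading loop has the same problem: `readline()` returns an empty string at end of file, and the code never checks for it.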
The error message ``` IndexError: list index out of range ``` indicates that you are using an index that does not exist in the list. Try checking the size of the list by printing out its length: `print "length of array grades_mix =", len(grades_mix)`. If the length of the list is less than or equal to 1, then you will get an error at the first step, when calling ``` grades_mix[index_no] ``` with ``` index_no=1 ``` It might be that your file contains empty lines or lines with a single word; you need to take care of that in your script.
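As a hedged sketch of that guard (variable names copied from the poster's code; the minimum of 2 tokens per line is an assumption about the file format):

```python
read_grades = file_pointer.readline()
while read_grades != "":          # readline() returns "" at end of file
    grades_mix = read_grades.split()
    if len(grades_mix) < 2:       # blank line, or a name with no grades: skip it
        read_grades = file_pointer.readline()
        continue
    # ... process grades_mix as before ...
    read_grades = file_pointer.readline()
```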
2,408,677
I'm trying to implement the calculation of the Pearson correlation coefficient between two users' preference data in PHP. I'm just trying to port the Python script that can be found at this URL <http://answers.oreilly.com/topic/1066-how-to-find-similar-users-with-python/> My implementation is the following: ``` class LB_Similarity_PearsonCorrelation implements LB_Similarity_Interface{ public function similarity($user1, $user2){ $sharedItem = array(); $pref1 = array(); $pref2 = array(); $result1 = $user1->fetchAllPreferences(); $result2 = $user2->fetchAllPreferences(); foreach($result1 as $pref){ $pref1[$pref->item_id] = $pref->rate; } foreach($result2 as $pref){ $pref2[$pref->item_id] = $pref->rate; } foreach ($pref1 as $item => $preferenza){ if(key_exists($item,$pref2)){ $sharedItem[$item] = 1; } } $n = count($sharedItem); if ($n == 0) return 0; $sum1 = 0;$sum2 = 0;$sumSq1 = 0;$sumSq2 = 0;$pSum = 0; foreach ($sharedItem as $item_id => $pre) { $sum1 += $pref1[$item_id]; $sum2 += $pref2[$item_id]; $sumSq1 += pow($pref1[$item_id],2); $sumSq2 += pow($pref2[$item_id],2); $pSum += $pref1[$item_id] * $pref2[$item_id]; } $num = $pSum - (($sum1 * $sum2) / $n); $den = sqrt(($sumSq1 - pow($sum1,2)/$n) * ($sumSq2 - pow($sum2,2)/$n)); if ($den == 0) return 0; return $num/$den; } } ``` As a clarification, to better understand the code: the method fetchAllPreferences returns a set of objects that are actually the items; I turn them into an array for ease of management. I'm not sure that this implementation is correct; in particular I have some doubts about the correctness of the calculation of the denominator. Any advice is welcome. Thanks in advance!
2010/03/09
[ "https://Stackoverflow.com/questions/2408677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/76593/" ]
Your algorithm looks mathematically correct but numerically unstable. Finding the sum of squares explicitly is a recipe for disaster. What if you have numbers like `array(10000000001, 10000000002, 10000000003)`? A numerically stable one-pass algorithm for calculating the variance can be [found on Wikipedia](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance), and the same principle can be applied to computing the covariance. Easier yet, if you don't care much about speed, you could just use two passes. Find the means in the first pass, then compute the variances and covariances using the textbook formula in the second pass.
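A sketch of that two-pass approach, written in Python here since the thread is a Python port anyway (the PHP translation is mechanical); it assumes `xs` and `ys` are equal-length lists of the shared ratings:

```python
import math

def pearson(xs, ys):
    n = len(xs)
    if n == 0:
        return 0.0
    # Pass 1: the means
    mean_x = sum(xs) / float(n)
    mean_y = sum(ys) / float(n)
    # Pass 2: centered sums; centering first avoids the catastrophic
    # cancellation that the raw sum-of-squares formula can suffer from
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    den = math.sqrt(var_x * var_y)
    return cov / den if den != 0 else 0.0
```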
Try my package here: <http://www.phpclasses.org/browse/package/5854.html>
2,408,677
I'm trying to implement the calculation of the Pearson correlation coefficient between two users' preference data in PHP. I'm just trying to port the Python script that can be found at this URL <http://answers.oreilly.com/topic/1066-how-to-find-similar-users-with-python/> My implementation is the following: ``` class LB_Similarity_PearsonCorrelation implements LB_Similarity_Interface{ public function similarity($user1, $user2){ $sharedItem = array(); $pref1 = array(); $pref2 = array(); $result1 = $user1->fetchAllPreferences(); $result2 = $user2->fetchAllPreferences(); foreach($result1 as $pref){ $pref1[$pref->item_id] = $pref->rate; } foreach($result2 as $pref){ $pref2[$pref->item_id] = $pref->rate; } foreach ($pref1 as $item => $preferenza){ if(key_exists($item,$pref2)){ $sharedItem[$item] = 1; } } $n = count($sharedItem); if ($n == 0) return 0; $sum1 = 0;$sum2 = 0;$sumSq1 = 0;$sumSq2 = 0;$pSum = 0; foreach ($sharedItem as $item_id => $pre) { $sum1 += $pref1[$item_id]; $sum2 += $pref2[$item_id]; $sumSq1 += pow($pref1[$item_id],2); $sumSq2 += pow($pref2[$item_id],2); $pSum += $pref1[$item_id] * $pref2[$item_id]; } $num = $pSum - (($sum1 * $sum2) / $n); $den = sqrt(($sumSq1 - pow($sum1,2)/$n) * ($sumSq2 - pow($sum2,2)/$n)); if ($den == 0) return 0; return $num/$den; } } ``` As a clarification, to better understand the code: the method fetchAllPreferences returns a set of objects that are actually the items; I turn them into an array for ease of management. I'm not sure that this implementation is correct; in particular I have some doubts about the correctness of the calculation of the denominator. Any advice is welcome. Thanks in advance!
2010/03/09
[ "https://Stackoverflow.com/questions/2408677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/76593/" ]
This is my solution: ``` function php_correlation($x,$y){ if(count($x)!==count($y)){return -1;} $x=array_values($x); $y=array_values($y); $xs=array_sum($x)/count($x); $ys=array_sum($y)/count($y); $a=0;$bx=0;$by=0; for($i=0;$i<count($x);$i++){ $xr=$x[$i]-$xs; $yr=$y[$i]-$ys; $a+=$xr*$yr; $bx+=pow($xr,2); $by+=pow($yr,2); } $b = sqrt($bx*$by); if($b==0) return 0; return $a/$b; } ``` <http://profprog.ru/korrelyaciya-na-php-php-simple-pearson-correlation/>
Try my package here: <http://www.phpclasses.org/browse/package/5854.html>
17,550,136
My background is in more managed languages (C#, Python) but I am becoming more experienced in C/C++. I am familiar with why the selection by reference (.) and selection through pointer operation (->) operators are different. In all cases I have encountered, if you use the incorrect one, it will result in a compile error. If that is the case, then why were they not made into one operator? Is there a case where using either on the same object results in different, meaningful and useful results? This question was inspired by this answer: [Is this right way to call a function in c++?](https://stackoverflow.com/questions/17548007/is-this-right-way-to-call-a-function-in-c/17548054#17548054)
2013/07/09
[ "https://Stackoverflow.com/questions/17550136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518451/" ]
In C++ you can overload the `->` operator, which is used in pretty much all smart pointer implementations. However, some of those also have their own methods, e.g. to release a reference.

```
struct test {
    int x;
};

std::shared_ptr<test> ptr(new test);

// Write to member x of the allocated object
ptr->x = 3;

// Reset the shared pointer to point to a different object.
// If there are no further shared_ptrs pointing to the previously allocated one,
// it is deleted.
ptr.reset(new test);
```

Additionally, it would be quite messy for the compiler to resolve operator `.` for something like multiple-level pointers, e.g. `test*** ptr`. With your logic, `ptr.x`, `(*ptr).x`, `(**ptr).x` and `(***ptr).x` would all be the same.
You cannot apply `->` to a reference to a basic type and you cannot apply `.` to a pointer, but you can apply both to a user-defined type and they will have different meanings. The simplest example is a smart pointer, like `std::shared_ptr`: ``` struct A { int x; }; std::shared_ptr<A> p(new A); p->x = 10; p.reset(); ```
17,550,136
My background is in more managed languages (C#, Python) but I am becoming more experienced in C/C++. I am familiar with why the selection by reference (.) and selection through pointer operation (->) operators are different. In all cases I have encountered, if you use the incorrect one, it will result in a compile error. If that is the case, then why were they not made into one operator? Is there a case where using either on the same object results in different, meaningful and useful results? This question was inspired by this answer: [Is this right way to call a function in c++?](https://stackoverflow.com/questions/17548007/is-this-right-way-to-call-a-function-in-c/17548054#17548054)
2013/07/09
[ "https://Stackoverflow.com/questions/17550136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518451/" ]
In C++ you can overload the `->` operator, which is used in pretty much all smart pointer implementations. However, some of those also have their own methods, e.g. to release a reference.

```
struct test {
    int x;
};

std::shared_ptr<test> ptr(new test);

// Write to member x of the allocated object
ptr->x = 3;

// Reset the shared pointer to point to a different object.
// If there are no further shared_ptrs pointing to the previously allocated one,
// it is deleted.
ptr.reset(new test);
```

Additionally, it would be quite messy for the compiler to resolve operator `.` for something like multiple-level pointers, e.g. `test*** ptr`. With your logic, `ptr.x`, `(*ptr).x`, `(**ptr).x` and `(***ptr).x` would all be the same.
> > Is there a case where element selection by reference and element selection through pointer operation are both valid? > > > Since you can overload `operator->()` in C++, you can actually arrive at situations where you can use `->` and `.` interchangeably on the same object. You can even engineer things so that you get a different result, as per this example: ``` #include <iostream> struct Bar { void hello() const { std::cout << "Bar!!!\n"; } }; struct FooBar { Bar bar; void hello() const { std::cout << "FooBar!!!\n"; } const Bar* operator->() const {return &bar; } }; int main() { FooBar fb; fb->hello(); fb.hello(); } ``` Of course, in real code you would never do something as crazy as this (although I have seen this kind of thing in "production" code).
17,550,136
My background is in more managed languages (C#, Python) but I am becoming more experienced in C/C++. I am familiar with why the selection by reference (.) and selection through pointer operation (->) operators are different. In all cases I have encountered, if you use the incorrect one, it will result in a compile error. If that is the case, then why were they not made into one operator? Is there a case where using either on the same object results in different, meaningful and useful results? This question was inspired by this answer: [Is this right way to call a function in c++?](https://stackoverflow.com/questions/17548007/is-this-right-way-to-call-a-function-in-c/17548054#17548054)
2013/07/09
[ "https://Stackoverflow.com/questions/17550136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518451/" ]
In C++ you can overload the `->` operator, which is used in pretty much all smart pointer implementations. However, some of those also have their own methods, e.g. to release a reference.

```
struct test {
    int x;
};

std::shared_ptr<test> ptr(new test);

// Write to member x of the allocated object
ptr->x = 3;

// Reset the shared pointer to point to a different object.
// If there are no further shared_ptrs pointing to the previously allocated one,
// it is deleted.
ptr.reset(new test);
```

Additionally, it would be quite messy for the compiler to resolve operator `.` for something like multiple-level pointers, e.g. `test*** ptr`. With your logic, `ptr.x`, `(*ptr).x`, `(**ptr).x` and `(***ptr).x` would all be the same.
The short answer would be: a smart pointer. You can access the smart pointer class's own members using the `.` operator (for instance, if you make your own smart pointer class, you can expose the current reference count that way), while you would use the `->` operator to access whatever is being stored by the smart pointer.
72,623,017
As of May 30th, SMTP is no longer accepted. <https://support.google.com/accounts/answer/6010255?hl=en&ref_topic=7188673> What is the new way to make a simple Python emailer rather than a full application with the "login with google" option? Not sure why I was asked for the code and error, given that I already diagnosed the issue and was asking for alternative methods. Here it is. It's a handy emailer that texts me to work out when I work at home. ``` import time import smtplib import random gmail_user = 'usernameImNotSharing@gmail.com' gmail_password = 'TheCorrectPassword' sent_from = gmail_user to = ['myphonenumber@vtext.com'] exercises = ['push ups', 'jumps in place', '20lb curls', 'tricep extensions', 'quarter mile runs'] levels = [1, 2, 3] level1 = ['10', '15', '16', '20', '1'] level2 = ['15', '30', '30', '40', '2'] level3 = ['20', '50', '48', '70', '4'] while True: if int(time.strftime('%H')) > 9: if int(time.strftime('%H')) < 23: abc = random.uniform(0, 1) picker = random.randint(0, 4) if abc < 0.3: level = level1 if 0.3 < abc and abc < 0.8: level = level2 if abc > 0.8: level = level3 exersize = exercises[picker] amount = level[picker] try: subject = f'Test' body = f'Do {amount} {exersize}' server = smtplib.SMTP_SSL('smtp.gmail.com', 465) server.ehlo() server.login(gmail_user, gmail_password) server.sendmail(sent_from, to, body) server.close() print('Email sent!') except Exception as error: print(error) time.sleep(random.randint(1500, 4800)) time.sleep(100) ``` error: > > (535, b'5.7.8 Username and Password not accepted. Learn more at\n5.7.8 <https://support.google.com/mail/?p=BadCredentials> jj1-20020a170903048100b00163247b64bfsm7655137plb.115 - gsmtp') > > > Solved below: SMTP is still accepted for app passwords. App password creation steps can be found here, but you must enable 2 factor auth first, before app passwords can be created. <https://support.google.com/accounts/answer/185833> <https://myaccount.google.com/security>
2022/06/14
[ "https://Stackoverflow.com/questions/72623017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12339133/" ]
Correction: after May 30 2022, sending the user's actual password is no longer accepted by Google's SMTP server. You should configure an [app password](https://support.google.com/accounts/answer/185833?hl=en#:%7E:text=An%20App%20Password%20is%20a,2%2DStep%20Verification%20turned%20on.); this works. Then replace the password in your code with this new app password. > > An App Password is a 16-digit passcode that gives a less secure app or device permission to access your Google Account. App Passwords can only be used with accounts that have 2-Step Verification turned on. > >

```
gmail_user = 'usernameImNotSharing@gmail.com'
gmail_password = 'AppsPassword'
```

Another option is to use [Xoauth2](https://developers.google.com/gmail/imap/xoauth2-protocol) [Quick fix for SMTP username and password not accepted error](https://www.youtube.com/watch?v=Y_u5KIeXiVI)
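For reference, a minimal end-to-end sketch with an app password (the address, the 16-character password, and the recipient are all placeholders):

```python
import smtplib
from email.message import EmailMessage

gmail_user = 'usernameImNotSharing@gmail.com'   # placeholder address
gmail_password = 'xxxxxxxxxxxxxxxx'             # the 16-character app password, not the account password

msg = EmailMessage()
msg['Subject'] = 'Test'
msg['From'] = gmail_user
msg['To'] = 'myphonenumber@vtext.com'           # placeholder recipient
msg.set_content('Do 20 push ups')

# Same SSL connection as before; only the credential changes.
with smtplib.SMTP_SSL('smtp.gmail.com', 465) as server:
    server.login(gmail_user, gmail_password)
    server.send_message(msg)
```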
```
import smtplib

# "server.smtp.com" is a placeholder - point this at your own SMTP relay
host = "server.smtp.com"
server = smtplib.SMTP(host)

FROM = "testpython@test.com"
TO = "bla@test.com"
# The blank line (\n\n) separates the headers from the body
MSG = "Subject: Test email python\n\nBody of your message!"

server.sendmail(FROM, TO, MSG)
server.quit()
print("Email sent")
```
65,526,849
I run through the following steps to attempt to start up an app for production:

```
-Setup a virtualenv for the python dependencies: virtualenv -p /usr/bin/python3.8 ~/app_env
-Install pip dependencies: . ~/app_env/bin/activate && pip install -r ~/app/requirements.txt
-Un-comment the lines for privilege dropping in uwsgi.ini and change the uid and gid to your account name
-Login to root with sudo -s and re-source the env with source /home/usr/app_env/bin/activate
-Set the courthouse to production mode by setting the environment variable with export PRODUCTION=1
-Start the app: cd /home/usr/app && ./start_script.sh
```

And I get the following error:

```
(app_env) root@usr-Spin-SP314-53N:/home/usr/Desktop/app# ./start.sh
uwsgi: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory
```

I tried a few things such as installing a newer libpcre version as mentioned [here](https://github.com/facebook/watchman/issues/522), and also tried the steps mentioned [here](https://stackoverflow.com/questions/43301339/pcre-issue-when-setting-up-wsgi-application/50087846), but that didn't work. Also the environment I'm setting up doesn't use anaconda but regular python. I even tried `pip install uwsgi` in my virtual env but it said the requirement was already satisfied. I'm not much of an expert when it comes to somewhat complex package management like this; help with how to solve this would be greatly appreciated. Thanks. I'm on Ubuntu 20.04, using python 3.8.
2021/01/01
[ "https://Stackoverflow.com/questions/65526849", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12027484/" ]
What solved it for me was apparently just reinstalling UWSGI, like in [this](https://stackoverflow.com/questions/21669354/rebuild-uwsgi-with-pcre-support?noredirect=1&lq=1) thread, in my virtual env while forcing it to ignore the cache so that it would pick up the pcre library I installed. In order: running ``` uwsgi --version ``` was giving me this ``` uwsgi: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory ``` So I made sure I had the latest libpcre installed ``` sudo apt-get install libpcre3-dev ``` And then what linked it all together was this ``` pip install uwsgi -I --no-cache-dir ```
I tried to solve this error, but nothing worked no matter what I did, and I even reinstalled uwsgi; in the end the following 2 lines solved my problem:

```
sudo find / -name libpcre.so.*
```

# change the path /home/anaconda3/lib/libpcre.so.1 below to the one that appears in the output above

```
sudo ln -s /home/anaconda3/lib/libpcre.so.1 /lib
which python
```
41,285,789
I came across a Python library which has docs that start like this: > > Quickstart > > > Include foolib in your requirements.txt file. > > > AFAIK dependencies should be specified via `install_requires` in `setup.py`. Should I talk to the maintainer of the library and create a pull request for the docs?
2016/12/22
[ "https://Stackoverflow.com/questions/41285789", "https://Stackoverflow.com", "https://Stackoverflow.com/users/633961/" ]
Both are acceptable. The difference is that specifying something in your `install_requires` will auto-download / install that package when you install the package using setup.py. Having a `requirements.txt` makes it easier to see at a glance what the requirements are. I personally prefer seeing libraries with a `requirements.txt`, since I can install all those requirements with `pip` into my virtualenv and be able to update them quickly if needed.
Add your dependencies in a requirements file and then parse this file in the setup.py. This will help you to:

* Easily install dependencies without installing the entire package through pip
* Keep only one source for your dependencies
* Keep all the ways of installing your package available (pip, easy_install, command line, etc...)
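A minimal sketch of that pattern (the package name and version below are hypothetical; comment lines in the requirements file are skipped):

```python
# setup.py
from pathlib import Path
from setuptools import setup, find_packages

requirements = []
for line in Path("requirements.txt").read_text().splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        requirements.append(line)

setup(
    name="foolib",
    version="0.1.0",
    packages=find_packages(),
    install_requires=requirements,
)
```

Note this simple parser does not handle `-r`/`-e` lines or environment markers; for anything fancier, maintain the two lists separately by hand.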
55,721,564
Here is the 'smem' command I run on a Red Hat/CentOS Linux system. I expect the output to be printed without the rows whose swap size is zero; however, I would still expect the heading columns.

```
smem -kt -c "pid user command swap"
PID User Command Swap
7894 root /sbin/agetty --noclear tty1 0
9666 root ./nimbus /opt/nimsoft 0
7850 root /sbin/auditd 236.0K
7885 root /usr/sbin/irqbalance --fore 0
11205 root nimbus(hdb) 0
10701 root nimbus(spooler) 0
8446 trapsanalyzer1 /opt/traps/analyzerd/analyz 0
50316 apache /usr/sbin/httpd -DFOREGROUN 0
50310 apache /usr/sbin/httpd -DFOREGROUN 0
3971 root /usr/sbin/lvmetad -f 36.0K
63988 root su - 0
7905 ntp /usr/sbin/ntpd -u ntp:ntp - 4.0K
7876 dbus /usr/bin/dbus-daemon --syst 44.0K
9672 root nimbus(controller) 0
7888 root /usr/lib/systemd/systemd-lo 0
63990 root -bash 0
59978 postfix pickup -l -t unix -u 0
3977 root /usr/lib/systemd/systemd-ud 736.0K
9016 postfix qmgr -l -t unix -u 0
50303 root /usr/sbin/httpd -DFOREGROUN 0
3941 root /usr/lib/systemd/systemd-jo 52.0K
8199 root //usr/lib/vmware-caf/pme/bi 0
8598 daemon /opt/quest/sbin/.vasd -p /v 0
8131 root /usr/sbin/vmtoolsd 0
7881 root /usr/sbin/NetworkManager -- 8.0K
8364 root /opt/puppetlabs/puppet/bin/ 0
8616 daemon /opt/quest/sbin/.vasd -p /v 0
23290 root /usr/sbin/rsyslogd -n 3.8M
64091 root python /bin/smem -kt -c pid 0
7887 polkitd /usr/lib/polkit-1/polkitd - 0
8363 root /usr/bin/python2 -Es /usr/s 0
53606 root /usr/share/metricbeat/bin/m 0
24631 nagios /usr/local/ncpa/ncpa_passiv 0
24582 nagios /usr/local/ncpa/ncpa_listen 0
7886 root /opt/traps/bin/authorized 76.0K
7872 root /opt/traps/bin/pmd 12.0K
8374 root /opt/puppetlabs/puppet/bin/ 0
7883 root /opt/traps/bin/trapsd 64.0K
----------------------------------------------------
54 10 5.1M
```
2019/04/17
[ "https://Stackoverflow.com/questions/55721564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11096022/" ]
Like this?:

```
$ awk '$NF!=0' file
PID User Command Swap
7850 root /sbin/auditd 236.0K
...
7883 root /opt/traps/bin/trapsd 64.0K
----------------------------------------------------
54 10 5.1M
```

But instead of using the form `awk ... file` you'd probably want to run `smem ... | awk '$NF!=0'`.
Could you please try the following; as an extra precaution it removes trailing whitespace from the last field (in case it is there).

```
smem -kt -c "pid user command swap" | awk 'FNR==1{print;next} {sub(/[[:space:]]+$/,"")} $NF==0{next} 1'
```
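If you would rather do the filtering in Python than in awk, a rough equivalent (it assumes the same four-column output and simply keeps the header plus every row whose last field is not "0"):

```python
import subprocess

out = subprocess.run(
    ["smem", "-kt", "-c", "pid user command swap"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for i, line in enumerate(out):
    fields = line.split()
    # keep the header line and any row whose last field isn't exactly "0"
    if i == 0 or (fields and fields[-1] != "0"):
        print(line)
```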
35,282,609
I'm a new Python programmer working through the book *Automate the Boring Stuff with Python*. One of the end-of-chapter projects is to build a mad libs program. Based on what has been introduced so far, I think that the author intends for me to use regular expressions. Here is my code: ``` #! python3 # # madlibs.py - reads a text file and lets the user add their own text # anywhere the words ADJECTIVE, NOUN, ADVERB, or VERB appear in the text # file. import sys, re, copy # open text file, save text to variable if len(sys.argv) == 2: print('Opening text file...') textSource = open(sys.argv[1]) textContent = textSource.read() textSource.close() else: print('Usage: madlibs.py <textSource>') # locate instances of keywords keywordRegex = re.compile(r'ADJECTIVE|NOUN|ADVERB|VERB', re.I) matches = keywordRegex.findall(textContent) # prompt user to replace keywords with their own input answers = copy.copy(matches) for i in range(len(answers)): answers[i] = input() # create a new text file with the end result for i in range(len(matches)): findMatch = re.compile(matches[i]) textContent = findMatch.sub(answers[i], textContent) print(textContent) textEdited = open('madlibbed.txt', 'w') textEdited.write(textContent) textEdited.close() ``` The input I'm using for textSource is a text file that reads: > > This is the test source file. It has the keyword ADJECTIVE in it, as well as the keyword NOUN. Also, it has another instance of NOUN and then one of ADVERB. > > > My problem is that the findMatch.sub method is replacing both of the instances of NOUN at once. I understand that this is how the sub() method works, but I'm having trouble thinking of a simple way to work around it. How can I design this program so that it only targets and replaces one keyword at a time? I don't want all NOUNs to be replaced with the same word, but rather different words corresponding to the order in which the user types them.
2016/02/09
[ "https://Stackoverflow.com/questions/35282609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5901044/" ]
This is covered in detail in [BashFAQ #004](http://mywiki.wooledge.org/BashFAQ/004). Notably, [use of `ls` for this purpose is an antipattern and should be avoided](http://mywiki.wooledge.org/ParsingLs).

```
shopt -s dotglob   # if including hidden files is desired
files=( "$dir"/* )
[[ -e $files || -L $files ]] && echo "Directory is not empty"
```

`[[ -e $files ]]` doesn't actually check if the entire array's contents exist; rather, it checks the first name returned -- which handles the case when no files match, wherein the glob expression itself is returned as the sole result.

---

Notably:

* This is **far faster** than invoking `ls`, which requires using `fork()` to spawn a subshell, `execve()` to replace that subshell with `/bin/ls`, the operating system's dynamic linker to load shared libraries used by the `ls` binary, etc, etc. [An exception to this is extremely large directories, of tens of thousands of files -- a case in which `ls` will also be slow; see the `find`-based solution below for those].
* This is **more correct** than invoking `ls`: The list of files returned by globbing is guaranteed to exactly match the literal names of files, whereas `ls` can munge names with hidden characters. If the first entry is a valid filename, `"${files[@]}"` can be safely iterated over with assurance that each returned value will be a name, and there's no need to worry about filesystems with literal newlines in their names inflating the count if the local `ls` implementation does not escape them.

---

That said, an alternative approach is to use `find`, if you have one with the `-empty` extension (available both from GNU find and from modern BSDs including Mac OS):

```
[[ $(find -H "$dir" -maxdepth 0 -type d -empty) ]] || echo "Directory is not empty"
```

...if *any* result is given, the directory is nonempty. While slower than globbing on directories which are not unusually large, this is faster than *either* `ls` or globbing for extremely large directories not present in the direntry cache, as it can return results without a full scan.
**Robust pure Bash** solutions: For background on ***why* a pure Bash solution with globbing is superior to using `ls`**, see **[Charles Duffy's helpful answer](https://stackoverflow.com/a/35282784/45375)**, which also contains a **`find`-based alternative**, which is **much faster and less memory-intensive with *large* directories**.[1] Also consider **anubhava's equally fast and memory-efficient [`stat`-based answer](https://stackoverflow.com/a/35284977/45375)**, which, however, requires distinct syntax forms on Linux and BSD/OSX. *Updated* to a simpler solution, gratefully adapted from [this answer](https://stackoverflow.com/a/43402951/45375).

```
# EXCLUDING hidden files and folders - note the *quoted* use of glob '*'
if compgen -G '*' >/dev/null; then
  echo 'not empty'
else
  echo 'empty, but may have hidden files/dirs.'
fi
```

* `compgen -G` is normally used for tab completion, but it is useful in this case as well:
  + Note that `compgen -G` does its *own* globbing, so you must pass it the glob (filename pattern) *in quotes* for it to output all matches. In this particular case, even passing an *unquoted* pattern up front would work, but the difference is worth noting.
  + If nothing matches, `compgen -G` *always* produces *no* output (irrespective of the state of the `nullglob` option), and it indicates via its exit code whether at least 1 match was found, which is what the conditional takes advantage of (while suppressing any stdout output with `>/dev/null`).

```
# INCLUDING hidden files and folders - note the *unquoted* use of glob *
if (shopt -s dotglob; compgen -G * >/dev/null); then
  echo 'not empty'
else
  echo 'completely empty'
fi
```

* `compgen -G` *never* matches *hidden* items (irrespective of the state of the `dotglob` option), so a workaround is needed to find hidden items too:
  + `(...)` creates a subshell for the conditional; that is, the commands executed in the subshell don't affect the current shell's environment, which allows us to set the `dotglob` option in a localized way.
  + `shopt -s dotglob` causes `*` to match hidden items too (except for `.` and `..`).
  + `compgen -G *` with *unquoted* `*`, thanks to *up-front* expansion by the shell, is either passed at least one filename, whether hidden or not (additional filenames are ignored), or the empty string, if neither hidden nor non-hidden items exist. In the former case the exit code is `0` (signaling success and therefore a nonempty directory), in the latter `1` (signaling a truly empty directory).

---

[1] This answer originally *falsely* claimed to offer a Bash-only solution that is efficient with large directories, based on the following approach: `(shopt -s nullglob dotglob; for f in "$dir"/*; do exit 0; done; exit 1)`. This is NOT more efficient, because, internally, Bash *still* collects all matches in an array first before entering the loop - in other words: `for *` is *not* evaluated lazily.
35,282,609
I'm a new Python programmer working through the book *Automate the Boring Stuff with Python*. One of the end-of-chapter projects is to build a mad libs program. Based on what has been introduced so far, I think that the author intends for me to use regular expressions. Here is my code: ``` #! python3 # # madlibs.py - reads a text file and lets the user add their own text # anywhere the words ADJECTIVE, NOUN, ADVERB, or VERB appear in the text # file. import sys, re, copy # open text file, save text to variable if len(sys.argv) == 2: print('Opening text file...') textSource = open(sys.argv[1]) textContent = textSource.read() textSource.close() else: print('Usage: madlibs.py <textSource>') # locate instances of keywords keywordRegex = re.compile(r'ADJECTIVE|NOUN|ADVERB|VERB', re.I) matches = keywordRegex.findall(textContent) # prompt user to replace keywords with their own input answers = copy.copy(matches) for i in range(len(answers)): answers[i] = input() # create a new text file with the end result for i in range(len(matches)): findMatch = re.compile(matches[i]) textContent = findMatch.sub(answers[i], textContent) print(textContent) textEdited = open('madlibbed.txt', 'w') textEdited.write(textContent) textEdited.close() ``` The input I'm using for textSource is a text file that reads: > > This is the test source file. It has the keyword ADJECTIVE in it, as well as the keyword NOUN. Also, it has another instance of NOUN and then one of ADVERB. > > > My problem is that the findMatch.sub method is replacing both of the instances of NOUN at once. I understand that this is how the sub() method works, but I'm having trouble thinking of a simple way to work around it. How can I design this program so that it only targets and replaces one keyword at a time? I don't want all NOUNs to be replaced with the same word, but rather different words corresponding to the order in which the user types them.
2016/02/09
[ "https://Stackoverflow.com/questions/35282609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5901044/" ]
This is covered in detail in [BashFAQ #004](http://mywiki.wooledge.org/BashFAQ/004). Notably, [use of `ls` for this purpose is an antipattern and should be avoided](http://mywiki.wooledge.org/ParsingLs).

```
shopt -s dotglob   # if including hidden files is desired
files=( "$dir"/* )
[[ -e $files || -L $files ]] && echo "Directory is not empty"
```

`[[ -e $files ]]` doesn't actually check if the entire array's contents exist; rather, it checks the first name returned -- which handles the case when no files match, wherein the glob expression itself is returned as the sole result.

---

Notably:

* This is **far faster** than invoking `ls`, which requires using `fork()` to spawn a subshell, `execve()` to replace that subshell with `/bin/ls`, the operating system's dynamic linker to load shared libraries used by the `ls` binary, etc, etc. [An exception to this is extremely large directories, of tens of thousands of files -- a case in which `ls` will also be slow; see the `find`-based solution below for those].
* This is **more correct** than invoking `ls`: The list of files returned by globbing is guaranteed to exactly match the literal names of files, whereas `ls` can munge names with hidden characters. If the first entry is a valid filename, `"${files[@]}"` can be safely iterated over with assurance that each returned value will be a name, and there's no need to worry about filesystems with literal newlines in their names inflating the count if the local `ls` implementation does not escape them.

---

That said, an alternative approach is to use `find`, if you have one with the `-empty` extension (available both from GNU find and from modern BSDs including Mac OS):

```
[[ $(find -H "$dir" -maxdepth 0 -type d -empty) ]] || echo "Directory is not empty"
```

...if *any* result is given, the directory is nonempty. While slower than globbing on directories which are not unusually large, this is faster than *either* `ls` or globbing for extremely large directories not present in the direntry cache, as it can return results without a full scan.
Here is a solution based on the `stat` command, which can return the **number of hard links** if run against a directory (or a link to a directory). It starts counting the *number of hard links* from 2, as the first two are the `.` and `..` entries, so subtracting `2` from this number gives us the actual number of entries in the given directory (this includes symlinks as well). So putting it all together: ``` (( ($(stat -Lc '%h' "$dir") - 2) > 0)) && echo 'not empty' || echo 'empty' ``` As per `man stat`, the options used are: ``` %h number of hard links -L --dereference, follow links ``` **EDIT:** To make it BSD/OSX compatible use: ``` (( ($(stat -Lf '%l' "$dir") - 2) > 0)) && echo 'not empty' || echo 'empty' ```
35,282,609
I'm a new Python programmer working through the book *Automate the Boring Stuff with Python*. One of the end-of-chapter projects is to build a mad libs program. Based on what has been introduced so far, I think that the author intends for me to use regular expressions. Here is my code: ``` #! python3 # # madlibs.py - reads a text file and lets the user add their own text # anywhere the words ADJECTIVE, NOUN, ADVERB, or VERB appear in the text # file. import sys, re, copy # open text file, save text to variable if len(sys.argv) == 2: print('Opening text file...') textSource = open(sys.argv[1]) textContent = textSource.read() textSource.close() else: print('Usage: madlibs.py <textSource>') # locate instances of keywords keywordRegex = re.compile(r'ADJECTIVE|NOUN|ADVERB|VERB', re.I) matches = keywordRegex.findall(textContent) # prompt user to replace keywords with their own input answers = copy.copy(matches) for i in range(len(answers)): answers[i] = input() # create a new text file with the end result for i in range(len(matches)): findMatch = re.compile(matches[i]) textContent = findMatch.sub(answers[i], textContent) print(textContent) textEdited = open('madlibbed.txt', 'w') textEdited.write(textContent) textEdited.close() ``` The input I'm using for textSource is a text file that reads: > > This is the test source file. It has the keyword ADJECTIVE in it, as well as the keyword NOUN. Also, it has another instance of NOUN and then one of ADVERB. > > > My problem is that the findMatch.sub method is replacing both of the instances of NOUN at once. I understand that this is how the sub() method works, but I'm having trouble thinking of a simple way to work around it. How can I design this program so that it only targets and replaces one keyword at a time? I don't want all NOUNs to be replaced with the same word, but rather different words corresponding to the order in which the user types them.
2016/02/09
[ "https://Stackoverflow.com/questions/35282609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5901044/" ]
**Robust pure Bash** solutions: For background on ***why* a pure Bash solution with globbing is superior to using `ls`**, see **[Charles Duffy's helpful answer](https://stackoverflow.com/a/35282784/45375)**, which also contains a **`find`-based alternative**, which is **much faster and less memory-intensive with *large* directories**.[1] Also consider **anubhava's equally fast and memory-efficient [`stat`-based answer](https://stackoverflow.com/a/35284977/45375)**, which, however, requires distinct syntax forms on Linux and BSD/OSX. *Updated* to a simpler solution, gratefully adapted from [this answer](https://stackoverflow.com/a/43402951/45375).

```
# EXCLUDING hidden files and folders - note the *quoted* use of glob '*'
if compgen -G '*' >/dev/null; then
  echo 'not empty'
else
  echo 'empty, but may have hidden files/dirs.'
fi
```

* `compgen -G` is normally used for tab completion, but it is useful in this case as well:
  + Note that `compgen -G` does its *own* globbing, so you must pass it the glob (filename pattern) *in quotes* for it to output all matches. In this particular case, even passing an *unquoted* pattern up front would work, but the difference is worth noting.
  + If nothing matches, `compgen -G` *always* produces *no* output (irrespective of the state of the `nullglob` option), and it indicates via its exit code whether at least 1 match was found, which is what the conditional takes advantage of (while suppressing any stdout output with `>/dev/null`).

```
# INCLUDING hidden files and folders - note the *unquoted* use of glob *
if (shopt -s dotglob; compgen -G * >/dev/null); then
  echo 'not empty'
else
  echo 'completely empty'
fi
```

* `compgen -G` *never* matches *hidden* items (irrespective of the state of the `dotglob` option), so a workaround is needed to find hidden items too:
  + `(...)` creates a subshell for the conditional; that is, the commands executed in the subshell don't affect the current shell's environment, which allows us to set the `dotglob` option in a localized way.
  + `shopt -s dotglob` causes `*` to match hidden items too (except for `.` and `..`).
  + `compgen -G *` with *unquoted* `*`, thanks to *up-front* expansion by the shell, is either passed at least one filename, whether hidden or not (additional filenames are ignored), or the empty string, if neither hidden nor non-hidden items exist. In the former case the exit code is `0` (signaling success and therefore a nonempty directory), in the latter `1` (signaling a truly empty directory).

---

[1] This answer originally *falsely* claimed to offer a Bash-only solution that is efficient with large directories, based on the following approach: `(shopt -s nullglob dotglob; for f in "$dir"/*; do exit 0; done; exit 1)`. This is NOT more efficient, because, internally, Bash *still* collects all matches in an array first before entering the loop - in other words: `for *` is *not* evaluated lazily.
Here is a solution based on the `stat` command, which can return the **number of hard links** if run against a directory (or a link to a directory). It starts counting the *number of hard links* from 2, as the first two are the `.` and `..` entries, so subtracting `2` from this number gives us the actual number of entries in the given directory (this includes symlinks as well). So putting it all together: ``` (( ($(stat -Lc '%h' "$dir") - 2) > 0)) && echo 'not empty' || echo 'empty' ``` As per `man stat`, the options used are: ``` %h number of hard links -L --dereference, follow links ``` **EDIT:** To make it BSD/OSX compatible use: ``` (( ($(stat -Lf '%l' "$dir") - 2) > 0)) && echo 'not empty' || echo 'empty' ```