Dataset columns:
- content: string (86 to 88.9k chars)
- title: string (0 to 150 chars)
- question: string (1 to 35.8k chars)
- answers: sequence
- answers_scores: sequence
- non_answers: sequence
- non_answers_scores: sequence
- tags: sequence
- name: string (30 to 130 chars)
Unable to run i386-elf-gcc
I installed i386-elf-gcc from the AUR. The install goes smoothly and there are no errors when I build. I added /usr/local/i386elfgcc/bin to my path and I am able to run the command. When I run a simple command such as i386-elf-gcc -g "kernel.cpp" -o "kernel.o", I get errors: /usr/local/i386elfgcc/lib/gcc/i386-elf/10.2.0/../../../../i386-elf/bin/ld: cannot find crt0.o: No such file or directory /usr/local/i386elfgcc/lib/gcc/i386-elf/10.2.0/../../../../i386-elf/bin/ld: cannot find -lg /usr/local/i386elfgcc/lib/gcc/i386-elf/10.2.0/../../../../i386-elf/bin/ld: cannot find -lc Reinstalling has not worked. What can I do to fix this? I am using Arch.
[ "The original command was i386-elf-gcc -ffreestanding -m32 -g \"kernel.cpp\" -o \"kernel.o\"\nI needed to add -nostdlib\n\nFinal command:\ni386-elf-gcc -nostdlib -ffreestanding -m32 -g \"kernel.cpp\" -o \"kernel.o\"\n" ]
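For context, a bare-metal i386-elf cross toolchain ships no C library and no crt0 startup object, which is exactly what ld was failing to find; -ffreestanding and -nostdlib tell gcc not to expect them. As a hedged variant, some OS-dev setups also link libgcc for compiler helper routines (assuming the AUR build produced it):

    i386-elf-gcc -nostdlib -ffreestanding -m32 -g kernel.cpp -o kernel.o -lgcc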
[ 0 ]
[]
[]
[ "arch", "cross_compiling", "gcc", "x86" ]
stackoverflow_0074679590_arch_cross_compiling_gcc_x86.txt
Html div behaving strange
[<!DOCTYPE html> <html lang="en"> <hea][1]d> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> <style> div{ height: 100px; width: 200px; border: 10px solid black; padding: 10px; box-sizing: border-box; } </style> </head> <body> <div class="div"> </div> </body> </html> This code seems to generate three div blocks, whereas if I use the class .div, it works as I expect. I was making a div element and styling it when this happened.
[ "Because what happened? Do you mean the [ on top? Remove all the falsely placed [ symbols as shown below. And why do you give a div element a class name of div? In your CSS you are also referencing the element directly and not via class name.\nAnd what do you mean by generating three div elements? There can't be more than what is defined in the body, and at the moment that would be only a single div element. If you want two more div elements, define two more in the body section.\n\n\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n    <meta charset=\"UTF-8\">\n    <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n    <title>Document</title>\n    <style>\n        div{\n            height: 100px;\n            width: 200px;\n            border: 10px solid black;\n            padding: 10px;\n            box-sizing: border-box;\n        }\n    </style>\n</head>\n<body>\n    <div></div>\n    <!-- Add two more div elements to achieve three elements\n    <div></div>\n    <div></div>\n    -->\n</body>\n</html>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074674937_css_html.txt
[React][useInfiniteQuery] How to restrict the first fetch call to happen only when the view is visible in the viewport
I am using the useInfiniteQuery hook for infinite scrolling items in my component. I want the first fetch to happen only when the view is visible in the viewport. I am using useInView/intersection observer for finding out if the view is visible in the viewport, but I am unable to restrict the first fetch on the basis of this.
[ "I'm assuming you are talking about the useInfiniteQuery hook from @tanstack/react-query. This hook actually inherits all options from useQuery, which takes an enabled property that determines when it will be triggered.\nAssuming you have a boolean that determines the state of your IntersectionObserver, you could do something like this (note that @tanstack/react-query expects the query key as an array):\nuseInfiniteQuery(['data'], fetchData, {\n  enabled: valueFromIntersectionObserver\n});\n\nRefer to the documentation of the useQuery hook for more details.\n" ]
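To make the enabled option concrete, here is a minimal sketch combining it with useInView from react-intersection-observer (which the question's tags mention); fetchItems is a hypothetical page fetcher, not part of either library:

    import { useInView } from 'react-intersection-observer';
    import { useInfiniteQuery } from '@tanstack/react-query';

    function Feed() {
      // ref marks the element to observe; inView flips to true once it is visible
      const { ref, inView } = useInView();

      // the first fetch only fires after inView becomes true
      const { data } = useInfiniteQuery(['items'], fetchItems, { enabled: inView });

      return <div ref={ref}>{/* render pages from data here */}</div>;
    }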
[ 0 ]
[]
[]
[ "intersection_observer", "react_hooks", "react_infinite_scroll", "react_intersection_observer", "reactjs" ]
stackoverflow_0074680090_intersection_observer_react_hooks_react_infinite_scroll_react_intersection_observer_reactjs.txt
Hide specific field in drf_yasg django swagger documentation
I have a project with drf_yasg. There is a serializer with a to_internal_value method which is writing a field ('mobile_operator_code') for me. The field is overridden, so we don't need to input it in the request, though we need to output it in the response. The documentation (drf_yasg) takes fields='__all__' and documents everything for the request, though we don't need 'mobile_operator_code' in the request documentation. How do I hide this field for the request, but not for the response? The serializer is as follows: class ClientSerializer(serializers.ModelSerializer): tags = ClientTagsField(required=False) mobile_operator_code = serializers.CharField(required=False) class Meta: model = Client fields = '__all__' def to_internal_value(self, data): data_overriden = data.copy() if 'phone' in data: data_overriden['mobile_operator_code'] = get_operator_code_from_phone(data['phone']) return super().to_internal_value(data_overriden) The view is as follows: class CreateClientView(mixins.CreateModelMixin, generics.GenericAPIView): serializer_class = ClientSerializer def post(self, request, *args, **kwargs): return self.create(request, *args, **kwargs) Model: class Client(models.Model): phone = models.CharField(max_length=11, verbose_name="номер телефона клиента") mobile_operator_code = models.CharField(max_length=3, verbose_name="код мобильного оператора") tags = models.JSONField(null=True, blank=True, verbose_name="теги") timezone = models.CharField(max_length=50, verbose_name="временная зона") # https://en.wikipedia.org/wiki/List_of_tz_database_time_zones def __str__(self): return str(self.phone) + ' ' + str(self.tags) if self.tags else str(self.phone) Swagger documentation: (screenshot) What I tried: read_only on the field makes this field not writable by to_internal_value; excluding the field from fields does the same; you cannot use a different serializer, because you will still have to exclude the field for the request. Thanks in advance!
[ "To exclude the mobile_operator_code field from the request documentation in drf_yasg, you can specify the fields attribute in the Meta class of the ClientSerializer to include only the fields that should be included in the request documentation. For example, you could do something like this:\n\nclass ClientSerializer(serializers.ModelSerializer):\n    tags = ClientTagsField(required=False)\n    mobile_operator_code = serializers.CharField(required=False)\n\n    class Meta:\n        model = Client\n        fields = ('phone', 'tags', 'timezone')\n\n    def to_internal_value(self, data):\n        data_overriden = data.copy()\n        if 'phone' in data:\n            data_overriden['mobile_operator_code'] = get_operator_code_from_phone(data['phone'])\n        return super().to_internal_value(data_overriden)\n\nThis will tell drf_yasg to only include the phone, tags, and timezone fields in the request documentation, and exclude the mobile_operator_code field.\n\n\n" ]
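An alternative sketch that keeps the field in responses: leave ClientSerializer untouched at runtime and override only the documented request body with drf_yasg's swagger_auto_schema decorator (ClientRequestSerializer is a hypothetical serializer that simply omits mobile_operator_code):

    from drf_yasg.utils import swagger_auto_schema

    class CreateClientView(mixins.CreateModelMixin, generics.GenericAPIView):
        serializer_class = ClientSerializer  # still used for actual requests/responses

        @swagger_auto_schema(request_body=ClientRequestSerializer)  # docs-only override
        def post(self, request, *args, **kwargs):
            return self.create(request, *args, **kwargs)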
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "drf_yasg", "swagger" ]
stackoverflow_0074680126_django_django_rest_framework_drf_yasg_swagger.txt
Laravel validation for row uniqueness
Working in Laravel 9, and I am doing my validations in FormRequests. I have an email_updates table. I have 3 columns, email, product_uuid, affiliate_uuid, and I am looking to enforce row uniqueness. An email can sign up for multiple products, or even the same product from a different affiliate. Here is a shortened scenario of my data. The first 4 rows are valid. +--------+--------------+----------------+------------+ | email | product_uuid | affiliate_uuid | created_at | +--------+--------------+----------------+------------+ | a@a.co | 3ed | 21c | 2022-01-01 | | b@b.co | 46a | 21c | 2022-01-01 | | a@a.co | 46a | 21c | 2022-01-01 | | a@a.co | 46a | 899 | 2022-01-01 | +--------+--------------+----------------+------------+ But I need the validator to refuse this row, because the trio of a@a.co, 3ed, 21c has already been used before +--------+--------------+----------------+------------+ | a@a.co | 3ed | 21c | 2022-01-01 | +--------+--------------+----------------+------------+ Here is the validator that I have written so far, but it does not catch my duplicate row public function rules() { return [ 'email' => [ 'required|email:rfc,dns|min:5|max:75', Rule::unique("email")->where(function ($query) { $query->where("product_uuid", $this->product_uuid) ->where("affiliate_uuid", $this->affiliate_uuid); }) ], ]; } The Laravel docs do not seem to address my situation: https://laravel.com/docs/9.x/validation#rule-unique I am sure that it is something simple, but what am I missing here?
[ "To enforce row uniqueness, you can use the unique rule in your FormRequest's validation rules. The unique rule takes the name of the table and column as its arguments. In your case, you want to make sure that the combination of email, product_uuid, and affiliate_uuid is unique, so you can specify those columns as the second and third arguments to the unique rule, like this:\npublic function rules()\n{\n return [\n 'email' => [\n 'required|email:rfc,dns|min:5|max:75',\n Rule::unique('email_updates', 'email', 'product_uuid', 'affiliate_uuid')\n ],\n ];\n}\n\nThis will ensure that the combination of email, product_uuid, and affiliate_uuid is unique in the email_updates table.\n", "Well it was something simple and frankly dumb! I had tried so many things, that I had several broken things throughout my code. After cleaning them out, it came down to this. Using my attempt or @Osama Alma's answer broke with this error Method Illuminate\\Validation\\Validator::validateRequired|email does not exist.\nI started a new question to learn how to chain these things together. Laravel merge eloquent validation rule with a Rule::\nA comment on this question gave me the answer. Method Illuminate\\Validation\\Validator::validateRequired|min does not exist\nYou can't use a mix of | and rule\nThis code is validating all three, and will allow saving only if all three are unique\npublic function rules()\n{\n return [\n 'email' => [\n 'required', 'email:rfc,dns', 'min:5', 'max:75',\n Rule::unique(\"email\")->where(function ($query) {\n $query->where(\"product_uuid\", $this->product_uuid)\n ->where(\"affiliate_uuid\", $this->affiliate_uuid);\n })\n ],\n ];\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ "laravel_9", "laravel_formrequest", "validation" ]
stackoverflow_0074678559_laravel_9_laravel_formrequest_validation.txt
Problems with fixed nav
I have problems with a fixed nav: it shows a little too far down over the section links. I need the nav on top so I can clearly see the title text in the sections. I have tried adding margin-top to the sections, but those gaps are so big and it looks awful, so I need another solution. Pictures are here: Problem What I need
[ "Fixed elements are not included in the Normal Flow.\nIn order to fix this, you'd need to make the underlying element larger by applying a padding-top with the same height as the fixed nav.\n", "You can add a margin-top but it's not good practice. Just try to drag the Lorem Ipsum element into the section that has the blue background.\nThen select the elements that lie in the navbar (Home, About, etc.) inside a div and give them the green background color.\n(Check that the Lorem Ipsum text has position static, not absolute.)\n" ]
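A minimal sketch of the padding-top fix from the first answer (the 60px height and the selectors are assumptions; match them to your own markup):

    /* the nav is fixed, so it is taken out of the normal flow */
    nav { position: fixed; top: 0; left: 0; right: 0; height: 60px; }

    /* push the page content down by the nav's height so titles are not covered */
    body { padding-top: 60px; }

    /* optional: keeps in-page anchor links from landing under the nav */
    html { scroll-padding-top: 60px; }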
[ 0, 0 ]
[]
[]
[ "bootstrap_4", "css", "fixed", "html", "nav" ]
stackoverflow_0074680088_bootstrap_4_css_fixed_html_nav.txt
How can "cursor.callproc" of MySQL be replaced in MariaDB?
I have found examples of how to call stored procedures in MySQL from Python using cursor.callproc, but cursor.callproc seems not to be defined in MariaDB. I am using version 10.3. How can I solve this?
[ "I am learning how to write stored procedures in MariaDB. I have written a test procedure which inserts data into a MariaDB database. This test works from the MariaDB terminal. Trying to follow an example from the internet, I wrote the following simple code:\n    import mariadb\n    connection = mariadb.connect(host='localhost',\n                                 database='Terrain',\n                                 password=\"xxxxxxx\",\n                                 port = 3306,\n                                 user='admin'\n                                 )\n\n    cursor = connection.cursor(prepared=True)\n    args = 10, 20, 45\n    cursor.callproc(\"AddPolygon\", args)\n\nWhen typing cursor. a list with alternatives pops up, but \"callproc\" is not in this list.\nWhen I try to execute the code I receive the message: object has no attribute 'callproc'\n" ]
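If callproc really is missing from the installed connector version, a hedged workaround is to issue a plain SQL CALL through the standard DB-API execute method (AddPolygon and its three arguments come from the question; the ? placeholder style is the one MariaDB Connector/Python uses):

    import mariadb

    connection = mariadb.connect(host='localhost', database='Terrain',
                                 password='xxxxxxx', port=3306, user='admin')
    cursor = connection.cursor()

    # CALL is ordinary SQL, so no callproc helper is needed
    cursor.execute("CALL AddPolygon(?, ?, ?)", (10, 20, 45))
    connection.commit()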
[ 0 ]
[]
[]
[ "mariadb", "python", "stored_procedures" ]
stackoverflow_0074679068_mariadb_python_stored_procedures.txt
How to build a c program project using meson
I would like to build a C project using Meson. Please provide the contents of the meson.build file and where to place it. These are the folders of the project: Project_dir/ src/ command.c include/ command.h main.c
[ "To build a C project using Meson, you will need to create a meson.build file in the root directory of your project (Project_dir/ in your layout). The meson.build file is used to define the build configuration for your project, including the source files, build options, and dependencies.\nHere is an example meson.build file for a simple C project:\n# Define the project\nproject('my-c-project', 'c')\n\n# Add the directory containing the header file to the include search path\ninc = include_directories('include')\n\n# Add source files\nsrcs = ['main.c', 'src/command.c']\n\n# Create an executable target\nexecutable('my-c-project', srcs, include_directories: inc)\n\nThis meson.build file defines a project named my-c-project that is written in C, with the source files main.c and src/command.c, and declares an executable target named my-c-project that will be built from them.\nThe include_directories() call tells Meson to add the include directory to the list of directories to search for header files; passing its result to executable() is what allows Meson to find the command.h header file when building the project.\nYou can add additional source files, dependencies, and build options to the meson.build file as needed. For more information, see the Meson documentation: https://mesonbuild.com/\n" ]
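For completeness, with meson.build sitting in Project_dir/, the usual out-of-tree build is (builddir is an arbitrary directory name):

    meson setup builddir
    meson compile -C builddir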
[ 1 ]
[]
[]
[ "meson_build" ]
stackoverflow_0074680160_meson_build.txt
How to connect fields from Cognito user to Strapi user
I have set up Strapi and connected it to AWS Cognito using their Users & Permissions plugin. I want to have first name, last name and gender on my Cognito users. I expanded the Users content type to have those fields as well. When I register a new user those fields just don't get filled out by default. I haven't found a way to configure that by reading the documentation, and I feel like there has to be some way to access this built-in plugin and override a method somehow, to allow me to expand on what is going to be mapped onto my Strapi users. There are pretty well made docs for getting started (https://docs.strapi.io/developer-docs/latest/plugins/users-permissions.html#providers:~:text=ngrok.io%27)%2C).-,%23,Setting%20up%20the%20provider%20%2D%20examples,-Instead%20of%20a), but they lack the customization details. Anyone have any ideas?
[ "To solve this issue I made a post-authentication trigger in Cognito which, when authentication completes, makes a POST request and updates the users-permissions fields in Strapi.\n" ]
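A rough sketch of what such a trigger might look like as a Node.js Lambda (Node 18+ runtime for the global fetch; the Strapi endpoint, payload shape, and API token are assumptions, not Strapi's documented API):

    exports.handler = async (event) => {
      // standard Cognito attributes; gender is only present if configured
      const { given_name, family_name, gender } = event.request.userAttributes;

      // hypothetical Strapi endpoint that updates the matching user record
      await fetch(`${process.env.STRAPI_URL}/api/users-permissions/sync`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${process.env.STRAPI_TOKEN}`,
        },
        body: JSON.stringify({
          username: event.userName,
          firstName: given_name,
          lastName: family_name,
          gender,
        }),
      });

      return event; // Cognito triggers must return the event object
    };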
[ 0 ]
[]
[]
[ "amazon_cognito", "strapi", "typescript" ]
stackoverflow_0073503587_amazon_cognito_strapi_typescript.txt
AWS Amplify hosted React App url redirects is tricky
my main issue is in the title. Background: I built a React web app, hosted on AWS Amplify, and a Node/Express server, hosted on AWS EC2 with nginx running as a reverse proxy. Additionally I used Webpack and React-Router (important maybe). My front-end handles all routing and views; my back-end doesn't send anything to render to the front-end, only data. When my app is running on localhost and Netlify, there's no issue. I've done a ton of research and, according to the post React-router urls don't work when refreshing or writing manually, it would appear that my problem is caused by the client-side routing used in my app and what happens when the page reloads or refreshes. The reason being, when the page loads for the first time it doesn't have anything to render yet, so it sends a request to the server, which in my case doesn't send a response back. I read that having a catch-all file that would always direct to the index page (with the bundle) in the dist folder, and also having an index.html page sent from the back-end, could work. I have tried using a catch-all route. Shown in the code blocks below, I have a _redirects page which does get included in the dist folder (at least when I run npm run build myself), but it doesn't have an effect. EDIT It took a lot of testing but I managed to fix everything by playing around in the AWS console and applying the redirects / rewrites from there rather than from webpack. I done played myself trying round-about solutions rather than go straight to the source, that being AWS. Another issue adding to my confusion was the React docs' and other people's mentions of making requests to the server, often citing the back-end server, which made me forget that AWS Amplify also acts as a server. Webpack: module.exports = { entry: './src/index.tsx', output: { path: path.resolve(__dirname, 'dist'), filename: 'bundle.js', publicPath: '/' }, resolve: { extensions: ['.js', '.jsx', '.ts', '.tsx'] }, module: { rules: [ { test: /\.(js|jsx|ts|tsx)$/, use: 'babel-loader' }, { test: /\.css$/, use: ['style-loader', 'css-loader'] }, { test: /\.(woff(2)?|ttf|png|eot|svg)(\?v=\d+\.\d+\.\d+)?$/, use: [{ loader: 'file-loader' }] } ] }, plugins: [ new HtmlWebpackPlugin({ template: 'public/index.html' }), new CopyPlugin([ { from: 'public/_redirects' } ]) ], mode: process.env.NODE_ENV === 'production' ? 'production' : 'development', devServer: { historyApiFallback: true } }; Redirect file: /* /index.html 200
[ "Try adding a to key to the CopyPlugin configuration object like so:\nnew CopyPlugin([\n  { from: './public/_redirects' },\n  { to: './public/' }\n])\n\n", "I ran into this issue as of Dec 2022 and this is how I solved it via the AWS console:\n\nRedirect rules:\n/<*> /index.html 404 (Rewrite)\n</^[^.]+$|\\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)/> /index.html 200 (Rewrite)\n\n" ]
[ 0, 0 ]
[]
[]
[ "amazon_ec2", "aws_amplify", "react_router", "reactjs", "webpack" ]
stackoverflow_0062823167_amazon_ec2_aws_amplify_react_router_reactjs_webpack.txt
What is the alternative for skipTaskbar in electron for Linux?
I am using an Ubuntu-based distro and Electron version 22.0.0. I cannot find any method or property to hide my application, which I am making for a personal project, from the taskbar. I tried the skipTaskbar and dock.hide methods but both do not work.
[ "If you are using Electron to build a desktop application, you can use the setSkipTaskbar(skip: boolean) method to control whether your application appears in the taskbar. This method is part of the BrowserWindow class, which is used to create and manage the individual windows that make up a desktop application.\nHere is an example of how you could use the setSkipTaskbar method in your Electron application:\nconst { BrowserWindow } = require('electron')\n\n// Create a new BrowserWindow instance\nconst mainWindow = new BrowserWindow({\n width: 800,\n height: 600,\n})\n\n// Hide the window from the taskbar\nmainWindow.setSkipTaskbar(true)\n\n// Load your application's main HTML page\nmainWindow.loadFile('index.html')\n\nYou can also use the show() and hide() methods of the BrowserWindow class to control whether the window is visible on the screen. This can be useful if you want to temporarily hide your application from view without closing it.\nHere is an example of how you could use the hide() method to hide your application's main window:\nconst { BrowserWindow } = require('electron')\n\n// Create a new BrowserWindow instance\nconst mainWindow = new BrowserWindow({\n width: 800,\n height: 600,\n})\n\n// Hide the window from the screen\nmainWindow.hide()\n\n// Load your application's main HTML page\nmainWindow.loadFile('index.html')\n\nKeep in mind that hiding the window from the taskbar or the screen does not prevent the user from accessing the application. The user can still access the application through other means, such as using the Alt+Tab keyboard shortcut to switch between windows.\nI hope this will work with you\n" ]
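As a sketch of one common workaround when the taskbar hint is ignored by a given Linux window manager: hide the window entirely and keep the app reachable through a tray icon instead (the icon path is a placeholder; Tray and Menu are Electron's documented APIs):

    const { app, BrowserWindow, Tray, Menu } = require('electron')

    app.whenReady().then(() => {
      // never appears in the taskbar because it is never shown at all
      const win = new BrowserWindow({ width: 800, height: 600, show: false })
      win.loadFile('index.html')

      // the tray icon becomes the only visible entry point
      const tray = new Tray('/path/to/icon.png')
      tray.setContextMenu(Menu.buildFromTemplate([
        { label: 'Show', click: () => win.show() },
        { label: 'Quit', click: () => app.quit() },
      ]))
    })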
[ 0 ]
[]
[]
[ "electron", "javascript", "linux", "node.js" ]
stackoverflow_0074679255_electron_javascript_linux_node.js.txt
Function can print tokens, but no lemmas
I have a function to run occurrence counts for tokens/lemmas in a sentence. I want to take a sentence, remove all the annoying bits (punctuation, space, stop), count what's left, then divide that number by how many times those tokens appear in the sentence. Token / count. When I run this function, I can get the user_token to populate, but not the user_lemmas. import spacy # Tokens/Lemmas Count: divided by word count user_lemmas = [token.lemma_ for token in sentence if token_isolation(token)] user_token = [token for token in sentence if token_isolation(token)] def function(sentence, user_X): # Word Count count = 0 for token in sentence: if not (token.is_space or token.is_punct): count += 1 # Word occurrence count occurrence = 0 > occurrence = len([i for i in user_tokens if i in sentence ]) > occurrence = len([i for i in user_lemmas if i in sentence ]) # Calculation result = occurrence / count return result, count, occurrence Error: TypeError: Argument 'other' has incorrect type (expected spacy.tokens.token.Token, got str) I checked both outputs' types, and I get the same answer: lists. But two different outputs (token: [text, text] lemma: ['text', 'text']). I tried converting the output of the lemmas to different formats but I continue to get the same type error. Edit: The error occurs when I try to run the occurrence = statement. When I run the following: result_token = score_sentence_by_token(sentence, interesting_token) result_lemma = score_sentence_by_lemma(sentence, interesting_lemmas) token goes through giving me a score, but lemma gets a type error.
[ "See the following lines:\nuser_lemmas = [token.lemma_ for token in sentence if token_isolation(token)]\nuser_token = [token for token in sentence if token_isolation(token)]\n\nthe user_lemmas is a list of strings (picked up from the lemma_ string attribute), the user_token is a list of spacy tokens, i.e., spacy objects. The sentence is the spacy document object.\nWhen you test: if i in sentence the \"in\" operator is supported only for the spacy token objects, not strings.\nThus you cannot do \"Word\" in sentence as this is not supported. You need to test spacy token in sentence, which is fulfilled with user_token as it is a spacy token and not a string.\n\nMinimal example to produce the error:\nimport spacy\nnlp = spacy.load(\"en_core_web_sm\")\ndoc = nlp(\"Apple is looking at buying U.K. startup for $1 billion\")\n\ncontained = \"Apple\" in doc\n\n>>> TypeError: Argument 'other' has incorrect type (expected spacy.tokens.token.Token, got str)\n\n\nExample of code that compares tokens by the lemmas:\nimport spacy\n\ndef compare_token(first_token, second_token, compare_by_lemma):\n    \"\"\"Compares two spacy tokens, first compares POS tag, then lemma/form match.\n    Switch between lemma/form by bool `compare_by_lemma`.\n    \"\"\"\n    if not first_token.pos_ == second_token.pos_:\n        return False\n\n    if compare_by_lemma:\n        if first_token.lemma_ == second_token.lemma_:\n            return True\n    else:\n        if first_token.text == second_token.text:\n            return True\n    return False\n\n\nnlp = spacy.load(\"en_core_web_sm\")\ndoc = nlp(\"Apple is looking at buying U.K. startup for $1 billion, while bought other company yesterday.\")\nbuying_token = doc[4]\n\n\nfor token in doc:\n    is_match = compare_token(buying_token, token, compare_by_lemma=True)  # <<< notice compare_by_lemma=True, try False\n    if is_match:\n        print(f\"{token.text}\\t{is_match}\")\n>>> buying True\n>>> bought True\n\n" ]
[ 0 ]
[]
[]
[ "nlp", "python" ]
stackoverflow_0074679117_nlp_python.txt
delete parent automatically if child was deleted
This is my database relationship. I got data like this: LikeDislike |-------------|--------------|-----------------| | id | userId | createdAt | |-------------|--------------|-----------------| | 1 | 1 | 2019-03-26 | |-------------|--------------|-----------------| RoomLikes |------------------|--------------| | likeDislikeId | roomId | |------------------|--------------| | 1 | 1 | |------------------|--------------| What I want to do: when I delete data from RoomLikes, like DELETE FROM roomlikes where id = 1, the data in LikeDislike that is related to RoomLikes should be deleted automatically. Is there a way to delete the data of the parent automatically if the data of the child was deleted? Please let me know if you need more info. Thanks.
[ "If I understand the question you need to look into using a join in your delete statement - something like this:\nDELETE RoomLikes, LikeDislike\nFROM RoomLikes\nINNER JOIN LikeDislike ON RoomLikes.likeDislikeId = LikeDislike.id\nWHERE roomlikes.id=1;\n\n", "You can use the following query:\nALTER TABLE RoomLikes ADD FOREIGN KEY (likeDislikeId) REFERENCES LikeDislike(id) ON DELETE CASCADE;\nJust remember that you need to have a primary key in the referenced table (LikeDislike) to add the foreign key. By using this, if you delete data in the parent table, the data in the child table will be deleted.\n", "Is it correct that a record in LikeDislike has exactly 1 child record that is either a RoomLike or a CommentLike?\nIn that case you might consider altering the corresponding foreign keys with ON DELETE CASCADE and most of all, deleting the parent record instead of the child record.\n" ]
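To make the delete-parent-when-child-is-deleted direction concrete, here is a hedged sketch using an AFTER DELETE trigger on the child table (assuming the child's foreign key is not declared ON DELETE CASCADE, since MySQL rejects a trigger whose cascade would touch the table being modified):

    CREATE TRIGGER roomlikes_after_delete
    AFTER DELETE ON RoomLikes
    FOR EACH ROW
      -- remove the parent row that the deleted child pointed at
      DELETE FROM LikeDislike WHERE id = OLD.likeDislikeId;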
[ 1, 1, 1 ]
[ "Please give me this solution using stored procedure how to create the stored procedure Delete child automatically if child was deleted\n" ]
[ -2 ]
[ "mysql", "sql" ]
stackoverflow_0055413263_mysql_sql.txt
React - How to set a new nested object property using just one line?
I'm using a state to store an object with this structure: { "fullName": "John", "birthdate": "2000-02-27", "cpfNumber": "11408247910", "telephoneNumber": "47996034002", "emailAddress": "john@gmail.com", "address": { "mainAddress": "Rua Amoroso Costa", "numberAddress": "171", "neighborHood": "Jardim das Américas", "complementInfo": "Casa", "zipCode": "81530-150" } } I'm using a form to fill this data using the onChange function: <input type="email" name="emailAddress" id="emailAddress" onChange={(e) => setFormPatient({ ...formPatient, ["emailAddress"]: e.target.value })} /> I have been trying some different approaches to update the data in the address nested object but I'm not getting there. One of my failed attempts was: onChange={(e) => setFormPatient({ ...formPatient, address: "mainAddress": e.target.value }) My question is the following: is there a way to make this one-liner work for a nested object like address?
[ "It might be that the object in question is not an array; try to use it like this and check if it works:\n <input type=\"text\"\n        name=\"mainAddress\"\n        id=\"mainAddress\"\n        onChange={(e) => setFormPatient({ ...formPatient, address: {\n            ...formPatient.address,\n            mainAddress: e.target.value\n        }\n    })} />\n\nEdit: Fixed based on Chris Heald feedback, ty!\n" ]
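One small follow-up on the design: when the next state depends on the previous one, the functional form of the state setter avoids stale-closure bugs. The same update written that way:

    onChange={(e) =>
      setFormPatient((prev) => ({
        ...prev,
        address: { ...prev.address, mainAddress: e.target.value },
      }))
    }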
[ 1 ]
[]
[]
[ "forms", "react_hooks", "reactjs", "setstate" ]
stackoverflow_0074680194_forms_react_hooks_reactjs_setstate.txt
npm ERR! syscall spawn git
I have a problem. When I install thirdweb-dev/react I get this: npm ERR! code ENOENT npm ERR! syscall spawn git npm ERR! path git npm ERR! errno -4058 npm ERR! enoent An unknown git error occurred npm ERR! enoent This is related to npm not being able to find a file. npm ERR! enoent npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\ELZAHBIA\AppData\Local\npm-cache\_logs\2022-05-08T16_00_11_597Z-debug-0.log
[ "Make sure you have the GIT installed on your device and accessible globally. Try to type git --version in the CMD. If it returned that the command is not recognized and GIT is already installed, then you need to add it to the PATH environment variable and make sure to try the command using a new CMD session.\n", "I had the same problem and it was related to not having the node packages of npm on my PATH. This solved the problem for me: npm global install does not add packages to PATH on Windows 8.1\n" ]
[ 0, 0 ]
[]
[]
[ "npm_install", "reactjs", "thirdweb" ]
stackoverflow_0072163471_npm_install_reactjs_thirdweb.txt
Q: VS Code - Error: EPERM: operation not permitted I have been experiencing A LOT of permission issues when using VS code with Windows 10. When trying to move a folder: Error: EPERM: operation not permitted, rename 'path a' -> 'path b' When deleting a folder: It fails silently, the folder is removed from the solution explorer but it persists on disk. It doesn't work with or without admin rights. I went to the folder containing all my repos, set the ownership to me, applied full control to all authenticated users, and it still doesn't work. Any idea ? EDIT: It does work sometimes, that's what make it very strange A: I encountered the error message when renaming or moving files in VS Code. I then noticed that it was the same in Windows Explorer. I develop in Angular using Angular-CLI and I execute the command ng build --watch in a cmd. When I stopped the watch command, that solved the issue and I was able to rename and move files in VS Code without any problem. So I think there are some processes that hold your files. I hope this can help. A: I had this issue as well. The cause in my case appeared to be the "Angular Language Service" extension. I killed that in the extensions pane and was able to rename the file immediately. Unfortunately, the problem still persists when Angular Language Server is enabled. A: I faced the same issue while doing a react project. At first, I thought I need to give admin rights but that also didn't work. Later I found that My project was hosted on my localHost. So, If your project is running we can't change the folder structure. You need to quit the server before updating the folder structure. Solution: Quit the Localhost Server and try again(For React and Angular Users). You can also try restarting the VScode. A: The solution that worked for me was to open Vscode as admin. A: If you use the Jest VSCode extension and a test file is located in the folder you're trying to rename, it might not work without disabling the Jest runner. A: Just close the vscode and do any file operations in file explorer. I still rename my files using this way, perhaps this is an unsolved bug. A: If you're developing Firebase app and have its Firebase Emulators previously running, this might happen because when you press Ctrl+C, their emulators may not completely shutdown. No one mention which background process can be ended in the windows Task Manager either. In short, when this problem happen, it has to be some kind of CLI or Extension that is holding up the resources. SOLUTION: Restart PC and try renaming or removing your folder again, it should work. A: In my case it was a nested folder. Manually creating the top folder and moving it's contents was allowed... (imports were still automatically updated) A: As silly as it sounds, this error also happens when a file's 'read-only' flag is set for any reason; in my case it was copying the entire VSCode folder from a read-only host-share to a VM. It doesn't matter whether you have full admin rights or not, if file is read-only VSCode cannot change it. As for open files, you can use SysInternals' Process Explorer tool to find out which process has an open handle to a particular path or file. A: I encountered Error: EPERM: operation not permitted, rename while having the live server of the Live Server extension running. After stopping the live server the renaming operation was possible again. 
A: Just to add to this wall of answers, this is pretty clearly a very context dependent problem, I initially assumed this was something to do with vscode or my windows permissions but actually it was a problem with my deletion script. Initially my script was something like this: fs.readdir("./content/products", (err, files) => { if (err) console.log(err); else { files.forEach((file) => { console.log(`Deleting: ${file}`); fs.unlink(`content/products//${file}`, (err) => { if (err) throw err; }); }); } }); And this worked for most of my time with this project.... until I decided to make subfolders in the products directory at which point node attempted to unlink the directories and this error was raised. Couldn't figure out why this was happening even after rebooting my PC :/ I changed my deletion script to try { await fs.emptyDir("content/products"); } catch (err) { console.error(err); } And it works now :) A: I had the same issue in VSCode with Angular. I was trying to rename a component folder. The solution for me was to remove the compiled Angular (.js) files from folder_to_rename/dist/ folder. After that, I could rename the parent folder without issues. A: I also got this error while developing in React Native, it was because the app was running and constantly listening for the changes in the code. Terminating the Metro worked. A: "C:\Users{username}" If you don't see the .vscode folder under this path, this may be your problem. Making the .vscode folder visible will solve your problem. It worked for me. If you don't know how to make it visible follow the below steps. see hidden files Make the folder visible from this link and then right click on the folder, select properties from the popup menu, check the box next to hidden in the popup window and then click ok. Your problem should be resolved. A: In my case, I tried moving React packages into a new folder I created in VS Code but the error message persisted. I solved it by closing the google-chrome browser where the app was running. I tried again and it worked :) A: If files/folders are renamed or moved, put them pack to their previous states. Disable and Enable the Angular Language Service extension. Rerun the "ng serve" or "ng build" NOTE: Make sure the extension is updated. No need of uninstalling or disabling the extension entirely as suggested above. Angular 15.0 Angular Language Service v15.0.2 VSCode 1.73.1
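Several of the answers above reduce to "some process is holding a handle to the file". As a hedged illustration (not from the original thread), here is a small Python sketch using the third-party psutil package to list processes with open file handles under a given path; the target path is a placeholder.

# Sketch (assumes: pip install psutil). Lists processes holding open file
# handles under TARGET -- the usual cause of EPERM rename/delete on Windows.
import psutil

TARGET = r"C:\repos\my-project"  # placeholder path

for proc in psutil.process_iter(["pid", "name"]):
    try:
        for f in proc.open_files():
            if f.path.lower().startswith(TARGET.lower()):
                print(proc.info["pid"], proc.info["name"], f.path)
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue  # some processes cannot be inspected without elevation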
VS Code - Error: EPERM: operation not permitted
I have been experiencing A LOT of permission issues when using VS Code with Windows 10. When trying to move a folder: Error: EPERM: operation not permitted, rename 'path a' -> 'path b' When deleting a folder: it fails silently, the folder is removed from the solution explorer but it persists on disk. It fails both with and without admin rights. I went to the folder containing all my repos, set the ownership to me, applied full control to all authenticated users, and it still doesn't work. Any idea? EDIT: It does work sometimes, which is what makes it so strange
[ "I encountered the error message when renaming or moving files in VS Code.\nI then noticed that it was the same in Windows Explorer.\nI develop in Angular using Angular-CLI and I execute the command ng build --watch in a cmd.\nWhen I stopped the watch command, that solved the issue and I was able to rename and move files in VS Code without any problem.\nSo I think there are some processes that hold your files.\nI hope this can help.\n", "I had this issue as well. The cause in my case appeared to be the \"Angular Language Service\" extension.\nI killed that in the extensions pane and was able to rename the file immediately.\nUnfortunately, the problem still persists when Angular Language Server is enabled.\n", "I faced the same issue while doing a react project. At first, I thought I need to give admin rights but that also didn't work. Later I found that My project was hosted on my localHost. So, If your project is running we can't change the folder structure. You need to quit the server before updating the folder structure.\nSolution: Quit the Localhost Server and try again(For React and Angular Users). You can also try restarting the VScode.\n", "The solution that worked for me was to open Vscode as admin.\n", "If you use the Jest VSCode extension and a test file is located in the folder you're trying to rename, it might not work without disabling the Jest runner.\n", "Just close the vscode and do any file operations in file explorer. I still rename my files using this way, perhaps this is an unsolved bug.\n", "If you're developing Firebase app and have its Firebase Emulators previously running, this might happen because when you press Ctrl+C, their emulators may not completely shutdown. No one mention which background process can be ended in the windows Task Manager either.\nIn short, when this problem happen, it has to be some kind of CLI or Extension that is holding up the resources.\nSOLUTION: Restart PC and try renaming or removing your folder again, it should work.\n", "In my case it was a nested folder. Manually creating the top folder and moving it's contents was allowed... (imports were still automatically updated)\n", "As silly as it sounds, this error also happens when a file's 'read-only' flag is set for any reason; in my case it was copying the entire VSCode folder from a read-only host-share to a VM. It doesn't matter whether you have full admin rights or not, if file is read-only VSCode cannot change it. As for open files, you can use SysInternals' Process Explorer tool to find out which process has an open handle to a particular path or file.\n", "I encountered Error: EPERM: operation not permitted, rename while having the live server of the Live Server extension running. After stopping the live server the renaming operation was possible again.\n", "Just to add to this wall of answers, this is pretty clearly a very context dependent problem, I initially assumed this was something to do with vscode or my windows permissions but actually it was a problem with my deletion script.\nInitially my script was something like this:\n fs.readdir(\"./content/products\", (err, files) => {\n if (err) console.log(err);\n else {\n files.forEach((file) => {\n console.log(`Deleting: ${file}`);\n fs.unlink(`content/products//${file}`, (err) => {\n if (err) throw err;\n });\n });\n }\n });\n\nAnd this worked for most of my time with this project.... until I decided to make subfolders in the products directory at which point node attempted to unlink the directories and this error was raised. 
Couldn't figure out why this was happening even after rebooting my PC :/\nI changed my deletion script to\n try {\n await fs.emptyDir(\"content/products\");\n } catch (err) {\n console.error(err);\n }\n\nAnd it works now :)\n", "I had the same issue in VSCode with Angular. I was trying to rename a component folder.\nThe solution for me was to remove the compiled Angular (.js) files from folder_to_rename/dist/ folder. After that, I could rename the parent folder without issues.\n", "I also got this error while developing in React Native, it was because the app was running and constantly listening for the changes in the code. Terminating the Metro worked.\n", "\"C:\\Users{username}\" If you don't see the .vscode folder under this path, this may be your problem. Making the .vscode folder visible will solve your problem. It worked for me. If you don't know how to make it visible follow the below steps.\nsee hidden files\nMake the folder visible from this link and then right click on the folder, select properties from the popup menu, check the box next to hidden in the popup window and then click ok.\nYour problem should be resolved.\n", "In my case, I tried moving React packages into a new folder I created in VS Code but the error message persisted.\nI solved it by closing the google-chrome browser where the app was running.\nI tried again and it worked :)\n", "\nIf files/folders are renamed or moved, put them pack to their previous states.\nDisable and Enable the Angular Language Service extension.\nRerun the \"ng serve\" or \"ng build\"\n\nNOTE: Make sure the extension is updated.\nNo need of uninstalling or disabling the extension entirely as suggested above.\n\nAngular 15.0\nAngular Language Service v15.0.2\nVSCode 1.73.1\n\n" ]
[ 59, 25, 9, 6, 4, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 ]
[ "You've probably hidden folder .vscode\nC:\\Users\\%Username%\\.vscode\n\n" ]
[ -1 ]
[ "file_permissions", "visual_studio_code", "windows" ]
stackoverflow_0058707277_file_permissions_visual_studio_code_windows.txt
Q: R split string by symbol Example: string = "abc|3g" function(string) Solution: --> "abc" "3g" Is there a way to split the string as shown in the example? A: strsplit(string,split='|', fixed=TRUE) This is one possible answer. Other solutions? A: A stringr option, using str_split_1, a shortcut for strsplit(...)[[1]]. library(stringr) str_split_1(string, "\\|")
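Both answers work around the same pitfall: | is a regex metacharacter, so it must either be matched literally (fixed = TRUE) or escaped ("\\|"). For comparison, a small Python sketch showing the same two options:

# Same pitfall as in R: "|" means alternation in a regex, so either split
# on the literal string or escape the character.
import re

s = "abc|3g"
print(s.split("|"))        # ['abc', '3g'] -- literal split, like fixed=TRUE
print(re.split(r"\|", s))  # ['abc', '3g'] -- escaped regex, like "\\|" in R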
R split string by symbol
Example: string = "abc|3g" function(string) Solution: --> "abc" "3g" Is there any idea how to split in the way as showed in the example?
[ "strsplit(string,split='|', fixed=TRUE)\n\nThis is the possible answer. Other solutions?\n", "A stringr option, using str_split_1, a shortcut for strsplit(...)[[1]].\nlibrary(stringr)\nstr_split_1(string, \"\\\\|\")\n\n" ]
[ 35, 1 ]
[]
[]
[ "r", "split", "string" ]
stackoverflow_0030842264_r_split_string.txt
Q: How do I delete a versioned bucket in AWS S3 using the CLI? I have tried both s3cmd: $ s3cmd -r -f -v del s3://my-versioned-bucket/ And the AWS CLI: $ aws s3 rm s3://my-versioned-bucket/ --recursive But both of these commands simply add DELETE markers to S3. The command for removing a bucket also doesn't work (from the AWS CLI): $ aws s3 rb s3://my-versioned-bucket/ --force Cleaning up. Please wait... Completed 1 part(s) with ... file(s) remaining remove_bucket failed: s3://my-versioned-bucket/ A client error (BucketNotEmpty) occurred when calling the DeleteBucket operation: The bucket you tried to delete is not empty. You must delete all versions in the bucket. Ok... how? There's no information in their documentation for this. S3Cmd says it's a 'fully-featured' S3 command-line tool, but it makes no reference to versions other than its own. Is there any way to do this without using the web interface, which will take forever and requires me to keep my laptop on? A: I ran into the same limitation of the AWS CLI. I found the easiest solution to be to use Python and boto3: #!/usr/bin/env python BUCKET = 'your-bucket-here' import boto3 s3 = boto3.resource('s3') bucket = s3.Bucket(BUCKET) bucket.object_versions.delete() # if you want to delete the now-empty bucket as well, uncomment this line: #bucket.delete() A previous version of this answer used boto but that solution had performance issues with large numbers of keys as Chuckles pointed out. A: Using boto3 it's even easier than with the proposed boto solution to delete all object versions in an S3 bucket: #!/usr/bin/env python import boto3 s3 = boto3.resource('s3') bucket = s3.Bucket('your-bucket-name') bucket.object_versions.all().delete() Works fine also for very large amounts of object versions, although it might take some time in that case. A: You can delete all the objects in the versioned s3 bucket. But I don't know how to delete specific objects. $ aws s3api delete-objects \ --bucket <value> \ --delete "$(aws s3api list-object-versions \ --bucket <value> | \ jq '{Objects: [.Versions[] | {Key:.Key, VersionId : .VersionId}], Quiet: false}')" Alternatively without jq: $ aws s3api delete-objects \ --bucket ${bucket_name} \ --delete "$(aws s3api list-object-versions \ --bucket "${bucket_name}" \ --output=json \ --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')" A: This two bash lines are enough for me to enable the bucket deletion ! 1: Delete objects aws s3api delete-objects --bucket ${buckettoempty} --delete "$(aws s3api list-object-versions --bucket ${buckettoempty} --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')" 2: Delete markers aws s3api delete-objects --bucket ${buckettoempty} --delete "$(aws s3api list-object-versions --bucket ${buckettoempty} --query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')" A: Looks like as of now, there is an Empty button in the AWS S3 console. Just select your bucket and click on it. It will ask you to confirm your decision by typing permanently delete Note, this will not delete the bucket itself. A: Here is a one liner you can just cut and paste into the command line to delete all versions and delete markers (it requires aws tools, replace yourbucket-name-backup with your bucket name) echo '#!/bin/bash' > deleteBucketScript.sh \ && aws --output text s3api list-object-versions --bucket $BUCKET_TO_PERGE \ | grep -E "^VERSIONS" |\ awk '{print "aws s3api delete-object --bucket $BUCKET_TO_PERGE --key "$4" --version-id "$8";"}' >> \ deleteBucketScript.sh && . 
deleteBucketScript.sh; rm -f deleteBucketScript.sh; echo '#!/bin/bash' > \ deleteBucketScript.sh && aws --output text s3api list-object-versions --bucket $BUCKET_TO_PERGE \ | grep -E "^DELETEMARKERS" | grep -v "null" \ | awk '{print "aws s3api delete-object --bucket $BUCKET_TO_PERGE --key "$3" --version-id "$5";"}' >> \ deleteBucketScript.sh && . deleteBucketScript.sh; rm -f deleteBucketScript.sh; then you could use: aws s3 rb s3://bucket-name --force A: For those using multiple profiles via ~/.aws/config import boto3 PROFILE = "my_profile" BUCKET = "my_bucket" session = boto3.Session(profile_name = PROFILE) s3 = session.resource('s3') bucket = s3.Bucket(BUCKET) bucket.object_versions.delete() A: If you have to delete/empty large S3 buckets, it becomes quite inefficient (and expensive) to delete every single object and version. It's often more convenient to let AWS expire all objects and versions. aws s3api put-bucket-lifecycle-configuration \ --lifecycle-configuration '{"Rules":[{ "ID":"empty-bucket", "Status":"Enabled", "Prefix":"", "Expiration":{"Days":1}, "NoncurrentVersionExpiration":{"NoncurrentDays":1} }]}' \ --bucket YOUR-BUCKET Then you just have to wait 1 day and the bucket can be deleted with: aws s3api delete-bucket --bucket YOUR-BUCKET A: One way to do it is iterate through the versions and delete them. A bit tricky on the CLI, but as you mentioned Java, that would be more straightforward: AmazonS3Client s3 = new AmazonS3Client(); String bucketName = "deleteversions-"+UUID.randomUUID(); //Creates Bucket s3.createBucket(bucketName); //Enable Versioning BucketVersioningConfiguration configuration = new BucketVersioningConfiguration(ENABLED); s3.setBucketVersioningConfiguration(new SetBucketVersioningConfigurationRequest(bucketName, configuration )); //Puts versions s3.putObject(bucketName, "some-key",new ByteArrayInputStream("some-bytes".getBytes()), null); s3.putObject(bucketName, "some-key",new ByteArrayInputStream("other-bytes".getBytes()), null); //Removes all versions for ( S3VersionSummary version : S3Versions.inBucket(s3, bucketName) ) { String key = version.getKey(); String versionId = version.getVersionId(); s3.deleteVersion(bucketName, key, versionId); } //Removes the bucket s3.deleteBucket(bucketName); System.out.println("Done!"); You can also batch delete calls for efficiency if needed. A: If you want pure CLI approach (with jq): aws s3api list-object-versions \ --bucket $bucket \ --region $region \ --query "Versions[].Key" \ --output json | jq 'unique' | jq -r '.[]' | while read key; do echo "deleting versions of $key" aws s3api list-object-versions \ --bucket $bucket \ --region $region \ --prefix $key \ --query "Versions[].VersionId" \ --output json | jq 'unique' | jq -r '.[]' | while read version; do echo "deleting $version" aws s3api delete-object \ --bucket $bucket \ --key $key \ --version-id $version \ --region $region done done A: Simple bash loop I've found and implemented for N buckets: for b in $(ListOfBuckets); do \ echo "Emptying $b"; \ aws s3api delete-objects --bucket $b --delete "$(aws s3api list-object-versions --bucket $b --output=json --query='{Objects: *[].{Key:Key,VersionId:VersionId}}')"; \ done A: For deleting specify object(s), using jq filter. You may need cleanup the 'DeleteMarkers' not just 'Versions'. Using $() instead of ``, you may embed variables for bucket-name and key-value. 
aws s3api delete-objects --bucket bucket-name --delete "$(aws s3api list-object-versions --bucket bucket-name | jq -M '{Objects: [.["Versions","DeleteMarkers"][]|select(.Key == "key-value")| {Key:.Key, VersionId : .VersionId}], Quiet: false}')" A: I ran into issues with Abe's solution as the list_buckets generator is used to create a massive list called all_keys and I spent an hour without it ever completing. This tweak seems to work better for me, I had close to a million objects in my bucket and counting! import boto s3 = boto.connect_s3() bucket = s3.get_bucket("your-bucket-name-here") chunk_counter = 0 #this is simply a nice to have keys = [] for key in bucket.list_versions(): keys.append(key) if len(keys) > 1000: bucket.delete_keys(keys) chunk_counter += 1 keys = [] print("Another 1000 done.... {n} chunks so far".format(n=chunk_counter)) #bucket.delete() #as per usual uncomment if you're sure! Hopefully this helps anyone else encountering this S3 nightmare! A: Even though technically it's not AWS CLI, I'd recommend using AWS Tools for Powershell for this task. Then you can use the simple command as below: Remove-S3Bucket -BucketName {bucket-name} -DeleteBucketContent -Force -Region {region} As stated in the documentation, DeleteBucketContent flag does the following: "If set, all remaining objects and/or object versions in the bucket are deleted proir (sic) to the bucket itself being deleted" Reference: https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-S3Bucket.html A: This bash script found here: https://gist.github.com/weavenet/f40b09847ac17dd99d16 worked as is for me. I saved script as: delete_all_versions.sh and then simply ran: ./delete_all_versions.sh my_foobar_bucket and that worked without a flaw. Did not need python or boto or anything. A: You can do this from the AWS Console using Lifecycle Rules. Open the bucket in question. Click the Management tab at the top. Make sure the Lifecycle Sub Tab is selected. Click + Add lifecycle rule On Step 1 (Name and scope) enter a rule name (e.g. removeall) Click Next to Step 2 (Transitions) Leave this as is and click Next. You are now on the 3. Expiration step. Check the checkboxes for both Current Version and Previous Versions. Click the checkbox for "Expire current version of object" and enter the number 1 for "After _____ days from object creation Click the checkbox for "Permanently delete previous versions" and enter the number 1 for "After _____ days from becoming a previous version" click the checkbox for "Clean up incomplete multipart uploads" and enter the number 1 for "After ____ days from start of upload" Click Next Review what you just did. Click Save Come back in a day and see how it is doing. A: I improved the boto3 answer with Python3 and argv. Save the following script as something like s3_rm.py. #!/usr/bin/env python3 import sys import boto3 def main(): args = sys.argv[1:] if (len(args) < 1): print("Usage: {} s3_bucket_name".format(sys.argv[0])) exit() s3 = boto3.resource('s3') bucket = s3.Bucket(args[0]) bucket.object_versions.delete() # if you want to delete the now-empty bucket as well, uncomment this line: #bucket.delete() if __name__ == "__main__": main() Add chmod +x s3_rm.py. Run the function like ./s3_rm.py my_bucket_name. A: In the same vein as https://stackoverflow.com/a/63613510/805031 ... 
this is what I use to clean up accounts before closing them: # If the data is too large, apply LCP to remove all objects within a day # Create lifecycle-expire.json with the LCP required to purge all objects # Based on instructions from: https://aws.amazon.com/premiumsupport/knowledge-center/s3-empty-bucket-lifecycle-rule/ cat << JSON > lifecycle-expire.json { "Rules": [ { "ID": "remove-all-objects-asap", "Filter": { "Prefix": "" }, "Status": "Enabled", "Expiration": { "Days": 1 }, "NoncurrentVersionExpiration": { "NoncurrentDays": 1 }, "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 } }, { "ID": "remove-expired-delete-markers", "Filter": { "Prefix": "" }, "Status": "Enabled", "Expiration": { "ExpiredObjectDeleteMarker": true } } ] } JSON # Apply to ALL buckets aws s3 ls | cut -d" " -f 3 | xargs -I{} aws s3api put-bucket-lifecycle-configuration --bucket {} --lifecycle-configuration file://lifecycle-expire.json # Apply to a single bucket; replace $BUCKET_NAME aws s3api put-bucket-lifecycle-configuration --bucket $BUCKET_NAME --lifecycle-configuration file://lifecycle-expire.json ...then a day later you can come back and delete the buckets using something like: # To force empty/delete all buckets aws s3 ls | cut -d" " -f 3 | xargs -I{} aws s3 rb s3://{} --force # To remove only empty buckets aws s3 ls | cut -d" " -f 3 | xargs -I{} aws s3 rb s3://{} # To force empty/delete a single bucket; replace $BUCKET_NAME aws s3 rb s3://$BUCKET_NAME --force It saves a lot of time and money so worth doing when you have many TBs to delete. A: I found the other answers either incomplete or requiring external dependencies to be installed (like boto), so here is one that is inspired by those but goes a little deeper. As documented in Working with Delete Markers, before a versioned bucket can be removed, all its versions must be completely deleted, which is a 2-step process: "delete" all version objects in the bucket, which marks them as deleted but does not actually delete them complete the deletion by deleting all the deletion marker objects Here is the pure CLI solution that worked for me (inspired by the other answers): #!/usr/bin/env bash bucket_name=... del_s3_bucket_obj() { local bucket_name=$1 local obj_type=$2 local query="{Objects: $obj_type[].{Key:Key,VersionId:VersionId}}" local s3_objects=$(aws s3api list-object-versions --bucket ${bucket_name} --output=json --query="$query") if ! (echo $s3_objects | grep -q '"Objects": null'); then aws s3api delete-objects --bucket "${bucket_name}" --delete "$s3_objects" fi } del_s3_bucket_obj ${bucket_name} 'Versions' del_s3_bucket_obj ${bucket_name} 'DeleteMarkers' Once this is done, the following will work: aws s3 rb "s3://${bucket_name}" Not sure how it will fare with 1000+ objects though, if anyone can report that would be awesome. A: By far the easiest method I've found is to use this CLI tool, s3wipe. It's provided as a docker container so you can use it like so: $ docker run -it --rm slmingol/s3wipe --help usage: s3wipe [-h] --path PATH [--id ID] [--key KEY] [--dryrun] [--quiet] [--batchsize BATCHSIZE] [--maxqueue MAXQUEUE] [--maxthreads MAXTHREADS] [--delbucket] [--region REGION] Recursively delete all keys in an S3 path optional arguments: -h, --help show this help message and exit --path PATH S3 path to delete (e.g. s3://bucket/path) --id ID Your AWS access key ID --key KEY Your AWS secret access key --dryrun Don't delete. 
Print what we would have deleted --quiet Suprress all non-error output --batchsize BATCHSIZE # of keys to batch delete (default 100) --maxqueue MAXQUEUE Max size of deletion queue (default 10k) --maxthreads MAXTHREADS Max number of threads (default 100) --delbucket If S3 path is a bucket path, delete the bucket also --region REGION Region of target S3 bucket. Default vaue `us- east-1` Example Here's an example where I'm deleting all the versioned objects in a bucket and then deleting the bucket: $ docker run -it --rm slmingol/s3wipe \ --id $(aws configure get default.aws_access_key_id) \ --key $(aws configure get default.aws_secret_access_key) \ --path s3://bw-tf-backends-aws-example-logs \ --delbucket [2019-02-20@03:39:16] INFO: Deleting from bucket: bw-tf-backends-aws-example-logs, path: None [2019-02-20@03:39:16] INFO: Getting subdirs to feed to list threads [2019-02-20@03:39:18] INFO: Done deleting keys [2019-02-20@03:39:18] INFO: Bucket is empty. Attempting to remove bucket How it works There's a bit to unpack here but the above is doing the following: docker run -it --rm mikelorant/s3wipe - runs s3wipe container interactively and deletes it after each execution --id & --key - passing our access key and access id in aws configure get default.aws_access_key_id - retrieves our key id aws configure get default.aws_secret_access_key - retrieves our key secret --path s3://bw-tf-backends-aws-example-logs - bucket that we want to delete --delbucket - deletes bucket once emptied References https://github.com/slmingol/s3wipe Is there a way to export an AWS CLI Profile to Environment Variables? https://cloud.docker.com/u/slmingol/repository/docker/slmingol/s3wipe A: https://gist.github.com/wknapik/191619bfa650b8572115cd07197f3baf #!/usr/bin/env bash set -eEo pipefail shopt -s inherit_errexit >/dev/null 2>&1 || true if [[ ! "$#" -eq 2 || "$1" != --bucket ]]; then echo -e "USAGE: $(basename "$0") --bucket <bucket>" exit 2 fi # $@ := bucket_name empty_bucket() { local -r bucket="${1:?}" for object_type in Versions DeleteMarkers; do local opt=() next_token="" while [[ "$next_token" != null ]]; do page="$(aws s3api list-object-versions --bucket "$bucket" --output json --max-items 1000 "${opt[@]}" \ --query="[{Objects: ${object_type}[].{Key:Key, VersionId:VersionId}}, NextToken]")" objects="$(jq -r '.[0]' <<<"$page")" next_token="$(jq -r '.[1]' <<<"$page")" case "$(jq -r .Objects <<<"$objects")" in '[]'|null) break;; *) opt=(--starting-token "$next_token") aws s3api delete-objects --bucket "$bucket" --delete "$objects";; esac done done } empty_bucket "${2#s3://}" E.g. empty_bucket.sh --bucket foo This will delete all object versions and delete markers in a bucket in batches of 1000. Afterwards, the bucket can be deleted with aws s3 rb s3://foo. Requires bash, awscli and jq. A: This works for me. Maybe running later versions of something and above > 1000 items. been running a couple of million files now. However its still not finished after half a day and no means to validate in AWS GUI =/ # Set bucket name to clearout BUCKET = 'bucket-to-clear' import boto3 s3 = boto3.resource('s3') bucket = s3.Bucket(BUCKET) max_len = 1000 # max 1000 items at one req chunk_counter = 0 # just to keep track keys = [] # collect to delete # clear files def clearout(): global bucket global chunk_counter global keys result = bucket.delete_objects(Delete=dict(Objects=keys)) if result["ResponseMetadata"]["HTTPStatusCode"] != 200: print("Issue with response") print(result) chunk_counter += 1 keys = [] print(". 
{n} chunks so far".format(n=chunk_counter)) return # start for key in bucket.object_versions.all(): item = {'Key': key.object_key, 'VersionId': key.id} keys.append(item) if len(keys) >= max_len: clearout() # make sure last files are cleared as well if len(keys) > 0: clearout() print("") print("Done, {n} items deleted".format(n=chunk_counter*max_len)) #bucket.delete() #as per usual uncomment if you're sure! A: To add to python solutions provided here: if you are getting boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request error, try creating ~/.boto file with the following data: [Credentials] aws_access_key_id = aws_access_key_id aws_secret_access_key = aws_secret_access_key [s3] host=s3.eu-central-1.amazonaws.com aws_access_key_id = aws_access_key_id aws_secret_access_key = aws_secret_access_key Helped me to delete bucket in Frankfurt region. Original answer: https://stackoverflow.com/a/41200567/2586441 A: You can use aws-cli to delete s3 bucket aws s3 rb s3://your-bucket-name If aws cli is not installed in your computer you can your following commands: For Linux or ubuntu: sudo apt-get install aws-cli Then check it is installed or not by: aws --version Now configure it by providing aws-access-credentials aws configure Then give the access key and secret access key and your region A: If you use AWS SDK for JavaScript S3 Client for Node.js (@aws-sdk/client-s3), you can use following code: const { S3Client, ListObjectsCommand } = require('@aws-sdk/client-s3') const endpoint = 'YOUR_END_POINT' const region = 'YOUR_REGION' // Create an Amazon S3 service client object. const s3Client = new S3Client({ region, endpoint }) const deleteEverythingInBucket = async bucketName => { console.log('Deleting all object in the bucket') const bucketParams = { Bucket: bucketName } try { const command = new ListObjectsCommand(bucketParams) const data = await s3Client.send(command) console.log('Bucket Data', JSON.stringify(data)) if (data?.Contents?.length > 0) { console.log('Removing objects in the bucket', data.Contents.length) for (const object of data.Contents) { console.log('Removing object', object) if (object.Key) { try { await deleteFromS3({ Bucket: bucketName, Key: object.Key }) } catch (err) { console.log('Error on object delete', err) } } } } } catch (err) { console.log('Error creating presigned URL', err) } } A: For my case, I wanted to be sure that all objects for specific prefixes would be deleted. So, we generate a list of all objects for each prefix, divide it by 1k records (AWS limitation), and delete them. Please note that AWS CLI and jq must be installed and configured. A text file with prefixes that we want to delete was created (in the example below prefixes.txt). 
The format is: prefix1 prefix2 And this is a shell script (also please change the BUCKET_NAME with the real name): #!/bin/sh BUCKET="BUCKET_NAME" PREFIXES_FILE="prefixes.txt" if [ -f "$PREFIXES_FILE" ]; then while read -r current_prefix do printf '***** PREFIX %s *****\n' "$current_prefix" OLD_OBJECTS_FILE="$current_prefix-all.json" if [ -f "$OLD_OBJECTS_FILE" ]; then printf 'Deleted %s...\n' "$OLD_OBJECTS_FILE" rm "$OLD_OBJECTS_FILE" fi cmd="aws s3api list-object-versions --bucket \"$BUCKET\" --prefix \"$current_prefix/\" --query \"[Versions,DeleteMarkers][].{Key: Key, VersionId: VersionId}\" >> $OLD_OBJECTS_FILE" echo "$cmd" eval "$cmd" no_of_obj=$(cat "$OLD_OBJECTS_FILE" | jq 'length') i=0 page=0 #Get old version Objects echo "Objects versions count: $no_of_obj" while [ $i -lt "$no_of_obj" ] do next=$((i+999)) old_versions=$(cat "$OLD_OBJECTS_FILE" | jq '.[] | {Key,VersionId}' | jq -s '.' | jq .[$i:$next]) paged_file_name="$current_prefix-page-$page.json" cat << EOF > "$paged_file_name" {"Objects":$old_versions, "Quiet":true} EOF echo "Deleting records from $i - $next" cmd="aws s3api delete-objects --bucket \"$BUCKET\" --delete file://$paged_file_name" echo "$cmd" eval "$cmd" i=$((i+1000)) page=$((page+1)) done done < "$PREFIXES_FILE" else echo "$PREFIXES_FILE does not exist." fi If you want just to check the list of objects and don't delete them immediately - please comment/remove the last eval "$cmd". A: I needed to delete older object versions but keep the current version in the bucket. Code uses iterators, works on buckets of any size with any number of objects. import boto3 from itertools import islice bucket = boto3.resource('s3').Bucket('bucket_name')) all_versions = bucket.object_versions.all() stale_versions = iter(filter(lambda x: not x.is_latest, all_versions)) pages = iter(lambda: tuple(islice(stale_versions, 1000)), ()) for page in pages: bucket.delete_objects( Delete={ 'Objects': [{ 'Key': item.key, 'VersionId': item.version_id } for item in page] })
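The pure-CLI answers above make two passes (Versions, then DeleteMarkers) and run into the 1000-object limit of delete-objects. As a hedged sketch, here is the same loop expressed with boto3's low-level client, paginated and batched; the bucket name is a placeholder.

# Sketch: page through every version and delete marker, remove them in
# batches of at most 1000 (the delete_objects limit), then drop the bucket.
import boto3

BUCKET = "your-bucket-name"  # placeholder
s3 = boto3.client("s3")

for page in s3.get_paginator("list_object_versions").paginate(Bucket=BUCKET):
    objects = [
        {"Key": o["Key"], "VersionId": o["VersionId"]}
        for section in ("Versions", "DeleteMarkers")
        for o in page.get(section, [])
    ]
    for i in range(0, len(objects), 1000):
        s3.delete_objects(
            Bucket=BUCKET,
            Delete={"Objects": objects[i : i + 1000], "Quiet": True},
        )

s3.delete_bucket(Bucket=BUCKET)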
How do I delete a versioned bucket in AWS S3 using the CLI?
I have tried both s3cmd: $ s3cmd -r -f -v del s3://my-versioned-bucket/ And the AWS CLI: $ aws s3 rm s3://my-versioned-bucket/ --recursive But both of these commands simply add DELETE markers to S3. The command for removing a bucket also doesn't work (from the AWS CLI): $ aws s3 rb s3://my-versioned-bucket/ --force Cleaning up. Please wait... Completed 1 part(s) with ... file(s) remaining remove_bucket failed: s3://my-versioned-bucket/ A client error (BucketNotEmpty) occurred when calling the DeleteBucket operation: The bucket you tried to delete is not empty. You must delete all versions in the bucket. Ok... how? There's no information in their documentation for this. S3Cmd says it's a 'fully-featured' S3 command-line tool, but it makes no reference to versions other than its own. Is there any way to do this without using the web interface, which would take forever and require me to keep my laptop on?
[ "I ran into the same limitation of the AWS CLI. I found the easiest solution to be to use Python and boto3:\n#!/usr/bin/env python\n\nBUCKET = 'your-bucket-here'\n\nimport boto3\n\ns3 = boto3.resource('s3')\nbucket = s3.Bucket(BUCKET)\nbucket.object_versions.delete()\n\n# if you want to delete the now-empty bucket as well, uncomment this line:\n#bucket.delete()\n\nA previous version of this answer used boto but that solution had performance issues with large numbers of keys as Chuckles pointed out.\n", "Using boto3 it's even easier than with the proposed boto solution to delete all object versions in an S3 bucket:\n#!/usr/bin/env python\nimport boto3\n\ns3 = boto3.resource('s3')\nbucket = s3.Bucket('your-bucket-name')\nbucket.object_versions.all().delete()\n\nWorks fine also for very large amounts of object versions, although it might take some time in that case.\n", "You can delete all the objects in the versioned s3 bucket.\nBut I don't know how to delete specific objects.\n$ aws s3api delete-objects \\\n --bucket <value> \\\n --delete \"$(aws s3api list-object-versions \\\n --bucket <value> | \\\n jq '{Objects: [.Versions[] | {Key:.Key, VersionId : .VersionId}], Quiet: false}')\"\n\nAlternatively without jq:\n$ aws s3api delete-objects \\\n --bucket ${bucket_name} \\\n --delete \"$(aws s3api list-object-versions \\\n --bucket \"${bucket_name}\" \\\n --output=json \\\n --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')\"\n\n", "This two bash lines are enough for me to enable the bucket deletion !\n1: Delete objects\n\naws s3api delete-objects --bucket ${buckettoempty} --delete \"$(aws s3api list-object-versions --bucket ${buckettoempty} --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')\"\n\n2: Delete markers\n\naws s3api delete-objects --bucket ${buckettoempty} --delete \"$(aws s3api list-object-versions --bucket ${buckettoempty} --query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')\"\n\n", "Looks like as of now, there is an Empty button in the AWS S3 console.\n\nJust select your bucket and click on it. It will ask you to confirm your decision by typing permanently delete\nNote, this will not delete the bucket itself.\n", "Here is a one liner you can just cut and paste into the command line to delete all versions and delete markers (it requires aws tools, replace yourbucket-name-backup with your bucket name)\necho '#!/bin/bash' > deleteBucketScript.sh \\\n&& aws --output text s3api list-object-versions --bucket $BUCKET_TO_PERGE \\\n| grep -E \"^VERSIONS\" |\\\nawk '{print \"aws s3api delete-object --bucket $BUCKET_TO_PERGE --key \"$4\" --version-id \"$8\";\"}' >> \\\ndeleteBucketScript.sh && . deleteBucketScript.sh; rm -f deleteBucketScript.sh; echo '#!/bin/bash' > \\\ndeleteBucketScript.sh && aws --output text s3api list-object-versions --bucket $BUCKET_TO_PERGE \\\n| grep -E \"^DELETEMARKERS\" | grep -v \"null\" \\\n| awk '{print \"aws s3api delete-object --bucket $BUCKET_TO_PERGE --key \"$3\" --version-id \"$5\";\"}' >> \\\ndeleteBucketScript.sh && . 
deleteBucketScript.sh; rm -f deleteBucketScript.sh;\n\nthen you could use:\naws s3 rb s3://bucket-name --force\n", "For those using multiple profiles via ~/.aws/config\nimport boto3\n\nPROFILE = \"my_profile\"\nBUCKET = \"my_bucket\"\n\nsession = boto3.Session(profile_name = PROFILE)\ns3 = session.resource('s3')\nbucket = s3.Bucket(BUCKET)\nbucket.object_versions.delete()\n\n", "If you have to delete/empty large S3 buckets, it becomes quite inefficient (and expensive) to delete every single object and version. It's often more convenient to let AWS expire all objects and versions.\naws s3api put-bucket-lifecycle-configuration \\\n --lifecycle-configuration '{\"Rules\":[{\n \"ID\":\"empty-bucket\",\n \"Status\":\"Enabled\",\n \"Prefix\":\"\",\n \"Expiration\":{\"Days\":1},\n \"NoncurrentVersionExpiration\":{\"NoncurrentDays\":1}\n }]}' \\\n --bucket YOUR-BUCKET\n\nThen you just have to wait 1 day and the bucket can be deleted with:\naws s3api delete-bucket --bucket YOUR-BUCKET\n\n", "One way to do it is iterate through the versions and delete them. A bit tricky on the CLI, but as you mentioned Java, that would be more straightforward:\nAmazonS3Client s3 = new AmazonS3Client();\nString bucketName = \"deleteversions-\"+UUID.randomUUID();\n\n//Creates Bucket\ns3.createBucket(bucketName);\n\n//Enable Versioning\nBucketVersioningConfiguration configuration = new BucketVersioningConfiguration(ENABLED);\ns3.setBucketVersioningConfiguration(new SetBucketVersioningConfigurationRequest(bucketName, configuration ));\n\n//Puts versions\ns3.putObject(bucketName, \"some-key\",new ByteArrayInputStream(\"some-bytes\".getBytes()), null);\ns3.putObject(bucketName, \"some-key\",new ByteArrayInputStream(\"other-bytes\".getBytes()), null);\n\n//Removes all versions\nfor ( S3VersionSummary version : S3Versions.inBucket(s3, bucketName) ) {\n String key = version.getKey();\n String versionId = version.getVersionId(); \n s3.deleteVersion(bucketName, key, versionId);\n}\n\n//Removes the bucket\ns3.deleteBucket(bucketName);\nSystem.out.println(\"Done!\");\n\nYou can also batch delete calls for efficiency if needed.\n", "If you want pure CLI approach (with jq):\naws s3api list-object-versions \\\n --bucket $bucket \\\n --region $region \\\n --query \"Versions[].Key\" \\\n --output json | jq 'unique' | jq -r '.[]' | while read key; do\n echo \"deleting versions of $key\"\n aws s3api list-object-versions \\\n --bucket $bucket \\\n --region $region \\\n --prefix $key \\\n --query \"Versions[].VersionId\" \\\n --output json | jq 'unique' | jq -r '.[]' | while read version; do\n echo \"deleting $version\"\n aws s3api delete-object \\\n --bucket $bucket \\\n --key $key \\\n --version-id $version \\\n --region $region\n done\ndone \n\n", "Simple bash loop I've found and implemented for N buckets:\nfor b in $(ListOfBuckets); do \\\n echo \"Emptying $b\"; \\\n aws s3api delete-objects --bucket $b --delete \"$(aws s3api list-object-versions --bucket $b --output=json --query='{Objects: *[].{Key:Key,VersionId:VersionId}}')\"; \\\ndone\n\n", "\nFor deleting specify object(s), using jq filter.\nYou may need cleanup the 'DeleteMarkers' not just 'Versions'.\nUsing $() instead of ``, you may embed variables for bucket-name and key-value.\n\naws s3api delete-objects --bucket bucket-name --delete \"$(aws s3api list-object-versions --bucket bucket-name | jq -M '{Objects: [.[\"Versions\",\"DeleteMarkers\"][]|select(.Key == \"key-value\")| {Key:.Key, VersionId : .VersionId}], Quiet: false}')\"\n\n", "I ran into issues with Abe's solution as 
the list_buckets generator is used to create a massive list called all_keys and I spent an hour without it ever completing. This tweak seems to work better for me, I had close to a million objects in my bucket and counting!\nimport boto\n\ns3 = boto.connect_s3()\nbucket = s3.get_bucket(\"your-bucket-name-here\")\n\nchunk_counter = 0 #this is simply a nice to have\nkeys = []\nfor key in bucket.list_versions():\n keys.append(key)\n if len(keys) > 1000:\n bucket.delete_keys(keys)\n chunk_counter += 1\n keys = []\n print(\"Another 1000 done.... {n} chunks so far\".format(n=chunk_counter))\n\n#bucket.delete() #as per usual uncomment if you're sure!\n\nHopefully this helps anyone else encountering this S3 nightmare!\n", "Even though technically it's not AWS CLI, I'd recommend using AWS Tools for Powershell for this task. Then you can use the simple command as below:\nRemove-S3Bucket -BucketName {bucket-name} -DeleteBucketContent -Force -Region {region}\n\nAs stated in the documentation, DeleteBucketContent flag does the following:\n\n\"If set, all remaining objects and/or object versions in the bucket\nare deleted proir (sic) to the bucket itself being deleted\"\n\nReference: https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-S3Bucket.html\n", "This bash script found here: https://gist.github.com/weavenet/f40b09847ac17dd99d16\nworked as is for me. \nI saved script as: delete_all_versions.sh and then simply ran:\n./delete_all_versions.sh my_foobar_bucket\nand that worked without a flaw.\nDid not need python or boto or anything.\n", "You can do this from the AWS Console using Lifecycle Rules.\nOpen the bucket in question. Click the Management tab at the top. \nMake sure the Lifecycle Sub Tab is selected. \nClick + Add lifecycle rule\nOn Step 1 (Name and scope) enter a rule name (e.g. removeall)\nClick Next to Step 2 (Transitions) \nLeave this as is and click Next.\nYou are now on the 3. Expiration step. \nCheck the checkboxes for both Current Version and Previous Versions.\nClick the checkbox for \"Expire current version of object\" and enter the number 1 for \"After _____ days from object creation\nClick the checkbox for \"Permanently delete previous versions\" and enter the number 1 for \n\"After _____ days from becoming a previous version\"\nclick the checkbox for \"Clean up incomplete multipart uploads\"\n and enter the number 1 for \"After ____ days from start of upload\"\nClick Next\nReview what you just did.\nClick Save\nCome back in a day and see how it is doing. \n\n", "I improved the boto3 answer with Python3 and argv.\n\nSave the following script as something like s3_rm.py.\n\n#!/usr/bin/env python3\nimport sys\nimport boto3\n\ndef main():\n args = sys.argv[1:]\n if (len(args) < 1):\n print(\"Usage: {} s3_bucket_name\".format(sys.argv[0]))\n exit()\n\n s3 = boto3.resource('s3')\n bucket = s3.Bucket(args[0])\n bucket.object_versions.delete()\n\n # if you want to delete the now-empty bucket as well, uncomment this line:\n #bucket.delete()\n\nif __name__ == \"__main__\": \n main()\n\n\nAdd chmod +x s3_rm.py.\nRun the function like ./s3_rm.py my_bucket_name.\n\n", "In the same vein as https://stackoverflow.com/a/63613510/805031 ... 
this is what I use to clean up accounts before closing them:\n# If the data is too large, apply LCP to remove all objects within a day\n\n# Create lifecycle-expire.json with the LCP required to purge all objects\n# Based on instructions from: https://aws.amazon.com/premiumsupport/knowledge-center/s3-empty-bucket-lifecycle-rule/\ncat << JSON > lifecycle-expire.json\n{\n \"Rules\": [\n {\n \"ID\": \"remove-all-objects-asap\",\n \"Filter\": {\n \"Prefix\": \"\"\n },\n \"Status\": \"Enabled\",\n \"Expiration\": {\n \"Days\": 1\n },\n \"NoncurrentVersionExpiration\": {\n \"NoncurrentDays\": 1\n },\n \"AbortIncompleteMultipartUpload\": {\n \"DaysAfterInitiation\": 1\n }\n },\n {\n \"ID\": \"remove-expired-delete-markers\",\n \"Filter\": {\n \"Prefix\": \"\"\n },\n \"Status\": \"Enabled\",\n \"Expiration\": {\n \"ExpiredObjectDeleteMarker\": true\n }\n }\n ]\n}\nJSON\n\n# Apply to ALL buckets\naws s3 ls | cut -d\" \" -f 3 | xargs -I{} aws s3api put-bucket-lifecycle-configuration --bucket {} --lifecycle-configuration file://lifecycle-expire.json\n\n# Apply to a single bucket; replace $BUCKET_NAME\naws s3api put-bucket-lifecycle-configuration --bucket $BUCKET_NAME --lifecycle-configuration file://lifecycle-expire.json\n\n\n...then a day later you can come back and delete the buckets using something like:\n# To force empty/delete all buckets\naws s3 ls | cut -d\" \" -f 3 | xargs -I{} aws s3 rb s3://{} --force\n\n# To remove only empty buckets\naws s3 ls | cut -d\" \" -f 3 | xargs -I{} aws s3 rb s3://{}\n\n# To force empty/delete a single bucket; replace $BUCKET_NAME\naws s3 rb s3://$BUCKET_NAME --force\n\n\nIt saves a lot of time and money so worth doing when you have many TBs to delete.\n", "I found the other answers either incomplete or requiring external dependencies to be installed (like boto), so here is one that is inspired by those but goes a little deeper. \nAs documented in Working with Delete Markers, before a versioned bucket can be removed, all its versions must be completely deleted, which is a 2-step process: \n\n\"delete\" all version objects in the bucket, which marks them as\ndeleted but does not actually delete them \ncomplete the deletion by deleting all the deletion marker objects\n\nHere is the pure CLI solution that worked for me (inspired by the other answers): \n#!/usr/bin/env bash\n\nbucket_name=...\n\ndel_s3_bucket_obj()\n{\n local bucket_name=$1\n local obj_type=$2\n local query=\"{Objects: $obj_type[].{Key:Key,VersionId:VersionId}}\"\n local s3_objects=$(aws s3api list-object-versions --bucket ${bucket_name} --output=json --query=\"$query\")\n if ! (echo $s3_objects | grep -q '\"Objects\": null'); then\n aws s3api delete-objects --bucket \"${bucket_name}\" --delete \"$s3_objects\"\n fi\n}\n\ndel_s3_bucket_obj ${bucket_name} 'Versions'\ndel_s3_bucket_obj ${bucket_name} 'DeleteMarkers'\n\nOnce this is done, the following will work: \naws s3 rb \"s3://${bucket_name}\"\n\nNot sure how it will fare with 1000+ objects though, if anyone can report that would be awesome. \n", "By far the easiest method I've found is to use this CLI tool, s3wipe. It's provided as a docker container so you can use it like so:\n$ docker run -it --rm slmingol/s3wipe --help\nusage: s3wipe [-h] --path PATH [--id ID] [--key KEY] [--dryrun] [--quiet]\n [--batchsize BATCHSIZE] [--maxqueue MAXQUEUE]\n [--maxthreads MAXTHREADS] [--delbucket] [--region REGION]\n\nRecursively delete all keys in an S3 path\n\noptional arguments:\n -h, --help show this help message and exit\n --path PATH S3 path to delete (e.g. 
s3://bucket/path)\n --id ID Your AWS access key ID\n --key KEY Your AWS secret access key\n --dryrun Don't delete. Print what we would have deleted\n --quiet Suprress all non-error output\n --batchsize BATCHSIZE # of keys to batch delete (default 100)\n --maxqueue MAXQUEUE Max size of deletion queue (default 10k)\n --maxthreads MAXTHREADS Max number of threads (default 100)\n --delbucket If S3 path is a bucket path, delete the bucket also\n --region REGION Region of target S3 bucket. Default vaue `us-\n east-1`\n\nExample\nHere's an example where I'm deleting all the versioned objects in a bucket and then deleting the bucket:\n$ docker run -it --rm slmingol/s3wipe \\\n --id $(aws configure get default.aws_access_key_id) \\\n --key $(aws configure get default.aws_secret_access_key) \\\n --path s3://bw-tf-backends-aws-example-logs \\\n --delbucket\n[2019-02-20@03:39:16] INFO: Deleting from bucket: bw-tf-backends-aws-example-logs, path: None\n[2019-02-20@03:39:16] INFO: Getting subdirs to feed to list threads\n[2019-02-20@03:39:18] INFO: Done deleting keys\n[2019-02-20@03:39:18] INFO: Bucket is empty. Attempting to remove bucket\n\nHow it works\nThere's a bit to unpack here but the above is doing the following:\n\ndocker run -it --rm mikelorant/s3wipe - runs s3wipe container interactively and deletes it after each execution\n--id & --key - passing our access key and access id in\naws configure get default.aws_access_key_id - retrieves our key id\naws configure get default.aws_secret_access_key - retrieves our key secret\n--path s3://bw-tf-backends-aws-example-logs - bucket that we want to delete\n--delbucket - deletes bucket once emptied\n\nReferences\n\nhttps://github.com/slmingol/s3wipe\nIs there a way to export an AWS CLI Profile to Environment Variables?\nhttps://cloud.docker.com/u/slmingol/repository/docker/slmingol/s3wipe\n\n", "https://gist.github.com/wknapik/191619bfa650b8572115cd07197f3baf\n#!/usr/bin/env bash\n\nset -eEo pipefail\nshopt -s inherit_errexit >/dev/null 2>&1 || true\n\nif [[ ! \"$#\" -eq 2 || \"$1\" != --bucket ]]; then\n echo -e \"USAGE: $(basename \"$0\") --bucket <bucket>\"\n exit 2\nfi\n\n# $@ := bucket_name\nempty_bucket() {\n local -r bucket=\"${1:?}\"\n for object_type in Versions DeleteMarkers; do\n local opt=() next_token=\"\"\n while [[ \"$next_token\" != null ]]; do\n page=\"$(aws s3api list-object-versions --bucket \"$bucket\" --output json --max-items 1000 \"${opt[@]}\" \\\n --query=\"[{Objects: ${object_type}[].{Key:Key, VersionId:VersionId}}, NextToken]\")\"\n objects=\"$(jq -r '.[0]' <<<\"$page\")\"\n next_token=\"$(jq -r '.[1]' <<<\"$page\")\"\n case \"$(jq -r .Objects <<<\"$objects\")\" in\n '[]'|null) break;;\n *) opt=(--starting-token \"$next_token\")\n aws s3api delete-objects --bucket \"$bucket\" --delete \"$objects\";;\n esac\n done\n done\n}\n\nempty_bucket \"${2#s3://}\"\n\nE.g. empty_bucket.sh --bucket foo\nThis will delete all object versions and delete markers in a bucket in batches of 1000. Afterwards, the bucket can be deleted with aws s3 rb s3://foo.\nRequires bash, awscli and jq.\n", "This works for me. Maybe running later versions of something and above > 1000 items. been running a couple of million files now. 
However its still not finished after half a day and no means to validate in AWS GUI =/\n# Set bucket name to clearout\nBUCKET = 'bucket-to-clear'\n\nimport boto3\ns3 = boto3.resource('s3')\nbucket = s3.Bucket(BUCKET)\n\nmax_len = 1000 # max 1000 items at one req\nchunk_counter = 0 # just to keep track\nkeys = [] # collect to delete\n\n# clear files\ndef clearout():\n global bucket\n global chunk_counter\n global keys\n result = bucket.delete_objects(Delete=dict(Objects=keys))\n\n if result[\"ResponseMetadata\"][\"HTTPStatusCode\"] != 200:\n print(\"Issue with response\")\n print(result)\n\n chunk_counter += 1\n keys = []\n print(\". {n} chunks so far\".format(n=chunk_counter))\n return\n\n# start\nfor key in bucket.object_versions.all():\n item = {'Key': key.object_key, 'VersionId': key.id}\n keys.append(item)\n if len(keys) >= max_len:\n clearout()\n\n# make sure last files are cleared as well\nif len(keys) > 0:\n clearout()\n\nprint(\"\")\nprint(\"Done, {n} items deleted\".format(n=chunk_counter*max_len))\n#bucket.delete() #as per usual uncomment if you're sure!\n\n", "To add to python solutions provided here: if you are getting boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request error, try creating ~/.boto file with the following data:\n[Credentials]\naws_access_key_id = aws_access_key_id\naws_secret_access_key = aws_secret_access_key\n[s3]\nhost=s3.eu-central-1.amazonaws.com\naws_access_key_id = aws_access_key_id\naws_secret_access_key = aws_secret_access_key\n\nHelped me to delete bucket in Frankfurt region.\nOriginal answer: https://stackoverflow.com/a/41200567/2586441\n", "You can use aws-cli to delete s3 bucket\n\naws s3 rb s3://your-bucket-name\n\nIf aws cli is not installed in your computer you can your following commands:\nFor Linux or ubuntu:\n\nsudo apt-get install aws-cli\n\nThen check it is installed or not by:\n\naws --version\n\nNow configure it by providing aws-access-credentials\n\naws configure\n\nThen give the access key and secret access key and your region\n", "If you use AWS SDK for JavaScript S3 Client for Node.js (@aws-sdk/client-s3), you can use following code:\nconst { S3Client, ListObjectsCommand } = require('@aws-sdk/client-s3')\n\n\nconst endpoint = 'YOUR_END_POINT'\nconst region = 'YOUR_REGION'\n\n// Create an Amazon S3 service client object.\nconst s3Client = new S3Client({ region, endpoint })\n\nconst deleteEverythingInBucket = async bucketName => {\n console.log('Deleting all object in the bucket')\n\n const bucketParams = {\n Bucket: bucketName\n }\n\n try {\n const command = new ListObjectsCommand(bucketParams)\n const data = await s3Client.send(command)\n console.log('Bucket Data', JSON.stringify(data))\n if (data?.Contents?.length > 0) {\n console.log('Removing objects in the bucket', data.Contents.length)\n for (const object of data.Contents) {\n console.log('Removing object', object)\n if (object.Key) {\n try {\n await deleteFromS3({\n Bucket: bucketName,\n Key: object.Key\n })\n } catch (err) {\n console.log('Error on object delete', err)\n }\n }\n }\n }\n } catch (err) {\n console.log('Error creating presigned URL', err)\n }\n}\n\n", "For my case, I wanted to be sure that all objects for specific prefixes would be deleted. 
So, we generate a list of all objects for each prefix, divide it by 1k records (AWS limitation), and delete them.\nPlease note that AWS CLI and jq must be installed and configured.\nA text file with prefixes that we want to delete was created (in the example below prefixes.txt).\nThe format is:\nprefix1\nprefix2\n\nAnd this is a shell script (also please change the BUCKET_NAME with the real name):\n#!/bin/sh\n\nBUCKET=\"BUCKET_NAME\"\nPREFIXES_FILE=\"prefixes.txt\"\n\nif [ -f \"$PREFIXES_FILE\" ]; then\n while read -r current_prefix\n do\n printf '***** PREFIX %s *****\\n' \"$current_prefix\"\n\n OLD_OBJECTS_FILE=\"$current_prefix-all.json\"\n\n if [ -f \"$OLD_OBJECTS_FILE\" ]; then\n printf 'Deleted %s...\\n' \"$OLD_OBJECTS_FILE\"\n\n rm \"$OLD_OBJECTS_FILE\"\n fi\n\n cmd=\"aws s3api list-object-versions --bucket \\\"$BUCKET\\\" --prefix \\\"$current_prefix/\\\" --query \\\"[Versions,DeleteMarkers][].{Key: Key, VersionId: VersionId}\\\" >> $OLD_OBJECTS_FILE\"\n echo \"$cmd\"\n eval \"$cmd\"\n\n no_of_obj=$(cat \"$OLD_OBJECTS_FILE\" | jq 'length')\n i=0\n page=0\n\n #Get old version Objects\n echo \"Objects versions count: $no_of_obj\"\n\n while [ $i -lt \"$no_of_obj\" ]\n do\n next=$((i+999))\n old_versions=$(cat \"$OLD_OBJECTS_FILE\" | jq '.[] | {Key,VersionId}' | jq -s '.' | jq .[$i:$next])\n paged_file_name=\"$current_prefix-page-$page.json\"\n cat << EOF > \"$paged_file_name\"\n{\"Objects\":$old_versions, \"Quiet\":true}\nEOF\n echo \"Deleting records from $i - $next\"\n cmd=\"aws s3api delete-objects --bucket \\\"$BUCKET\\\" --delete file://$paged_file_name\"\n echo \"$cmd\"\n eval \"$cmd\"\n\n i=$((i+1000))\n page=$((page+1))\n done\n\n done < \"$PREFIXES_FILE\"\nelse\n echo \"$PREFIXES_FILE does not exist.\"\nfi\n\nIf you want just to check the list of objects and don't delete them immediately - please comment/remove the last eval \"$cmd\".\n", "I needed to delete older object versions but keep the current version in the bucket. Code uses iterators, works on buckets of any size with any number of objects.\nimport boto3\nfrom itertools import islice\n\nbucket = boto3.resource('s3').Bucket('bucket_name'))\nall_versions = bucket.object_versions.all()\nstale_versions = iter(filter(lambda x: not x.is_latest, all_versions))\npages = iter(lambda: tuple(islice(stale_versions, 1000)), ())\nfor page in pages:\n bucket.delete_objects(\n Delete={\n 'Objects': [{\n 'Key': item.key,\n 'VersionId': item.version_id\n } for item in page]\n })\n\n" ]
[ 129, 49, 42, 27, 17, 15, 12, 12, 11, 9, 8, 7, 7, 6, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "aws_cli", "command_line_interface", "s3cmd" ]
stackoverflow_0029809105_amazon_s3_amazon_web_services_aws_cli_command_line_interface_s3cmd.txt
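The lifecycle-expiration strategy from the record above can also be applied programmatically; a hedged boto3 sketch (bucket name is a placeholder), after which the bucket contents become eligible for removal in about a day:

# Sketch: apply a 1-day expiry rule to current versions, noncurrent versions,
# and incomplete multipart uploads, matching the CLI JSON shown above.
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="your-bucket-name",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "empty-bucket",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            "Expiration": {"Days": 1},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
        }]
    },
)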
Q: Manually generate GA linker parameter with gtag (GA4) Background To pass a client_id from one domain to another, Google supports adding a "linker" parameter to outgoing Links that are part of the cross-domain tracking setup. This linker parameter contains the client_id, session_id (I believe, information about Google Ads, e.g. gclid) and a basic fingerprint + timestamp. On the receiving domain, if the browser fingerprint matches and the timestamp is not too far in the past, the passed client_id and session_id are stored in a first party cookie on the 2nd domain and consequently used. analytics.js / GA-UA With analytics.js (GA-UA) you could easily do the following, to decorate URLs manually: function decorateUrl(urlString) { var ga = window[window['GoogleAnalyticsObject']]; var tracker; if (ga && typeof ga.getAll === 'function') { tracker = ga.getAll()[0]; // Uses the first tracker created on the page urlString = (new window.gaplugins.Linker(tracker)).decorate(urlString); } return urlString; } Yet, when only gtag is loaded, window.ga and window.gaplugins are not defined. As far as I see, there is currently no documented way to manually generate links with the linker parameter with gtag. In Google's documentation, they suggest setting up the linker manually. (https://support.google.com/analytics/answer/10071811?hl=en#zippy=%2Cmanual-setup) But this has several disadvantages, e.g. I have to create a custom "fingerprint" logic (so that decorated URLs are not shared) and e.g. Google Ads information is not included. Either way, I would like to use the internal gtag logic to decorate URLs. "Hacky" Workaround Solution gtag automatically decorates a tags (as soon as they're clicked) that lead to a cross-domain-tracking domain specified in the GA4 data stream settings (e.g. "test.com"), but I specifically need to decorate URLs manually (i.e. without immediately redirecting to them). I thought about doing the following: Create a dummy, hidden a tag with the URL to decorate Prevent redirection with onclick='event.preventDefault();' Simulate click on hidden element so that gtag automatically adds the linker url parameter to the href attribute Extract new href attribute Remove hidden element function decorateUrlGtag(urlString) { var tempAnchorEl = document.createElement("a"); tempAnchorEl.setAttribute("type", "hidden"); tempAnchorEl.setAttribute("href", urlString); tempAnchorEl.setAttribute("onclick", "event.preventDefault(); return false"); document.body.appendChild(tempAnchorEl); tempAnchorEl.click(); var urlWithLinker = tempAnchorEl.href; tempAnchorEl.remove(); return urlWithLinker; } This also does not work, because gtag does not seem to register the tempAnchorEl.click(); call. If I click the link manually, the URL is decorated - as expected. Suggested Solutions The solutions outlined here (Google Analytics gtag.js Manually adding the linker cross-domain parameter to URLs) also do not work for me: Answer: Even after gtag is initiated, I do not see a global ga element Answer: Same problem (no ga defined) Do you (1) know if there is a way to generate the linker parameter manually with gtag that I have overlooked, (2) know how to make my "hacky" solution work or (3) have another possible solution? 
A: I haven't grokked this solution, and I am not sure it answers your question directly, but Simo does give an outline of how to configure GA4 cross domain tracking here: https://www.simoahava.com/gtm-tips/cross-domain-tracking-google-analytics-4/#how-to-configure-cross-domain-tracking-manually He breaks the problem down into steps but does not go into great detail. He provides one code snippet: "...you could also load the URL parameter values directly into the GA4 configuration with something like: gtag('config', 'G-12345', { // Namespace roll-up trackers cookie_prefix: 'roll-up', // Pull in the Client ID from the URL client_id: (new URLSearchParams(document.location.search)).get('client_id'), // Pull in the Session ID from the URL session_id: (new URLSearchParams(document.location.search)).get('session_id') }); " Hope that helps!
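For completeness, the sending side of that manual setup could look something like the sketch below. This is an assumption-laden sketch, not gtag's internal linker: it relies on the documented gtag('get', ...) API for reading client_id and session_id, and 'G-12345', decorateUrlManually, and the client_id/session_id parameter names are placeholders that must match the receiving-side config above.
function decorateUrlManually(urlString, callback) {
  gtag('get', 'G-12345', 'client_id', function (clientId) {
    gtag('get', 'G-12345', 'session_id', function (sessionId) {
      var url = new URL(urlString);
      url.searchParams.set('client_id', clientId);
      url.searchParams.set('session_id', sessionId);
      callback(url.toString());
    });
  });
}
Note that, unlike the native _gl linker, this carries no fingerprint or timestamp, which is exactly the drawback the question mentions for manual setups.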
Manually generate GA linker parameter with gtag (GA4)
Background To pass a client_id from one domain to another, Google supports adding a "linker" parameter to outgoing Links that are part of the cross-domain tracking setup. This linker parameter contains the client_id, session_id (I believe, information about Google Ads, e.g. gclid) and a basic fingerprint + timestamp. On the receiving domain, if the browser fingerprint matches and the timestamp is not too far in the past, the passed client_id and session_id are stored in a first party cookie on the 2nd domain and consequently used. analytics.js / GA-UA With analytics.js (GA-UA) you could easily do the following, to decorate URLs manually: function decorateUrl(urlString) { var ga = window[window['GoogleAnalyticsObject']]; var tracker; if (ga && typeof ga.getAll === 'function') { tracker = ga.getAll()[0]; // Uses the first tracker created on the page urlString = (new window.gaplugins.Linker(tracker)).decorate(urlString); } return urlString; } Yet, when only gtag is loaded, window.ga and window.gaplugins are not defined. As far as I see, there is currently no documented way to manually generate links with the linker parameter with gtag. In Google's documentation, they suggest setting up the linker manually. (https://support.google.com/analytics/answer/10071811?hl=en#zippy=%2Cmanual-setup) But this has several disadvantages, e.g. I have to create a custom "fingerprint" logic (so that decorated URLs are not shared) and e.g. Google Ads information is not included. Either way, I would like to use the internal gtag logic to decorate URLs. "Hacky" Workaround Solution gtag automatically decorates a tags (as soon as they're clicked) that lead to a cross-domain-tracking domain specified in the GA4 data stream settings (e.g. "test.com"), but I specifically need to decorate URLs manually (i.e. without immediately redirecting to them). I thought about doing the following: Create a dummy, hidden a tag with the URL to decorate Prevent redirection with onclick='event.preventDefault();' Simulate click on hidden element so that gtag automatically adds the linker url parameter to the href attribute Extract new href attribute Remove hidden element function decorateUrlGtag(urlString) { var tempAnchorEl = document.createElement("a"); tempAnchorEl.setAttribute("type", "hidden"); tempAnchorEl.setAttribute("href", urlString); tempAnchorEl.setAttribute("onclick", "event.preventDefault(); return false"); document.body.appendChild(tempAnchorEl); tempAnchorEl.click(); var urlWithLinker = tempAnchorEl.href; tempAnchorEl.remove(); return urlWithLinker; } This also does not work, because gtag does not seem to register the tempAnchorEl.click(); call. If I click the link manually, the URL is decorated - as expected. Suggested Solutions The solutions outlined here (Google Analytics gtag.js Manually adding the linker cross-domain parameter to URLs) also do not work for me: Answer: Even after gtag is initiated, I do not see a global ga element Answer: Same problem (no ga defined) Do you (1) know if there is a way to generate the linker parameter manually with gtag that I have overlooked, (2) know how to make my "hacky" solution work or (3) have another possible solution?
[ "I haven't grokked this solution, and I am not sure it answers your question directly, but Simo does give an outline of how to configure GA4 cross domain tracking here:\nhttps://www.simoahava.com/gtm-tips/cross-domain-tracking-google-analytics-4/#how-to-configure-cross-domain-tracking-manually\nHe breaks the problem down into steps but does not go into great detail. He provides one code snippet:\n\"...you could also load the URL parameter values directly into the GA4 configuration with something like:\ngtag('config', 'G-12345', {\n // Namespace roll-up trackers\n cookie_prefix: 'roll-up',\n // Pull in the Client ID from the URL\n client_id: (new URLSearchParams(document.location.search)).get('client_id'),\n // Pull in the Session ID from the URL\n session_id: (new URLSearchParams(document.location.search)).get('session_id') \n});\n\n\"\nHope that helps!\n" ]
[ 0 ]
[]
[]
[ "google_analytics", "google_tag_manager", "javascript" ]
stackoverflow_0073066384_google_analytics_google_tag_manager_javascript.txt
Q: How can I use an element to hover other elements that are in front of the hovered element Elements A and B are next to each other in the same parent element. Now I want element A to also be affected when I hover over element B. .child-a, .child-b { height: 200px; width: 200px; } .child-a { background-color: brown; } .child-b { background-color: blue; } .parent .child-b:hover, /* Works */ .parent .child-b:hover + .child-a { /* Doesn't work*/ background-color: black; } <div class="parent"> <div class="child-a"> /* should be affected when i hover on element B*/ A </div> <div class="child-b"> B </div> </div> A: In a future world, you could use the CSS :has selector to check whether the parent element has a hovered element like this: .parent:has(.child-b:hover) .child-a Unfortunately, this is currently still unsupported in Firefox without a flag, so the only other way you could use is by checking whether the user hovers over the parent element with .parent:hover and then check whether the child a is also hovered, like this: .parent:hover .child-a:not(:hover). Now, the hover effect would only apply if you're hovering over the parent element or the child-b element, but not over the child-a element.
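Put together, a minimal sketch of both approaches, assuming the exact markup from the question (class names .parent, .child-a, .child-b):
/* Modern approach: requires :has() support in the browser */
.parent:has(.child-b:hover) .child-a {
  background-color: black;
}

/* Fallback: fires when the parent is hovered anywhere except over child-a itself */
.parent:hover .child-a:not(:hover) {
  background-color: black;
}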
How can I use an element to hover other elements that are in front of the hovered element
Elements A and B are next to each other in the same parent element. Now I want element A to also be affected when I hover over element B. .child-a, .child-b { height: 200px; width: 200px; } .child-a { background-color: brown; } .child-b { background-color: blue; } .parent .child-b:hover, /* Works */ .parent .child-b:hover + .child-a { /* Doesn't work*/ background-color: black; } <div class="parent"> <div class="child-a"> /* should be affected when i hover on element B*/ A </div> <div class="child-b"> B </div> </div>
[ "In a future world, you could use the CSS :has selector to check whether the parent element has a hovered element like this: .parent:has(.child-b:hover) .child-a\nUnfortunately, this is currently still unsupported in Firefox without a flag, so the only other way you could use is by checking whether the user hovers over the parent element with .parent:hover and then check whether the child a is also hovered, like this: .parent:hover .child-a:not(:hover). Now, the hover effect would only apply if you're hovering over the parent element or the child-b element, but not over the child-a element.\n" ]
[ 0 ]
[]
[]
[ "css", "hover", "html" ]
stackoverflow_0074680188_css_hover_html.txt
Q: Method to insert object at index (LinkedList) I am trying to create a method that replaces a specific object in my linked list with another object. replaceAtIndex(object, index). I have no idea how to get a specified index from my linked list. Here is the code for my linked list class: public class CellList { public class cellNode{ private cellPhone phone; private cellNode next; //default null public cellNode() { phone = null; next = null; } //parametrized public cellNode(cellPhone phone, cellNode next) { this.phone = phone; this.next = next; } public cellNode(cellNode x) { this.phone = x.phone; this.next = x.next; } //Cloning protected Object clone() throws CloneNotSupportedException { cellNode x=new cellNode(this.phone,this.next); return x; } public cellPhone getPhone() { return phone; } public cellNode getNext() { return next; } public void setPhone(cellPhone phone) { this.phone = phone; } public void setNext(cellNode next) { this.next = next; } } private cellNode head; private int size; //default public CellList() { head=null; size=0; } //copy public CellList(CellList c) { this.head = c.head; this.size = c.size; } //Add a node at start public void addToStart(cellPhone c) { cellNode cn=new cellNode(c,head); head=cn; size++; } ` I tried this method but it only correctly replaces my elements if the index passed is less than 1. If I try at index 3 for example, it won't replace anything at all and show me the normal list. If I try an index that is higher than my size, it will throw the exception as expected. ` public void insertAtIndex(cellPhone c,int index) { if(index<0 || index>=size) { throw new NoSuchElementException("Out of boundary!!!"); } else { if(index==0) { addToStart(c); } else if(index>0 && index<size) { cellNode curr=head.next; cellNode prev=head; cellNode cn=new cellNode(c,head); int i=1; while(curr!=null) { if(i==index) { prev.next=cn; cn.next=curr; size++; i++; return; } prev=curr; curr=curr.next; } } } } ` A: You don't change i in the while (curr != null) loop. If you add i++, it looks like it ought to work.
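Building on that fix, here is a minimal corrected sketch of insertAtIndex (an assumption: it reuses the cellNode class, addToStart, and the NoSuchElementException import from the question; the unused second inner loop is dropped):
public void insertAtIndex(cellPhone c, int index) {
    if (index < 0 || index >= size) {
        throw new NoSuchElementException("Out of boundary!!!");
    }
    if (index == 0) {
        addToStart(c);
        return;
    }
    cellNode prev = head;
    cellNode curr = head.next;
    int i = 1;
    while (curr != null) {
        if (i == index) {
            cellNode cn = new cellNode(c, curr); // new node points at curr, not head
            prev.next = cn;
            size++;
            return;
        }
        prev = curr;
        curr = curr.next;
        i++; // advance the counter on every iteration, not only on a match
    }
}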
Method to insert object at index (LinkedList)
I am trying to create a method that replaces a specific object in my linked list with another object. replaceAtIndex(object, index). I have no idea how to get a specified index from my linked list. Here is the code for my linked list class: public class CellList { public class cellNode{ private cellPhone phone; private cellNode next; //default null public cellNode() { phone = null; next = null; } //parametrized public cellNode(cellPhone phone, cellNode next) { this.phone = phone; this.next = next; } public cellNode(cellNode x) { this.phone = x.phone; this.next = x.next; } //Cloning protected Object clone() throws CloneNotSupportedException { cellNode x=new cellNode(this.phone,this.next); return x; } public cellPhone getPhone() { return phone; } public cellNode getNext() { return next; } public void setPhone(cellPhone phone) { this.phone = phone; } public void setNext(cellNode next) { this.next = next; } } private cellNode head; private int size; //default public CellList() { head=null; size=0; } //copy public CellList(CellList c) { this.head = c.head; this.size = c.size; } //Add a node at start public void addToStart(cellPhone c) { cellNode cn=new cellNode(c,head); head=cn; size++; } ` I tried this method but it only correctly replaces my elements if the index passed is less than 1. If I try at index 3 for example, it won't replace anything at all and show me the normal list. If I try an index that is higher than my size, it will throw the exception as expected. ` public void insertAtIndex(cellPhone c,int index) { if(index<0 || index>=size) { throw new NoSuchElementException("Out of boundary!!!"); } else { if(index==0) { addToStart(c); } else if(index>0 && index<size) { cellNode curr=head.next; cellNode prev=head; cellNode cn=new cellNode(c,head); int i=1; while(curr!=null) { if(i==index) { prev.next=cn; cn.next=curr; size++; i++; return; } prev=curr; curr=curr.next; } } } } `
[ "You don't change i in the while (curr != null) loop. If you add i++, it looks like it ought to work.\n" ]
[ 1 ]
[]
[]
[ "java", "linked_list", "object" ]
stackoverflow_0074680166_java_linked_list_object.txt
Q: Which identifier am I supposed to use to close my library in R7RS/Scheme? I'm trying to write an R7RS library which will reverse a list in a destructive manner. I currently have written this code: #lang r7rs (define-library (in-place-reverse!) (export reverse!) (import (scheme base)) (ignore the code below) ; (define (reverse! list) ; (if (null? list) ; '() ; (append (reverse (cdr list)) ; (list (car list))))))) (begin (define (reverse! lst) (define (reverse-hulp! prev cur) (cond ((null? cur) prev) ((next) cdr cur) ((set-cdr! cur prev)) (else (reverse-hulp! cur next)))))) (reverse! '() lst)) The only thing I'm worried about is the error I'm receiving. define-library: expected one of these identifiers: `import', `export', `begin', `cond-expand', or `include' parsing context: while parsing library clause in: reverse! Is my code even working as I'm asking it to? Any kind of advice is appreciated! Tried to implement a reverse procedure as I would in R5RS, only this time I'm using destructive operators. A: Your error message doesn't match your code. When I run your example I get the error: define-library: expected one of these identifiers: `import', `export', `begin', `cond-expand', or `include' parsing context: while parsing library clause in: ignore If I comment out the ignore form, I get your error and the expression (reverse! '() lst) is highlighted. The problem is that the expression is outside the begin. Also, the intention was probably to start the process in reverse!. #lang r7rs (define-library (in-place-reverse!) (export reverse!) (import (scheme base)) (begin (define (reverse! lst) (define (reverse-hulp! prev cur) (cond ((null? cur) prev) ((next) cdr cur) ((set-cdr! cur prev)) (else (reverse-hulp! cur next)))) (reverse! '() lst)))) Then you need to figure out what to do about next.
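To close the loop on next, one possible completion is sketched below (an assumption, not the only design: a two-accumulator walk that captures the cdr before mutating it):
(define (reverse! lst)
  (let loop ((prev '()) (cur lst))
    (if (null? cur)
        prev
        (let ((next (cdr cur))) ; capture before set-cdr! overwrites it
          (set-cdr! cur prev)
          (loop next cur)))))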
Which identifier am I supposed to use to close my library in R7RS/Scheme?
I'm trying to write an R7RS library which will reverse a list in a destructive manner. I currently have written this code: #lang r7rs (define-library (in-place-reverse!) (export reverse!) (import (scheme base)) (ignore the code below) ; (define (reverse! list) ; (if (null? list) ; '() ; (append (reverse (cdr list)) ; (list (car list))))))) (begin (define (reverse! lst) (define (reverse-hulp! prev cur) (cond ((null? cur) prev) ((next) cdr cur) ((set-cdr! cur prev)) (else (reverse-hulp! cur next)))))) (reverse! '() lst)) The only thing I'm worried about is the error I'm receiving. define-library: expected one of these identifiers: `import', `export', `begin', `cond-expand', or `include' parsing context: while parsing library clause in: reverse! Is my code even working as I'm asking it to? Any kind of advice is appreciated! Tried to implement a reverse procedure as I would in R5RS, only this time I'm using destructive operators.
[ "Your error message doesn't match your code.\nWhen I run your example I get the error:\ndefine-library: expected one of these identifiers: `import', `export', `begin', `cond-expand', or `include'\n parsing context: \n while parsing library clause in: ignore\n\nIf I comment out the ignore form, I get your error and the expression (reverse! '() lst) is highlighted.\nThe problem is that the expression is outside the begin.\nAlso, the intention was probably to start the process in reverse!.\n#lang r7rs\n\n\n(define-library (in-place-reverse!)\n (export reverse!)\n (import (scheme base))\n\n (begin\n \n (define (reverse! lst)\n (define (reverse-hulp! prev cur)\n (cond\n ((null? cur) prev)\n ((next) cdr cur)\n ((set-cdr! cur prev))\n (else\n (reverse-hulp! cur next))))\n (reverse! '() lst))))\n\nThen you need to figure out what to do about next.\n" ]
[ 1 ]
[]
[]
[ "r5rs", "r7rs", "racket", "scheme" ]
stackoverflow_0074679810_r5rs_r7rs_racket_scheme.txt
Q: Calculating sensitivity indices using R I am currently working on a class project where I am supposed to analyze some results in a few select papers. I want to calculate the sensitivity indices of the parameters with respect to the basic reproduction number (R0) using the formula S(p)=(p/R0)*(∂R0/∂p) where p is a parameter and S(p) is the sensitivity index of p. I have already calculated the indices manually, but I was wondering if there is a way to automate this process using R. The formula for R0 and the parameter values are given below. beta_s = 0.274, alpha_a = 0.4775, alpha_u = 0.695, mu = 0.062, q_i = 0.078, gamma_a = 0.29, 1/eta_i = 0.009, 1/eta_u = 0.05 R0 = (beta_s*alpha_a)/(gamma_a+mu) + (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu)) It would be great if someone could help me with finding the sensitivity indices using R. Thanks a lot for your time! Edited code based on @epsilonG's answer beta_s = 0.274 alpha_a = 0.4775 alpha_u = 0.695 mu = 0.062 q_i = 0.078 gamma_a = 0.29 eta_i = 1/0.009 eta_u = 1/0.05 R0 = (beta_s*alpha_a)/(gamma_a+mu) + (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu)) # Create function func <- expression((beta_s*alpha_a)/(gamma_a+mu) + (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu))) # Calculate the derivative of R0 to alpha_a dalpha_a <- deriv(func, 'alpha_a') val <- eval(dalpha_a) # Calculate the sensitivity index of alpha_a S_alpha_a <- (alpha_a/R0)*val A: # Create function func <- expression((beta_s*alpha_a)/(gamma_a+mu) + (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu))) # Calculate the derivative of R0 to beta_s dbeta_s <- deriv(func , 'beta_s') val <- eval(dbeta_s) # Calculate the sensitivity index of beta_s s_beta_s <- beta_s/R0 * val You may use a for loop to do this for all the parameters.
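As a sketch of that loop (assumptions: eta_u enters R0 as a rate, so eta_u <- 1/0.05 from the given 1/eta_u = 0.05, and D() is used for the symbolic partials instead of deriv()):
params <- c(beta_s = 0.274, alpha_a = 0.4775, alpha_u = 0.695,
            mu = 0.062, q_i = 0.078, gamma_a = 0.29, eta_u = 1/0.05)
R0_expr <- expression((beta_s*alpha_a)/(gamma_a+mu) +
  (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu)))
env <- as.list(params)
R0 <- eval(R0_expr[[1]], env)
sens <- sapply(names(params), function(p) {
  d <- D(R0_expr[[1]], p)            # symbolic partial derivative dR0/dp
  (params[[p]] / R0) * eval(d, env)  # elasticity S(p) = (p/R0)*(dR0/dp)
})
sens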
Calculating sensitivity indices using R
I am currently working on a class project where I am supposed to analyze some results in a few select papers. I want to calculate the sensitivity indices of the parameters with respect to the basic reproduction number (R0) using the formula S(p)=(p/R0)*(∂R0/∂p) where p is a parameter and S(p) is the sensitivity index of p. I have already calculated the indices manually, but I was wondering if there is a way to automate this process using R. The formula for R0 and the parameter values are given below. beta_s = 0.274, alpha_a = 0.4775, alpha_u = 0.695, mu = 0.062, q_i = 0.078, gamma_a = 0.29, 1/eta_i = 0.009, 1/eta_u = 0.05 R0 = (beta_s*alpha_a)/(gamma_a+mu) + (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu)) It would be great if someone could help me with finding the sensitivity indices using R. Thanks a lot for your time! Edited code based on @epsilonG's answer beta_s = 0.274 alpha_a = 0.4775 alpha_u = 0.695 mu = 0.062 q_i = 0.078 gamma_a = 0.29 eta_i = 1/0.009 eta_u = 1/0.05 R0 = (beta_s*alpha_a)/(gamma_a+mu) + (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu)) # Create function func <- expression((beta_s*alpha_a)/(gamma_a+mu) + (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu))) # Calculate the derivative of R0 to alpha_a dalpha_a <- deriv(func, 'alpha_a') val <- eval(dalpha_a) # Calculate the sensitivity index of alpha_a S_alpha_a <- (alpha_a/R0)*val
[ "# Create function\nfunc <- expression((beta_s*alpha_a)/(gamma_a+mu) + \n (beta_s*alpha_u*gamma_a*(1-q_i))/((gamma_a+mu)*(eta_u+mu)))\n\n# Calculate the derivative of R0 to beta_s\ndbeta_s <- deriv(func , 'beta_s')\nval <- eval(dbeta_s)\n\n# Calculate the sensitivity index of beta_s\ns_beta_s <- beta_s/R0 * val \n\nYou may use a for loop to do this for all the parameters.\n" ]
[ 1 ]
[]
[]
[ "derivative", "odesensitivity", "r" ]
stackoverflow_0074679132_derivative_odesensitivity_r.txt
Q: Docker desktop: Files in container aren't affected by changes from host I ask you for your help with my problem. I use Docker Desktop on a MacBook with M1 Pro and Monterey (12.6.1). When I change some PHP file, the file in the container is affected only when I use chown or chmod commands with the same or different rights. Example of how it works: Changed a PHP file on local/volume. Used chown or chmod commands in the container terminal on the file. And now the file in the container is affected by the changes from local. Used image: debian:latest (11.x) Used packages: apache2.4, php7.4 (php7.4-fpm) and more. Used command for creating the volume: docker volume create --driver local --name debvol --opt type=nfs --opt device=${PWD} --opt o=bind Used docker run command: docker run -it -d -p xx:xx -p yy:yy --name deb --mount source=debvol,target=/var/www debian:latest Thank you for your help.
Docker desktop: Files in container aren't affected by changes from host
I ask you for your help with my problem. I use Docker Desktop on a MacBook with M1 Pro and Monterey (12.6.1). When I change some PHP file, the file in the container is affected only when I use chown or chmod commands with the same or different rights. Example of how it works: Changed a PHP file on local/volume. Used chown or chmod commands in the container terminal on the file. And now the file in the container is affected by the changes from local. Used image: debian:latest (11.x) Used packages: apache2.4, php7.4 (php7.4-fpm) and more. Used command for creating the volume: docker volume create --driver local --name debvol --opt type=nfs --opt device=${PWD} --opt o=bind Used docker run command: docker run -it -d -p xx:xx -p yy:yy --name deb --mount source=debvol,target=/var/www debian:latest Thank you for your help.
[]
[]
[ "It sounds like you are running into an issue with file permissions when using Docker with your Macbook.\nWhen using Docker, any changes made to files on your local machine will not automatically be reflected in the Docker container. This is because the files in the container are owned by a different user than the files on your local machine.\nTo fix this issue, you can try using the chown and chmod commands to change the ownership and permissions of the files in the container to match those on your local machine. This will allow the changes made on your local machine to be reflected in the container.\n" ]
[ -1 ]
[ "debian", "docker", "linux", "local", "macos" ]
stackoverflow_0074678737_debian_docker_linux_local_macos.txt
Q: How to show 2 or more duplicates with this array using java? I would like to display 2 or more duplicated customers using JOptionPane. It works if there is only 1 duplicate customer, but unfortunately the message dialog wasn't showing if there are 2 or more duplicated customers. Here is my code. public static void main(String[] args) { int number; number = Integer.parseInt(JOptionPane.showInputDialog("Enter the number of customers: ")); int[] one = new int[number]; int[] two = new int[number]; for (int i = 0; i < number; i++) { one[i] = Integer.parseInt(JOptionPane.showInputDialog("Customer number: ")); } int y = 0; for (int i = 0; i < one.length - 1; i++) { for (int w = i + 1; w < one.length; w++) { if (one[i] == one[w]) { two[y] = one[w]; y = y + 1; break; } } for (int p = 0; p < y - 1; p++) { if (one[p] == two[p - 1]) { y = y - 1; break; } } } if (y == 0) { JOptionPane.showMessageDialog(null, "\nHONEST CUSTOMERS"); } else if (y != 0) { JOptionPane.showMessageDialog(null, "Duplicates:"); for (int o = 0; o < y; o++) { JOptionPane.showMessageDialog(null, "Customer #" + two[o]); //jop.showMessageDialog(null, "Duplicates: Customer #" + two[l]); //} } } } } How can I show the message dialog if I want to show 2 or more duplicated customers? Thank you for the help. A: Break down your code into the following: A function that takes the two lists and returns a list of pairs of duplicates. A function that takes the list of duplicates and creates a string stating: "Duplicates: value1, value2..." Show message box. A: here is a solution for your problem using Collections: public static Integer[] getDuplicates(Integer[] list) { List<Integer> duplicates = new ArrayList<>(Arrays.asList(list)); Set<Integer> seen = new HashSet<>(Arrays.asList(list)); for (Integer i : seen) { duplicates.remove(i); } Set<Integer> uniqueDuplicate = new HashSet<>(duplicates); return uniqueDuplicate.toArray(Integer[]::new); } public static void main(String[] args) { int number; number = Integer.parseInt(JOptionPane.showInputDialog("Enter the number of customers: ")); Integer[] one = new Integer[number]; for (int i = 0; i < number; i++) { one[i] = Integer.parseInt(JOptionPane.showInputDialog("Customer number: ")); } Integer[] two = getDuplicates(one); if (two.length == 0) { JOptionPane.showMessageDialog(null, "\nHONEST CUSTOMERS"); } else { JOptionPane.showMessageDialog(null, "Duplicates:"); for (int o = 0; o < two.length; o++) { JOptionPane.showMessageDialog(null, "Customer #" + two[o]); // jop.showMessageDialog(null, "Duplicates: Customer #" + two[l]); // } } } }
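As a sketch of the first answer's breakdown (assuming plain int customer numbers as in the question, and a single dialog that lists every duplicate):
import javax.swing.JOptionPane;
import java.util.LinkedHashSet;
import java.util.Set;

static void showDuplicates(int[] customers) {
    Set<Integer> seen = new LinkedHashSet<>();
    Set<Integer> duplicates = new LinkedHashSet<>();
    for (int c : customers) {
        if (!seen.add(c)) { // add() returns false for repeats
            duplicates.add(c);
        }
    }
    if (duplicates.isEmpty()) {
        JOptionPane.showMessageDialog(null, "HONEST CUSTOMERS");
    } else {
        StringBuilder sb = new StringBuilder("Duplicates:");
        for (int d : duplicates) {
            sb.append(" Customer #").append(d);
        }
        JOptionPane.showMessageDialog(null, sb.toString());
    }
}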
How to show 2 or more duplicates with this array using java?
I would like to display 2 or more duplicated customers using JOptionPane. It works if there is only 1 duplicate customer, but unfortunately the message dialog wasn't showing if there are 2 or more duplicated customers. Here is my code. public static void main(String[] args) { int number; number = Integer.parseInt(JOptionPane.showInputDialog("Enter the number of customers: ")); int[] one = new int[number]; int[] two = new int[number]; for (int i = 0; i < number; i++) { one[i] = Integer.parseInt(JOptionPane.showInputDialog("Customer number: ")); } int y = 0; for (int i = 0; i < one.length - 1; i++) { for (int w = i + 1; w < one.length; w++) { if (one[i] == one[w]) { two[y] = one[w]; y = y + 1; break; } } for (int p = 0; p < y - 1; p++) { if (one[p] == two[p - 1]) { y = y - 1; break; } } } if (y == 0) { JOptionPane.showMessageDialog(null, "\nHONEST CUSTOMERS"); } else if (y != 0) { JOptionPane.showMessageDialog(null, "Duplicates:"); for (int o = 0; o < y; o++) { JOptionPane.showMessageDialog(null, "Customer #" + two[o]); //jop.showMessageDialog(null, "Duplicates: Customer #" + two[l]); //} } } } } How can I show the message dialog if I want to show 2 or more duplicated customers? Thank you for the help.
[ "Break down your code into the following:\n\nA function that takes the two lists and returns a list of pairs of duplicates.\nA function that takes the list of duplicates and creates a string stating: \"Duplicates: value1, value2...\"\nShow message box.\n\n", "here is a solution for your problem using Collections:\npublic static Integer[] getDuplicates(Integer[] list) {\n\n List<Integer> duplicates = new ArrayList<>(Arrays.asList(list));\n Set<Integer> seen = new HashSet<>(Arrays.asList(list));\n\n for (Integer i : seen) {\n duplicates.remove(i);\n }\n\n Set<Integer> uniqueDuplicate = new HashSet<>(duplicates);\n\n return uniqueDuplicate.toArray(Integer[]::new);\n\n}\n\npublic static void main(String[] args) {\n int number;\n\n number = Integer.parseInt(JOptionPane.showInputDialog(\"Enter the number of customers: \"));\n\n Integer[] one = new Integer[number];\n\n for (int i = 0; i < number; i++) {\n\n one[i] = Integer.parseInt(JOptionPane.showInputDialog(\"Customer number: \"));\n\n }\n\n Integer[] two = getDuplicates(one);\n\n if (two.length == 0) {\n JOptionPane.showMessageDialog(null, \"\\nHONEST CUSTOMERS\");\n } else {\n JOptionPane.showMessageDialog(null, \"Duplicates:\");\n for (int o = 0; o < two.length; o++) {\n JOptionPane.showMessageDialog(null, \"Customer #\" + two[o]);\n // jop.showMessageDialog(null, \"Duplicates: Customer #\" + two[l]);\n // }\n }\n }\n}\n" ]
[ 0, 0 ]
[]
[]
[ "java" ]
stackoverflow_0074677828_java.txt
Q: Links respond in second click On my WordPress web page, links respond on the second click. On the first click there is no response; on the second click the link responds. The video below shows everything. ONLY ON MOBILE <iframe width="560" height="315" src="https://video.hizliresim.com/embed/oJS8133tNj" title="hizliresim video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> I don't know what is going wrong.
Links respond in second click
On my WordPress web page, links respond on the second click. On the first click there is no response; on the second click the link responds. The video below shows everything. ONLY ON MOBILE <iframe width="560" height="315" src="https://video.hizliresim.com/embed/oJS8133tNj" title="hizliresim video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> I don't know what is going wrong.
[ "I would recommend trying the following steps:\n\nCheck for plugin and theme conflicts: If you have recently installed a new plugin or theme, this could be causing the issue with the links on your website. Try disabling all plugins and switching to the default WordPress theme to see if this fixes the problem.\n\nClear your browser cache: Sometimes, old or outdated data can cause issues with website links. Try clearing your browser cache and cookies to see if this fixes the problem.\n\nCheck for JavaScript errors: JavaScript errors can sometimes cause issues with links on a website. Try using a tool like the Chrome Developer Tools to check for JavaScript errors on your website and fix any that you find.\n\n\nIf none of these steps fix the problem, it is possible that there is a more complex issue with your website.\n" ]
[ 0 ]
[]
[]
[ "css", "wordpress", "wordpress_theming" ]
stackoverflow_0074680170_css_wordpress_wordpress_theming.txt
Q: How to delete all the rows in a table using Eloquent? My guess was to use the following syntax: MyModel::all()->delete(); But that did not work. I'm sure it's super simple, but I've searched for documentation on the subject and can't find it! A: The reason MyModel::all()->delete() doesn't work is because all() actually fires off the query and returns a collection of Eloquent objects. You can make use of the truncate method, this works for Laravel 4 and 5: MyModel::truncate(); That drops all rows from the table without logging individual row deletions. A: Laravel 5.2+ solution. Model::getQuery()->delete(); Just grab underlying builder with table name and do whatever. Couldn't be any tidier than that. Laravel 5.6 solution \App\Model::query()->delete(); A: You can use Model::truncate() if you disable foreign_key_checks (I assume you use MySQL). DB::statement("SET foreign_key_checks=0"); Model::truncate(); DB::statement("SET foreign_key_checks=1"); A: I've seen both methods been used in seed files. // Uncomment the below to wipe the table clean before populating DB::table('table_name')->truncate(); //or DB::table('table_name')->delete(); Even though you can not use the first one if you want to set foreign keys. Cannot truncate a table referenced in a foreign key constraint So it might be a good idea to use the second one. A: There is an indirect way: myModel:where('anyColumnName', 'like', '%%')->delete(); Example: User:where('id', 'like' '%%')->delete(); Laravel query builder information: https://laravel.com/docs/5.4/queries A: I wanted to add another option for those getting to this thread via Google. I needed to accomplish this, but wanted to retain my auto-increment value which truncate() resets. I also didn't want to use DB:: anything because I wanted to operate directly off of the model object. So, I went with this: Model::whereNotNull('id')->delete(); Obviously the column will have to actually exists, but in a standard, out-of-the-box Eloquent model, the id column exists and is never null. I don't know if this is the best choice, but it works for my purposes. A: simple solution: Mymodel::query()->delete(); A: I wasn't able to use Model::truncate() as it would error: SQLSTATE[42000]: Syntax error or access violation: 1701 Cannot truncate a table referenced in a foreign key constraint And unfortunately Model::delete() doesn't work (at least in Laravel 5.0): Non-static method Illuminate\Database\Eloquent\Model::delete() should not be called statically, assuming $this from incompatible context But this does work: (new Model)->newQuery()->delete() That will soft-delete all rows, if you have soft-delete set up. To fully delete all rows including soft-deleted ones you can change to this: (new Model)->newQueryWithoutScopes()->forceDelete() A: You can try this one-liner which preserves soft-deletes also: Model::whereRaw('1=1')->delete(); A: The best way for accomplishing this operation in Laravel 3 seems to be the use of the Fluent interface to truncate the table as shown below DB::query("TRUNCATE TABLE mytable"); A: The problem with truncate is that it implies an immediate commit, so if use it inside a transaction the risk is that you find the table empty. The best solution is to use delete MyModel::query()->delete(); A: MyModel::truncate(); in commade line : php artisan tinker then Post::truncate();
How to delete all the rows in a table using Eloquent?
My guess was to use the following syntax: MyModel::all()->delete(); But that did not work. I'm sure it's super simple, but I've searched for documentation on the subject and can't find it!
[ "The reason MyModel::all()->delete() doesn't work is because all() actually fires off the query and returns a collection of Eloquent objects. \nYou can make use of the truncate method, this works for Laravel 4 and 5:\nMyModel::truncate();\n\nThat drops all rows from the table without logging individual row deletions.\n", "Laravel 5.2+ solution.\nModel::getQuery()->delete();\n\nJust grab underlying builder with table name and do whatever.\nCouldn't be any tidier than that.\nLaravel 5.6 solution\n\\App\\Model::query()->delete();\n\n", "You can use Model::truncate() if you disable foreign_key_checks (I assume you use MySQL).\nDB::statement(\"SET foreign_key_checks=0\");\nModel::truncate();\nDB::statement(\"SET foreign_key_checks=1\");\n\n", "I've seen both methods been used in seed files.\n// Uncomment the below to wipe the table clean before populating\n\nDB::table('table_name')->truncate();\n\n//or\n\nDB::table('table_name')->delete();\n\nEven though you can not use the first one if you want to set foreign keys.\n\nCannot truncate a table referenced in a foreign key constraint\n\nSo it might be a good idea to use the second one.\n", "There is an indirect way:\nmyModel:where('anyColumnName', 'like', '%%')->delete();\n\nExample:\nUser:where('id', 'like' '%%')->delete();\n\nLaravel query builder information: https://laravel.com/docs/5.4/queries\n", "I wanted to add another option for those getting to this thread via Google. I needed to accomplish this, but wanted to retain my auto-increment value which truncate() resets. I also didn't want to use DB:: anything because I wanted to operate directly off of the model object. So, I went with this:\nModel::whereNotNull('id')->delete();\n\nObviously the column will have to actually exists, but in a standard, out-of-the-box Eloquent model, the id column exists and is never null. I don't know if this is the best choice, but it works for my purposes.\n", "simple solution:\n Mymodel::query()->delete();\n\n", "I wasn't able to use Model::truncate() as it would error:\n\nSQLSTATE[42000]: Syntax error or access violation: 1701 Cannot truncate a table referenced in a foreign key constraint\n\nAnd unfortunately Model::delete() doesn't work (at least in Laravel 5.0):\n\nNon-static method Illuminate\\Database\\Eloquent\\Model::delete() should not be called statically, assuming $this from incompatible context\n\nBut this does work:\n(new Model)->newQuery()->delete()\n\nThat will soft-delete all rows, if you have soft-delete set up. To fully delete all rows including soft-deleted ones you can change to this:\n(new Model)->newQueryWithoutScopes()->forceDelete()\n\n", "You can try this one-liner which preserves soft-deletes also:\nModel::whereRaw('1=1')->delete();\n\n", "The best way for accomplishing this operation in Laravel 3 seems to be the use of the Fluent interface to truncate the table as shown below\nDB::query(\"TRUNCATE TABLE mytable\");\n\n", "The problem with truncate is that it implies an immediate commit,\nso if use it inside a transaction the risk is that you find the table empty.\nThe best solution is to use delete\nMyModel::query()->delete();\n\n", "MyModel::truncate();\nin commade line :\nphp artisan tinker\nthen\nPost::truncate();\n" ]
[ 384, 114, 74, 50, 13, 12, 12, 9, 7, 4, 4, 0 ]
[ "In a similar vein to Travis vignon's answer, I required data from the eloquent model, and if conditions were correct, I needed to either delete or update the model. I wound up getting the minimum and maximum id field returned by my query (in case another field was added to the table that would meet my selection criteria) along with the original selection criteria to update the fields via one raw SQL query (as opposed to one eloquent query per object in the collection).\nI know the use of raw SQL violates Laravel's beautiful code philosophy, but it'd be hard to stomach possibly hundreds of queries in place of one.\n", "\nIn my case (Laravel 4.2) this deletes all rows, but does not truncate the table\n\nDB::table('your_table')->delete();\n\n" ]
[ -1, -1 ]
[ "eloquent", "laravel", "laravel_4" ]
stackoverflow_0015484404_eloquent_laravel_laravel_4.txt
Q: Get row when value is higher than a given row value in Pandas Sorry for the confusing title, I'm trying to figure out something that's a bit hard to explain. I have a dataframe that looks like this (link to csv) time value is_critical 0:00 1 false 0:01 9 true 0:02 2 false 0:03 4 false 0:04 6 true 0:05 5 false 0:06 1 false 0:07 4 false 0:08 8 true 0:09 7 false 0:10 10 false And I want to compute another dataframe with all the critical values and the date of when the value returned or surpassed the critical value. So the new dataframe would look something like this: time value return_to_critical 0:01 9 0:10 0:04 6 0:08 0:08 8 0:10 How can I do this? Thanks! A: It's a bit messy, and not very efficient but here's a solution: In [3]: df[df["is_critical"]].apply(lambda critical_row: df["time"][(df["time"] > critical_row["time"]) & (df["value"] >= critical_row["value"])].min(), axis=1) Out[3]: 1 0:10 4 0:08 8 0:10 dtype: object Works by first filtering out any non-critical rows, then applying a boolean expression to each row of that result: "values in the dataframe where the value is greater than or equal to the current value, and the time is greater than the current time" where "current" refers to each row in the filtered data. You can clean up a little: def time_of_return_to_critical(df, critical_row): mask = (df.time > critical_row.time) & (df.value >= critical_row.value) return df["time"][mask].min() df[df.is_critical].apply(lambda row: time_of_return_to_critical(df, row), axis=1) Note that the .min() is a brittle solution. You should convert the "time" column to a proper datetime or timestamp data type because right now it's only "working" as a string comparator.
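Following up on the brittleness note, here is a sketch that converts the column first (assumptions: is_critical is already boolean and the times look like "0:01", i.e. hours:minutes):
import pandas as pd

df["time"] = pd.to_timedelta(df["time"] + ":00")  # "0:01" -> 1 minute

def time_of_return_to_critical(df, critical_row):
    mask = (df.time > critical_row.time) & (df.value >= critical_row.value)
    return df["time"][mask].min()

result = df[df.is_critical].copy()
result["return_to_critical"] = result.apply(
    lambda row: time_of_return_to_critical(df, row), axis=1
)
print(result[["time", "value", "return_to_critical"]])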
Get row when value is higher than a given row value in Pandas
Sorry for the confusing title, I'm trying to figure out something that's a bit hard to explain. I have a dataframe that looks like this (link to csv) time value is_critical 0:00 1 false 0:01 9 true 0:02 2 false 0:03 4 false 0:04 6 true 0:05 5 false 0:06 1 false 0:07 4 false 0:08 8 true 0:09 7 false 0:10 10 false And I want to compute another dataframe with all the critical values and the date of when the value returned or surpassed the critical value. So the new dataframe would look something like this: time value return_to_critical 0:01 9 0:10 0:04 6 0:08 0:08 8 0:10 How can I do this? Thanks!
[ "It's a bit messy, and not very efficient but here's a solution:\nIn [3]: df[df[\"is_critical\"]].apply(lambda critical_row: df[\"time\"][(df[\"time\"] > critical_row[\"time\"]) & (df[\"value\"] >= critical_row[\"value\"])].min(), axis=1)\nOut[3]:\n1 0:10\n4 0:08\n8 0:10\ndtype: object\n\nWorks by first filtering out any non-critical rows, then applying a boolean expression to each row of that result: \"values in the dataframe where the value is greater than or equal to the current value, and the time is greater than the current time\" where \"current\" refers to each row in the filtered data.\nYou can clean up a little:\ndef time_of_return_to_critical(df, critical_row):\n mask = (df.time > critical_row.time) & (df.value >= critical_row.value)\n return df[\"time\"][mask].min()\n\n\ndf[df.is_critical].apply(lambda row: time_of_return_to_critical(df, row), axis=1)\n\nNote that the .min() is a brittle solution. You should convert the \"time\" column to a proper datetime or timestamp data type because right now it's only \"working\" as a string comparator.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074680131_pandas_python.txt
Q: add a clear button to the GUI which clears the output formatted by the user I am trying to clear the output displayed. When clicked, the CLEAR button should 'clear' or remove any text written in the Entry box and any text displayed on the Label. While attempting this on my own I tried using the delete method and del, both of which did not remove the output when the button is pressed. from tkinter import * import random def generate_name_box(): n = input_string_var.get() s = "" s += "+" + "-"*len(n) + "+\n" s += "|" + n + "|\n" s += "+" + "-"*len(n) + "+\n" name_string_var.set(s) def clear_name_box(): #MAIN #Generate holding structures for GUI root = Tk() mainframe = Frame(root) #Other variables font_colour = '#EE34A2' #Create the widgets and associated Vars name_string_var = StringVar() name_label = Label(mainframe, text = "", font = ("Courier", 50), textvariable=name_string_var, fg=font_colour) instruction_label = Label(mainframe, text="Enter your name", font=("Courier", 20)) greeting_button = Button(mainframe, text ="FORMAT", font=("Courier", 20), command=generate_name_box) clear_button = Button(mainframe, text="CLEAR", font=("Courier",20), command= clear_name_box) input_string_var = StringVar() input_entry = Entry(mainframe, textvariable=input_string_var) #Grid the widgets ############# root.minsize(450, 400) mainframe.grid(padx = 50, pady = 50) instruction_label.grid(row = 1, column = 1, sticky=W) input_entry.grid(row = 2, column = 1, sticky=W) greeting_button.grid(row = 3, column = 1, ipadx=55, ipady=10, sticky=W) clear_button.grid(row=4,column = 1, ipadx= 55, ipady= 20, sticky=W) name_label.grid(row = 5, column = 1, sticky=W) root.mainloop() A: To clear the text in the Entry widget, you can use the delete method and specify the indices of the characters that you want to delete. For example, to delete all the text in the Entry widget, you can use the following code: def clear_name_box(): input_entry.delete(0, 'end') This code will delete all the text from the beginning (index 0) to the end ('end') of the text in the Entry widget. To clear the text in the Label widget, set the StringVar that the Label is bound to via textvariable to an empty string; while a textvariable is attached, it is the variable, not the text option, that drives what the Label displays. For example: def clear_name_box(): input_entry.delete(0, 'end') name_string_var.set("") This code will clear the text in both the Entry and Label widgets when the clear_name_box function is called.
add a clear button to the GUI which clears the output formatted by the user
I am trying to clear the output displayed. When clicked, the CLEAR button should 'clear' or remove any text written in the Entry box and any text displayed on the Label. While attempting this on my own I tried using the delete method and del, both of which did not remove the output when the button is pressed. from tkinter import * import random def generate_name_box(): n = input_string_var.get() s = "" s += "+" + "-"*len(n) + "+\n" s += "|" + n + "|\n" s += "+" + "-"*len(n) + "+\n" name_string_var.set(s) def clear_name_box(): #MAIN #Generate holding structures for GUI root = Tk() mainframe = Frame(root) #Other variables font_colour = '#EE34A2' #Create the widgets and associated Vars name_string_var = StringVar() name_label = Label(mainframe, text = "", font = ("Courier", 50), textvariable=name_string_var, fg=font_colour) instruction_label = Label(mainframe, text="Enter your name", font=("Courier", 20)) greeting_button = Button(mainframe, text ="FORMAT", font=("Courier", 20), command=generate_name_box) clear_button = Button(mainframe, text="CLEAR", font=("Courier",20), command= clear_name_box) input_string_var = StringVar() input_entry = Entry(mainframe, textvariable=input_string_var) #Grid the widgets ############# root.minsize(450, 400) mainframe.grid(padx = 50, pady = 50) instruction_label.grid(row = 1, column = 1, sticky=W) input_entry.grid(row = 2, column = 1, sticky=W) greeting_button.grid(row = 3, column = 1, ipadx=55, ipady=10, sticky=W) clear_button.grid(row=4,column = 1, ipadx= 55, ipady= 20, sticky=W) name_label.grid(row = 5, column = 1, sticky=W) root.mainloop()
[ "To clear the text in the Entry widget, you can use the delete method and specify the indices of the characters that you want to delete. For example, to delete all the text in the Entry widget, you can use the following code:\ndef clear_name_box():\n input_entry.delete(0, 'end')\n\nThis code will delete all the text from the beginning (index 0) to the end ('end') of the text in the Entry widget.\nTo clear the text in the Label widget, set the StringVar that the Label is bound to via textvariable to an empty string; while a textvariable is attached, it is the variable, not the text option, that drives what the Label displays. For example:\ndef clear_name_box():\n input_entry.delete(0, 'end')\n name_string_var.set(\"\")\n\nThis code will clear the text in both the Entry and Label widgets when the clear_name_box function is called.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074680220_python.txt
Q: What does $_ bind to at the top level of pipelines, in Powershell? In Powershell, a pipeline can contain filters such as ForEach-Object: get-process | %{$_.name} Within the script block of the foreach filter, it's possible to use the $_ auto variable to refer to the current object of the script block. What does the $_ variable bind to when used at the top level of the pipeline? For example, the following doesn't work: "common" | get-verb -group $_ I would have thought that it was bound to the resulting object from the previous section of the pipeline -- in the above case, to the "common" string. I've been looking online for info about how this $_ is bound, but haven't found that type of info. Virtually all examples that I see use $_ within script blocks. For example, this doesn't answer this question: What does $_ mean in PowerShell? A: $_ is only valid inside scriptblocks. Note that -group is a powershell 7 only parameter. You can do something like this, but you have to specify something for the verb: 'comm' | get-verb -group { $_ + 'on' } -verb * Verb AliasPrefix Group Description ---- ----------- ----- ----------- Add a Common Adds a resource to a container, or attaches an item to another item Clear cl Common Removes all the resources from a container but does not delete the container Close cs Common Changes the state of a resource to make it inaccessible, unavailable, or unusable ...
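Concretely, the failed pipeline from the question could be rewritten by wrapping the call in ForEach-Object, so that $_ sits inside a script block (assuming PowerShell 7 for the -Group parameter):
'Common' | ForEach-Object { Get-Verb -Group $_ }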
What does $_ bind to at the top level of pipelines, in Powershell?
In Powershell, a pipeline can contain filters such as ForEach-Object: get-process | %{$_.name} Within the script block of the foreach filter, it's possible to use the $_ auto variable to refer to the current object of the script block. What does the $_ variable bind to when used at the top level of the pipeline? For example, the following doesn't work: "common" | get-verb -group $_ I would have thought that it was bound to the resulting object from the previous section of the pipeline -- in the above case, to the "common" string. I've been looking online for info about how this $_ is bound, but haven't found that type of info. Virtually all examples that I see use $_ within script blocks. For example, this doesn't answer this question: What does $_ mean in PowerShell?
[ "$_ is only valid inside scriptblocks. Note that -group is a powershell 7 only parameter. You can do something like this, but you have to specify something for the verb:\n'comm' | get-verb -group { $_ + 'on' } -verb *\n\n\nVerb AliasPrefix Group Description\n---- ----------- ----- -----------\nAdd a Common Adds a resource to a container, or attaches an item to another item\nClear cl Common Removes all the resources from a container but does not delete the container\nClose cs Common Changes the state of a resource to make it inaccessible, unavailable, or unusable\n...\n\n" ]
[ 1 ]
[]
[]
[ "automatic_variable", "pipeline", "powershell" ]
stackoverflow_0074680073_automatic_variable_pipeline_powershell.txt
Q: Django forms: how to use optional arguments in form class I'm building a Django web-app which has page create and edit functionality. I create the page and edit the pages using 2 arguments: page title and page contents. Since the edit and create code is very similar except that the edit code doesn't let you change the title of the page I want to make some code that can do both depending on the input. This is the current code I'm using right now. class createPageForm(forms.Form): page_name = forms.CharField() page_contents = forms.CharField(widget=forms.Textarea()) class editPageForm(forms.Form): page_name = forms.CharField(disabled=True) page_contents = forms.CharField(widget=forms.Textarea()) I know that if I wasn't using classes, but functions I could do something like this: def PageForm(forms.Form, disabled=False): page_name = forms.CharField(disabled=disabled) page_contents = forms.CharField(widget=forms.Textarea()) PageForm(disabled=True) PageForm(disabled=False) That is the kind of functionality I'm looking for^^ I tried the following: class PageForm(forms.Form): def __init__(self, disabled=False): self.page_name = forms.CharField(disabled=disabled) self.page_contents = forms.CharField(widget=forms.Textarea()) class PageForm(forms.Form, disabled=False): page_name = forms.CharField(disabled=disabled) page_contents = forms.CharField(widget=forms.Textarea()) Both didn't work and got different errors I couldn't get around. I was hoping someone could lead me in the right direction, since I'm not very familiar with classes. A: You can work with: class CreatePageForm(forms.Form): page_name = forms.CharField() page_contents = forms.CharField(widget=forms.Textarea()) def __init__(self, *args, disabled=False, **kwargs): super().__init__(*args, **kwargs) self.fields['page_name'].disabled = disabled and call it with: CreatePageForm(disabled=True) A: PrintOrderDetails(orderNum: 31, productName: "Red Mug", sellerName: "Gift Shop"); PrintOrderDetails(productName: "Red Mug", sellerName: "Gift Shop", orderNum: 31);
Django forms: how to use optional arguments in form class
I'm building a Django web-app which has page create and edit functionality. I create the page and edit the pages using 2 arguments: page title and page contents. Since the edit and create code is very similar except that the edit code doesn't let you change the title of the page I want to make some code that can do both depending on the input. This is the current code I'm using right now. class createPageForm(forms.Form): page_name = forms.CharField() page_contents = forms.CharField(widget=forms.Textarea()) class editPageForm(forms.Form): page_name = forms.CharField(disabled=True) page_contents = forms.CharField(widget=forms.Textarea()) I know that if I wasn't using classes, but functions I could do something like this: def PageForm(forms.Form, disabled=False): page_name = forms.CharField(disabled=disabled) page_contents = forms.CharField(widget=forms.Textarea()) PageForm(disabled=True) PageForm(disabled=False) That is the kind of functionality I'm looking for^^ I tried the following: class PageForm(forms.Form): def __init__(self, disabled=False): self.page_name = forms.CharField(disabled=disabled) self.page_contents = forms.CharField(widget=forms.Textarea()) class PageForm(forms.Form, disabled=False): page_name = forms.CharField(disabled=disabled) page_contents = forms.CharField(widget=forms.Textarea()) Both didn't work and got different errors I couldn't get around. I was hoping someone could lead me in the right direction, since I'm not very familiar with classes.
[ "You can work with:\nclass CreatePageForm(forms.Form):\n page_name = forms.CharField()\n page_contents = forms.CharField(widget=forms.Textarea())\n\n def __init__(self, *args, disabled=False, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['page_name'].disabled = disabled\nand call it with:\nCreatePageForm(disabled=True)\n\n", "PrintOrderDetails(orderNum: 31, productName: \"Red Mug\", sellerName: \"Gift Shop\");\nPrintOrderDetails(productName: \"Red Mug\", sellerName: \"Gift Shop\", orderNum: 31);\n" ]
[ 0, 0 ]
[]
[]
[ "django", "forms", "python" ]
stackoverflow_0074680120_django_forms_python.txt
Q: Will a SqlConnection() declared before using it in a using() statement still close the connection when done? (C#/SQL Server) In a situation like this: SqlConnection cn = new SqlConnection( ConfigurationManager .ConnectionStrings["AppConnection"] .ConnectionString ); using (cn) {...} Will the using() statement still close the connection even though it wasn't declared in the using() statement? Similar to this question: SqlConnection with a using clause - calling close on the connection Except I am wondering about using the actual connection, not a copy of it. A: Yes, but you shouldn't do it. The documentation for using says (all the way at the bottom): You can instantiate the resource object and then pass the variable to the using statement, but this isn't a best practice. In this case, after control leaves the using block, the object remains in scope but probably has no access to its unmanaged resources. In other words, it's not fully initialized anymore. If you try to use the object outside the using block, you risk causing an exception to be thrown. For this reason, it's better to instantiate the object in the using statement and limit its scope to the using block. var reader = new StringReader(manyLines); using (reader) { ... } // reader is in scope here, but has been disposed
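For contrast, here is the recommended shape, where the variable's scope matches its useful lifetime (a sketch using the connection string name from the question):
using (var cn = new SqlConnection(
    ConfigurationManager.ConnectionStrings["AppConnection"].ConnectionString))
{
    cn.Open();
    // ... run commands against cn ...
} // cn is disposed here; the underlying connection is closed/returned to the pool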
Will a SqlConnection() declared before using it in a using() statement still close the connection when done? (C#/SQL Server)
In a situation like this: SqlConnection cn = new SqlConnection( ConfigurationManager .ConnectionStrings["AppConnection"] .ConnectionString ); using (cn) {...} Will the using() statement still close the connection even though it wasn't declared in the using() statement? Similar to this question: SqlConnection with a using clause - calling close on the connection Except I am wondering about using the actual connection, not a copy of it.
[ "Yes, but you shouldn't do it.\nThe documentation for using says (all the way at the bottom):\n\nYou can instantiate the resource object and then pass the variable to the using statement, but this isn't a best practice. In this case, after control leaves the using block, the object remains in scope but probably has no access to its unmanaged resources. In other words, it's not fully initialized anymore. If you try to use the object outside the using block, you risk causing an exception to be thrown. For this reason, it's better to instantiate the object in the using statement and limit its scope to the using block.\nvar reader = new StringReader(manyLines);\nusing (reader)\n{\n ...\n}\n// reader is in scope here, but has been disposed\n\n\n" ]
[ 3 ]
[]
[]
[ "c#", "sql_server", "sqlcommand", "sqlconnection" ]
stackoverflow_0074680216_c#_sql_server_sqlcommand_sqlconnection.txt
Q: Find the number of commits in a Git repo when cloning with --depth=1 To find the number of commits on a git branch you can do: $ git rev-list --count HEAD 920 However, if you initially clone with --depth=1, that doesn't work: $ git clone https://github.com/ndmitchell/hoogle.git --depth=1 $ cd hoogle $ git rev-list --count HEAD 1 Is there any way to get the speed and reduced network traffic of a --depth=1 clone, but then also get the count of the number of commits? A: Is there any way to get the speed and reduced network traffic of a --depth=1 clone, but then also get the count of the number of commits? I'm pretty sure you can't. As you know, --depth=1 only retrieves the most recently pushed commit. That means when you clone with a depth of 1 you get 1 commit and only that single commit, with no history at all attached to it. As far as your local repository is concerned, there is no history, just this 1 commit. As is also mentioned in the docs --depth Create a shallow clone with a history truncated to the specified number of revisions. What I also find interesting that even if you'd check the origin $ git rev-list --count origin/master $ git log origin/master they'd both only show 1 commit, too. A: This may not be appropriate for all situations, but in certain use cases you may be able to convert your shallow clone to a full clone with (assuming Git 1.8.3+) as per How to convert a Git shallow clone to a full clone: git fetch --unshallow This will allow you to get the correct count as normal.
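Combining the accepted explanation with the --unshallow workaround, the whole flow could look like this (the network cost of the full history is simply deferred to the fetch step):
git clone --depth=1 https://github.com/ndmitchell/hoogle.git
cd hoogle
git fetch --unshallow # retrieve the remaining history
git rev-list --count HEAD # now reports the true count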
Find the number of commits in a Git repo when cloning with --depth=1
To find the number of commits on a git branch you can do: $ git rev-list --count HEAD 920 However, if you initially clone with --depth=1, that doesn't work: $ git clone https://github.com/ndmitchell/hoogle.git --depth=1 $ cd hoogle $ git rev-list --count HEAD 1 Is there any way to get the speed and reduced network traffic of a --depth=1 clone, but then also get the count of the number of commits?
[ "\nIs there any way to get the speed and reduced network traffic of a --depth=1 clone, but then also get the count of the number of commits?\n\nI'm pretty sure you can't.\nAs you know, --depth=1 only retrieves the most recently pushed commit. That means when you clone with a depth of 1 you get 1 commit and only that single commit, with no history at all attached to it. \nAs far as your local repository is concerned, there is no history, just this 1 commit.\nAs is also mentioned in the docs\n\n--depth \nCreate a shallow clone with a history truncated to the specified number of revisions.\n\nWhat I also find interesting that even if you'd check the origin\n$ git rev-list --count origin/master\n$ git log origin/master\n\nthey'd both only show 1 commit, too.\n", "This may not be appropriate for all situations, but in certain use cases you may be able to convert your shallow clone to a full clone with (assuming Git 1.8.3+) as per How to convert a Git shallow clone to a full clone:\ngit fetch --unshallow\n\nThis will allow you to get the correct count as normal.\n" ]
[ 4, 0 ]
[]
[]
[ "git" ]
stackoverflow_0032069186_git.txt
Q: How to prevent malformed HTML from breaking the DOM in an SSR-rendered React project? I'm trying to render content coming from content creators. Some of it is malformed and breaks the DOM; for example, some content includes unclosed tags, and there may be other causes I have not found yet. I need an implementation where malformed content does not affect the other parts of the DOM tree. I'm using SSR to render the content, so I have to find a solution on the server side. Do you have any suggestions? I tried to use shadow DOM, but it is not supported well on the server side. Correcting these HTML pages and preventing malformed HTML from being added could also be a viable solution; do you have any suggestions for validating HTML?
A: To prevent malformed HTML from breaking the DOM tree, one solution is to run the content through a parser that can handle invalid HTML and recover from errors before rendering. In the Node ecosystem, parse5 is a spec-compliant HTML5 parser that can handle a wide range of invalid HTML and recover from errors, so parsing and re-serializing the content yields well-formed markup that can be rendered correctly.
Here is an example of how you can use parse5 to repair malformed HTML content:
import React from "react";
import { parse, serialize } from "parse5";

const MyContent = ({ content }) => {
  // Parsing and re-serializing auto-corrects problems such as unclosed tags
  const html = serialize(parse(content));
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
};

In this example, we use the parse function from the parse5 library to parse the given content into a DOM tree, and the serialize function to turn that tree back into well-formed HTML. The repaired HTML string is then injected with dangerouslySetInnerHTML, which also works with React's server-side renderToString.
Another solution is to use a library like cheerio to parse the HTML content and clean it up before rendering it. cheerio is a fast, flexible, and lean implementation of the core jQuery API that can be used to manipulate and parse HTML content on the server. Here is an example of how you can use cheerio to clean up malformed HTML before rendering it with React:
import React from "react";
import cheerio from "cheerio";

const MyContent = ({ content }) => {
  const $ = cheerio.load(content);
  // Use cheerio to manipulate and clean up the HTML content
  // ...
  const html = $.html();
  // renderToString expects a React element, not an HTML string,
  // so inject the cleaned markup via dangerouslySetInnerHTML instead
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
};
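Since the content comes from third-party creators, sanitizing is worth combining with repair. A sketch using the sanitize-html package, which parses and re-serializes the input (so it also balances tags) while stripping anything outside an allow-list; the option values shown are just its defaults:
import sanitizeHtml from "sanitize-html";

const clean = sanitizeHtml(content, {
  allowedTags: sanitizeHtml.defaults.allowedTags,
  allowedAttributes: sanitizeHtml.defaults.allowedAttributes,
});
// `clean` is well-formed and safe to inject via dangerouslySetInnerHTML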
How to prevent malformed HTML from breaking the DOM in an SSR-rendered React project?
I'm trying to render content coming from content creators. Some of it is malformed and breaks the DOM; for example, some content includes unclosed tags, and there may be other causes I have not found yet. I need an implementation where malformed content does not affect the other parts of the DOM tree. I'm using SSR to render the content, so I have to find a solution on the server side. Do you have any suggestions? I tried to use shadow DOM, but it is not supported well on the server side. Correcting these HTML pages and preventing malformed HTML from being added could also be a viable solution; do you have any suggestions for validating HTML?
[ "To prevent malformed HTML from breaking the DOM tree, one solution is to use a parser that can handle invalid HTML and recover from errors. One such parser is the HTML5 Parser from the html5lib library. This parser can handle a wide range of invalid HTML and recover from errors, so that the resulting DOM tree is still well-formed and can be rendered correctly.\nHere is an example of how you can use the HTML5 Parser to parse and render malformed HTML content:\nimport React from \"react\";\nimport { parse as parseHtml } from \"html5lib\";\nimport { renderToString as render } from \"react-dom/server\";\n\nconst MyContent = ({ content }) => {\n const dom = parseHtml(content, { tree: true });\n return <div>{render(dom)}</div>;\n};\n\nIn this example, we are using the parseHtml function from the html5lib library to parse the given content and create a DOM tree. We then use the renderToString function from react-dom/server to convert the DOM tree to a string of HTML that can be rendered by React on the server.\nAnother solution is to use a library like cheerio to parse the HTML content and clean it up before rendering it. cheerio is a fast, flexible, and lean implementation of the core jQuery API that can be used to manipulate and parse HTML content on the server. Here is an example of how you can use cheerio to clean up malformed HTML before rendering it with React:\nimport React from \"react\";\nimport cheerio from \"cheerio\";\nimport { renderToString as render } from \"react-dom/server\";\n\nconst MyContent = ({ content }) => {\n const $ = cheerio.load(content);\n // Use cheerio to manipulate and clean up the HTML content\n // ...\n const html = $.html();\n return <div>{render(html)}</div>;\n};\n\n\n" ]
[ 0 ]
[]
[]
[ "html", "javascript", "reactjs", "sanitization", "server_side_rendering" ]
stackoverflow_0074680077_html_javascript_reactjs_sanitization_server_side_rendering.txt
Q: Vue3 script setup without all the imports from vue I find it very repetitive to have to import vue things like: import { ref, computed } from 'vue' In the script setup section. Would it be a bad practice to, let's say assign vue to a special character, like $ and then use it to access these like let drawer = $.ref(null); If so what would be the reasoning behind? A: You can use the experimental version of vue3: // vite.config.js export default { plugins: [ vue({ reactivityTransform: true }) ] } after that there is an auto import available and you don't have to write .value if using a $ref or $computed. Because $ref() is a macro and not a runtime API, it doesn't need to be imported from vue.
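For reference, with reactivityTransform enabled the example from the question becomes the following (a sketch; note that Reactivity Transform stayed experimental and was later dropped from Vue core, so check your Vue version before relying on it):
<script setup>
// $ref is a compiler macro, so no import from 'vue' is needed
let drawer = $ref(null)

// the compiler inserts .value for you, so plain assignment works
drawer = true
</script>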
Vue3 script setup without all the imports from vue
I find it very repetitive to have to import vue things like: import { ref, computed } from 'vue' In the script setup section. Would it be a bad practice to, let's say assign vue to a special character, like $ and then use it to access these like let drawer = $.ref(null); If so what would be the reasoning behind?
[ "You can use the experimental version of vue3:\n// vite.config.js\nexport default {\n plugins: [\n vue({\n reactivityTransform: true\n })\n ]\n}\n\nafter that there is an auto import available and you don't have to write .value if using a $ref or $computed.\n\nBecause $ref() is a macro and not a runtime API, it doesn't need to be imported from vue.\n\n" ]
[ 1 ]
[]
[]
[ "vue_composition_api", "vuejs3" ]
stackoverflow_0074678183_vue_composition_api_vuejs3.txt
Q: Exit methods ninjascript
I am trying to change
SetStopLoss(CalculationMode.Price, Close[0] + SL1 * TickSize);
to
ExitShortStopMarket(CalculationMode.Price, Close[0] - SL1 * TickSize);
but keep getting these errors. Can I not do this with CalculationMode? (A screenshot of the errors was attached.)
Trying to make the stops movable; I need to use exit methods instead of SetStopLoss.
A: ExitShortStopMarket does not take a CalculationMode argument. Unlike SetStopLoss, the Exit methods always work in price terms, so you pass the stop price itself rather than a calculation mode. Based on the NinjaScript documentation, the simplest overload takes just the stop price:
// Calculate the stop price from the current close and the SL1 offset
double stopPrice = Close[0] - SL1 * TickSize;

// Submit a stop-market order to exit the short position at that price
ExitShortStopMarket(stopPrice);

There are further overloads that also take a quantity and signal names if you need to tie the exit to a specific entry. Because the exit order is resubmitted whenever you call the method with a new price, calling it on each bar update is what makes the stop movable.
I hope this helps! Let me know if you have any other questions.
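As a sketch of how this makes the stop movable (method and property names as in the NinjaScript documentation, while the trailing logic itself is illustrative):
protected override void OnBarUpdate()
{
    if (Position.MarketPosition == MarketPosition.Short)
    {
        // resubmitting the exit order with a new price on each bar
        // effectively moves the stop
        ExitShortStopMarket(Close[0] - SL1 * TickSize);
    }
}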
Exit methods ninjascript
I am trying to change
SetStopLoss(CalculationMode.Price, Close[0] + SL1 * TickSize);
to
ExitShortStopMarket(CalculationMode.Price, Close[0] - SL1 * TickSize);
but keep getting these errors. Can I not do this with CalculationMode? (A screenshot of the errors was attached.)
Trying to make the stops movable; I need to use exit methods instead of SetStopLoss.
[ "It looks like you're trying to use the ExitShortStopMarket method, but you're passing it the wrong type of argument for its first parameter. The ExitShortStopMarket method expects an integer as its first argument, but you're trying to pass it a CalculationMode value.\nTo fix this error, you can either pass an integer value that represents the desired calculation mode to the ExitShortStopMarket method, or you can use a different method that accepts a CalculationMode value as its first argument.\nHere is an example of how you could modify your code to use the ExitShortStopMarket method correctly:\n// Set the calculation mode to use the closing price\nint calculationMode = (int) CalculationMode.Price;\n\n// Calculate the stop loss value using the closing price and the SL1 value\ndouble stopLossValue = Close[0] - SL1 * TickSize;\n\n// Use the ExitShortStopMarket method to set the stop loss\nExitShortStopMarket(calculationMode, stopLossValue);\n\nThis code will set the stop loss for a short position using the ExitShortStopMarket method, with the calculation mode set to use the closing price and the stop loss value calculated based on the closing price and the SL1 value.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "c#" ]
stackoverflow_0074680186_c#.txt
Q: Mgt-toolkit get with HTML attribute data-for doesn't work with React?
I need to display the values of the response, but React doesn't support the data-for HTML attribute. When I use the template from https://learn.microsoft.com/en-us/graph/toolkit/components/get I get the error: unexpected variable email. See my implementation below. I can't use {{email.subject}} in this case.
import React, { useState, useEffect, useDebugValue } from 'react';
import { Get } from '@microsoft/mgt-react';
import { useAuth0 } from '@auth0/auth0-react';

function GetMessage() {
    const { isAuthenticated } = useAuth0();
    const value = <Get></Get>;
    console.log(value);

    return (
        isAuthenticated && (
            <mgt-get resource="/me/messages" version="beta" scopes="mail.read" max-pages="2">
                <template>
                    <div class="email" data-for="email in value">
                        <h3>{{ email.subject }}</h3>
                        <h4>
                            <mgt-person person-query="{{email.sender.emailAddress.address}}" view="oneline" person-card="hover"></mgt-person>
                        </h4>
                        <div data-if="email.bodyPreview" class="preview" innerHtml>{{email.bodyPreview}}</div>
                        <div data-else class="preview">email body is empty</div>
                    </div>
                </template>
                <template data-type="loading">loading</template>
                <template data-type="error">{{ this }}</template>
            </mgt-get>
        )
    )
}

export default GetMessage;

I tried to use the mgt-toolkit examples. The other components work fine.
A: As per the docs, the React toolkit does support the data-for HTML attribute.
In the code you provided above, the error says it is unable to find the variable email. You are authenticating through Auth0 and using the && operation, so there might be an issue with authentication; please confirm, or try logging in with the React toolkit itself. You can also check in the React Toolkit playground, where this works as expected.
Hope this helps.
If you are still having the issue, kindly share a screenshot of the output as well, to understand where it fails.
Thanks
A: I am using the Auth0 identity provider with a Microsoft connection. It works fine with for example and . The response of returns all the emails correctly, but React doesn't support the data-for HTML attribute. The error says: "unexpected token email". See the attachments.
As you can see, the tag returns and displays the value.
This is the response of . The response is correct, but I can't loop and display, for example, "bodyPreview."
This shows my implementation.
Is there a different approach to display the response? So how can I display the value without the data-for attribute?
Thanks for your response and support!
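For comparison, the usual replacement for data-for in @microsoft/mgt-react is a React template component: a child with a template prop receives the Graph response in props.dataContext, and you map over the items yourself. A sketch based on the mgt-react template docs (treat the exact prop values as assumptions to verify against your toolkit version):
import { Get, MgtTemplateProps } from '@microsoft/mgt-react';

const Emails = (props: MgtTemplateProps) => {
  const messages = props.dataContext.value || [];
  return (
    <div>
      {messages.map((email) => (
        <h3 key={email.id}>{email.subject}</h3>
      ))}
    </div>
  );
};

// usage: the default template receives the whole response object
<Get resource="/me/messages" maxPages={2}>
  <Emails template="default" />
</Get>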
Mgt-toolkit get with HTML attribute data-for doesn't work with React?
I need to display the values of the response, but React doesn't support the data-for HTML attribute. When I use the template from https://learn.microsoft.com/en-us/graph/toolkit/components/get I get the error: unexpected variable email. See my implementation below. I can't use {{email.subject}} in this case.
import React, { useState, useEffect, useDebugValue } from 'react';
import { Get } from '@microsoft/mgt-react';
import { useAuth0 } from '@auth0/auth0-react';

function GetMessage() {
    const { isAuthenticated } = useAuth0();
    const value = <Get></Get>;
    console.log(value);

    return (
        isAuthenticated && (
            <mgt-get resource="/me/messages" version="beta" scopes="mail.read" max-pages="2">
                <template>
                    <div class="email" data-for="email in value">
                        <h3>{{ email.subject }}</h3>
                        <h4>
                            <mgt-person person-query="{{email.sender.emailAddress.address}}" view="oneline" person-card="hover"></mgt-person>
                        </h4>
                        <div data-if="email.bodyPreview" class="preview" innerHtml>{{email.bodyPreview}}</div>
                        <div data-else class="preview">email body is empty</div>
                    </div>
                </template>
                <template data-type="loading">loading</template>
                <template data-type="error">{{ this }}</template>
            </mgt-get>
        )
    )
}

export default GetMessage;

I tried to use the mgt-toolkit examples. The other components work fine.
[ "As per the Doc,React Toolkit do support the data -for HTML-attribute.\nIn the code you provided above, the error says unable to find the variable mail ,I am wondering you are using authenticating through AuthO,you are using && operation ,might be there is any issue with authentication please confirm once or you can also try to login with React Toolkit , you can also check in the React Toolkit playground, this is working as expected.\n\nHope this helps,\nIf you still having the issue , kindly share the screenshot of output as well ,to understand where it fails\nThanks\n", "I am using Auth0 identity provide with Microsoft connection. It works fine with for example and . The response of returns all the emails correctly, but I React doesnt support data-for HTML attribute. The error says: \"unexpected token email\". See the attachments.\nas you can see the tag returns and displays the value\nThis is the response of . The response is correct, but I cant loop and display for example \"bodyPreview.\"\nThis shows my implementation\nIs there a different approach to display the response? So how can I display the value without data-for attribute?\nThanks for your response and support!\n" ]
[ 0, 0 ]
[]
[]
[ "azure_active_directory", "graph", "microsoft_graph_api", "reactjs" ]
stackoverflow_0074594626_azure_active_directory_graph_microsoft_graph_api_reactjs.txt
Q: Split a character vector into individual characters? (opposite of paste or stringr::str_c) An incredibly basic question in R yet the solution isn't clear. How to split a vector of character into its individual characters, i.e. the opposite of paste(..., sep='') or stringr::str_c() ? Anything less clunky than this: sapply(1:26, function(i) { substr("ABCDEFGHIJKLMNOPQRSTUVWXYZ",i,i) } ) "A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S" "T" "U" "V" "W" "X" "Y" "Z" Can it be done otherwise, e.g. with strsplit(), stringr::* or anything else? A: Yes, strsplit will do it. strsplit returns a list, so you can either use unlist to coerce the string to a single character vector, or use the list index [[1]] to access first element. x <- paste(LETTERS, collapse = "") unlist(strsplit(x, split = "")) # [1] "A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S" #[20] "T" "U" "V" "W" "X" "Y" "Z" OR (noting that it is not actually necessary to name the split argument) strsplit(x, "")[[1]] # [1] "A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S" #[20] "T" "U" "V" "W" "X" "Y" "Z" You can also split on NULL or character(0) for the same result. A: str_extract_all() from stringr offers a nice way to perform this operation: str_extract_all("ABCDEFGHIJKLMNOPQRSTUVWXYZ", boundary("character")) [[1]] [1] "A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S" "T" "U" [22] "V" "W" "X" "Y" "Z" A: Since stringr 1.5.0, you can use str_split_1, a version of str_split for single strings: library(stringr) x <- paste(LETTERS, collapse = "") str_split_1(x, "") # [1] "A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S" #[20] "T" "U" "V" "W" "X" "Y" "Z" A: This is rendered stepwise for clarity; in practice, a function would be created. To find the number of times any character is repeated in sequence the_string <- "BaaaaaaH" # split string into characters the_runs <- strsplit(the_string, "")[[1]] # find runs result <- rle(the_runs) # find values that are repeated result$values[which(result$lengths > 1)] #> [1] "a" # retest with more runs the_string <- "BaabbccH" # split string into characters the_runs <- strsplit(the_string, "")[[1]] # find runs result <- rle(the_runs) # find values that are repeated result$values[which(result$lengths > 1)] #> [1] "a" "b" "c"
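strsplit is also vectorised, which matters once you have more than one string: it returns one list element per input string.
x <- c("AB", "CDE")
strsplit(x, "")
# [[1]]
# [1] "A" "B"
#
# [[2]]
# [1] "C" "D" "E"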
Split a character vector into individual characters? (opposite of paste or stringr::str_c)
An incredibly basic question in R yet the solution isn't clear. How to split a vector of character into its individual characters, i.e. the opposite of paste(..., sep='') or stringr::str_c() ? Anything less clunky than this: sapply(1:26, function(i) { substr("ABCDEFGHIJKLMNOPQRSTUVWXYZ",i,i) } ) "A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S" "T" "U" "V" "W" "X" "Y" "Z" Can it be done otherwise, e.g. with strsplit(), stringr::* or anything else?
[ "Yes, strsplit will do it. strsplit returns a list, so you can either use unlist to coerce the string to a single character vector, or use the list index [[1]] to access first element. \nx <- paste(LETTERS, collapse = \"\")\n\nunlist(strsplit(x, split = \"\"))\n# [1] \"A\" \"B\" \"C\" \"D\" \"E\" \"F\" \"G\" \"H\" \"I\" \"J\" \"K\" \"L\" \"M\" \"N\" \"O\" \"P\" \"Q\" \"R\" \"S\"\n#[20] \"T\" \"U\" \"V\" \"W\" \"X\" \"Y\" \"Z\"\n\nOR (noting that it is not actually necessary to name the split argument)\nstrsplit(x, \"\")[[1]]\n# [1] \"A\" \"B\" \"C\" \"D\" \"E\" \"F\" \"G\" \"H\" \"I\" \"J\" \"K\" \"L\" \"M\" \"N\" \"O\" \"P\" \"Q\" \"R\" \"S\"\n#[20] \"T\" \"U\" \"V\" \"W\" \"X\" \"Y\" \"Z\"\n\nYou can also split on NULL or character(0) for the same result.\n", "str_extract_all() from stringr offers a nice way to perform this operation:\nstr_extract_all(\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\", boundary(\"character\"))\n\n[[1]]\n [1] \"A\" \"B\" \"C\" \"D\" \"E\" \"F\" \"G\" \"H\" \"I\" \"J\" \"K\" \"L\" \"M\" \"N\" \"O\" \"P\" \"Q\" \"R\" \"S\" \"T\" \"U\"\n[22] \"V\" \"W\" \"X\" \"Y\" \"Z\"\n\n", "Since stringr 1.5.0, you can use str_split_1, a version of str_split for single strings:\nlibrary(stringr)\nx <- paste(LETTERS, collapse = \"\")\nstr_split_1(x, \"\")\n# [1] \"A\" \"B\" \"C\" \"D\" \"E\" \"F\" \"G\" \"H\" \"I\" \"J\" \"K\" \"L\" \"M\" \"N\" \"O\" \"P\" \"Q\" \"R\" \"S\"\n#[20] \"T\" \"U\" \"V\" \"W\" \"X\" \"Y\" \"Z\"\n\n", "This is rendered stepwise for clarity; in practice, a function would be created.\nTo find the number of times any character is repeated in sequence\nthe_string <- \"BaaaaaaH\"\n# split string into characters\nthe_runs <- strsplit(the_string, \"\")[[1]]\n# find runs\nresult <- rle(the_runs)\n# find values that are repeated\nresult$values[which(result$lengths > 1)]\n#> [1] \"a\"\n# retest with more runs\nthe_string <- \"BaabbccH\"\n# split string into characters\nthe_runs <- strsplit(the_string, \"\")[[1]]\n# find runs\nresult <- rle(the_runs)\n# find values that are repeated\nresult$values[which(result$lengths > 1)]\n#> [1] \"a\" \"b\" \"c\"\n\n" ]
[ 27, 5, 1, 0 ]
[]
[]
[ "paste", "r", "string", "string_split", "stringr" ]
stackoverflow_0023028885_paste_r_string_string_split_stringr.txt
Q: React useState object stays linked when it should not
const { strip, setStrip } = useContext(StripperContext);
const [tree, setTree] = useState({});
const [treeLoading, setTreeLoading] = useState(false);

// this function should change just the foldersObj, but apparently it's changing
// the context {strip.folders} that was used in the useEffect
const getTrees = async (foldersObj) => {
  setTreeLoading(true);
  const newTree = await foldersListToTree({ ...foldersObj });
  setTree({ ...newTree });
  setTreeLoading(false);
};

useEffect(() => {
  // after I assigned the strip.folders context to newFOlders they are linked
  // together and every change follows
  const newFOlders = {};
  for (const key in strip.folders) {
    newFOlders[key] = strip.folders[key];
  }
  getTrees(newFOlders);
}, [strip.folders]);

I can't see how this happens; it's the first time I have encountered this, and I don't know where the link comes from.
A: const newFOlders = Object.assign({}, strip.folders);

With this update, the newFOlders object will be a separate object from the strip.folders object, so changes to newFOlders itself will not affect strip.folders. Note that Object.assign makes a shallow copy, just like the for...in loop in the question: the top-level object is new, but any nested objects are still shared by reference, so mutating a nested folder object will still show up in the context. If strip.folders contains nested objects, use a deep copy instead.
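Because both the for...in loop and Object.assign copy only the top level, nested folder objects stay shared. If the linkage comes from a nested level, a deep copy breaks it; structuredClone is built into modern browsers and Node 17+ (a sketch against the question's variables):
useEffect(() => {
  // deep copy: nested objects are duplicated too, so nothing is shared
  const newFolders = structuredClone(strip.folders);
  getTrees(newFolders);
}, [strip.folders]);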
React useState object stays linked when it should not
const { strip, setStrip } = useContext(StripperContext);
const [tree, setTree] = useState({});
const [treeLoading, setTreeLoading] = useState(false);

// this function should change just the foldersObj, but apparently it's changing
// the context {strip.folders} that was used in the useEffect
const getTrees = async (foldersObj) => {
  setTreeLoading(true);
  const newTree = await foldersListToTree({ ...foldersObj });
  setTree({ ...newTree });
  setTreeLoading(false);
};

useEffect(() => {
  // after I assigned the strip.folders context to newFOlders they are linked
  // together and every change follows
  const newFOlders = {};
  for (const key in strip.folders) {
    newFOlders[key] = strip.folders[key];
  }
  getTrees(newFOlders);
}, [strip.folders]);

I can't see how this happens; it's the first time I have encountered this, and I don't know where the link comes from.
[ "const newFOlders = Object.assign({}, strip.folders);\n\nWith this update, the newFOlders object will be a separate object from the strip.folders object, so any changes to the newFOlders object will not affect the strip.folders object. This should fix the issue with the code and allow the getTrees function to work as intended.\n" ]
[ 0 ]
[]
[]
[ "effect", "hyperlink", "reactjs", "state" ]
stackoverflow_0074680208_effect_hyperlink_reactjs_state.txt
Q: What is the best practice for redirecting users in a React app?
The app flow for not-yet-registered users: landing page -> email wall -> plan selection page -> account creation page -> payment page.
The problem is: when users land on the payment page and hit the back arrow in the browser tab, they land on the email wall page again. (The code allows skipping account creation and displaying the plan selection page, so when they are on the plan selection page and click the back button, they see the email wall again.) I need to not display the email wall page, whose URL is '/'.
Question: What is the best way to display another page under this URL? And what things would potentially fall into the category of "hijacking" the user's back button? Some of my thoughts: redirect / return another component / manipulate the user's history.
A: To prevent users from seeing the email wall page when they hit the back button on the payment page, you can use the history.replace method from the history library. This method allows you to replace the current history entry with a new one, so that when the user hits the back button, they will be redirected to a different page instead of the email wall page.
Here is an example of how you can use the history.replace method to redirect users from the payment page to the plan selection page when they hit the back button:
import React, { useEffect } from "react";
import { useHistory } from "react-router-dom";

const PaymentPage = () => {
  const history = useHistory();

  useEffect(() => {
    // Replace the current history entry with the plan selection page
    // so that when the user hits the back button, they will be
    // redirected to the plan selection page instead of the email wall page
    history.replace("/plans");
  }, [history]);

  return (
    <div>
      {/* Payment page content goes here */}
    </div>
  );
};
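If the project is on React Router v6, useHistory no longer exists; the equivalent is useNavigate with the replace option. A sketch of the same idea triggered by a user action (the "/payment" path is a hypothetical name for this app):
import { useNavigate } from "react-router-dom";

const PlanSelectionPage = () => {
  const navigate = useNavigate();

  const onPlanChosen = () => {
    // replace the current history entry so Back skips this step
    navigate("/payment", { replace: true });
  };

  return <button onClick={onPlanChosen}>Continue</button>;
};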
What is the best practice for redirecting users in a React app?
The app flow for not-yet-registered users: landing page -> email wall -> plan selection page -> account creation page -> payment page.
The problem is: when users land on the payment page and hit the back arrow in the browser tab, they land on the email wall page again. (The code allows skipping account creation and displaying the plan selection page, so when they are on the plan selection page and click the back button, they see the email wall again.) I need to not display the email wall page, whose URL is '/'.
Question: What is the best way to display another page under this URL? And what things would potentially fall into the category of "hijacking" the user's back button? Some of my thoughts: redirect / return another component / manipulate the user's history.
[ "To prevent users from seeing the email wall page when they hit the back button on the payment page, you can use the history.replace method from the history library. This method allows you to replace the current history entry with a new one, so that when the user hits the back button, they will be redirected to a different page instead of the email wall page.\nHere is an example of how you can use the history.replace method to redirect users from the payment page to the plan selection page when they hit the back button:\nimport React from \"react\";\nimport { useHistory } from \"react-router-dom\";\n\nconst PaymentPage = () => {\n const history = useHistory();\n\n useEffect(() => {\n // Replace the current history entry with the plan selection page\n // so that when the user hits the back button, they will be\n // redirected to the plan selection page instead of the email wall page\n history.replace(\"/plans\");\n }, [history]);\n\n return (\n <div>\n {/* Payment page content goes here */}\n </div>\n );\n};\n\n\n" ]
[ 0 ]
[]
[]
[ "browser", "browser_history", "reactjs", "web", "web_deployment" ]
stackoverflow_0074680010_browser_browser_history_reactjs_web_web_deployment.txt
Q: Laravel tinker mode If I run php artisan tinker the CLI will open with Psy Shell v0.8.6 (PHP 7.0.8 — cli) by Justin Hileman >>> Now I enter a word like 'test' and press the return key. What mode did I enter? How can I exit this mode? (ctrl+c exits tinker, but actually I only want to be able to execute commands again as usual which means I type in a command and when I press the return key it will be executed) How can I execute commands in this mode. If I press enter in this mode I will move to the next line, but the command is not executed. A: When I type run PsySH and then type test, I don't enter a different mode. I actually get an error that reads: PHP Warning: Use of undefined constant test - assumed 'test' (this will throw an Error in a future version of PHP) in Psy Shell code on line 1 This might be a version issue. But... I think I know the mode you're talking about. You can enter that mode by typing one single quote ' and pressing Enter. What PsySH is doing is letting you supply input across multiple lines. If, on the next line you just type '; you should see "\n" because it captured the Enter that you pressed. If you want to exit this mode, you can either: Close whatever opening statement you made. You can enter this mode by typing "function { + Enter", "for(;;) { + Enter", the single-quote, and many other things. You need to type the appropriate closing for the statement you started. For a function, }. For a string, ' or ", etc. Press ^D (Control+D). This will put you back at the PsySH prompt. This will also work in many regular system shells, because ^D sends the End-Of-File character. A: Tinker is a command line tool that lets you interact with Laravel from the command line. You can exit tinker mode with either ctrl+c (as you mentioned) or typing exit; and hitting enter. Tinker is based on PsySH, you can think of this mode as a line-by-line interactive PHP parser. So, for example, you can do something like this: $ php artisan tinker Psy Shell v0.7.2 (PHP 5.6.30-7+deb.sury.org~trusty+1 — cli) by Justin Hileman >>> $testString = "test"; => "test" >>> echo $testString; test⏎ => null >>> exit; Exit: Goodbye. A: while you are in tinker just type exit;
Laravel tinker mode
If I run php artisan tinker the CLI will open with Psy Shell v0.8.6 (PHP 7.0.8 — cli) by Justin Hileman >>> Now I enter a word like 'test' and press the return key. What mode did I enter? How can I exit this mode? (ctrl+c exits tinker, but actually I only want to be able to execute commands again as usual which means I type in a command and when I press the return key it will be executed) How can I execute commands in this mode. If I press enter in this mode I will move to the next line, but the command is not executed.
[ "When I type run PsySH and then type test, I don't enter a different mode. I actually get an error that reads:\n\nPHP Warning: Use of undefined constant test - assumed 'test' (this will throw an Error in a future version of PHP) in Psy Shell code on line 1\n\nThis might be a version issue.\nBut...\nI think I know the mode you're talking about. You can enter that mode by typing one single quote ' and pressing Enter.\nWhat PsySH is doing is letting you supply input across multiple lines. If, on the next line you just type '; you should see \"\\n\" because it captured the Enter that you pressed.\nIf you want to exit this mode, you can either:\n\nClose whatever opening statement you made. You can enter this mode by typing \"function { + Enter\", \"for(;;) { + Enter\", the single-quote, and many other things. You need to type the appropriate closing for the statement you started. For a function, }. For a string, ' or \", etc.\nPress ^D (Control+D). This will put you back at the PsySH prompt. This will also work in many regular system shells, because ^D sends the End-Of-File character.\n\n", "\nTinker is a command line tool that lets you interact with Laravel from the command line.\nYou can exit tinker mode with either ctrl+c (as you mentioned) or typing exit; and hitting enter.\nTinker is based on PsySH, you can think of this mode as a line-by-line interactive PHP parser.\n\nSo, for example, you can do something like this:\n$ php artisan tinker\nPsy Shell v0.7.2 (PHP 5.6.30-7+deb.sury.org~trusty+1 — cli) by Justin Hileman\n>>> $testString = \"test\";\n=> \"test\"\n>>> echo $testString;\ntest⏎\n=> null\n>>> exit;\nExit: Goodbye.\n\n", "while you are in tinker just type\n\nexit;\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "laravel", "tinker" ]
stackoverflow_0044785582_laravel_tinker.txt
Q: C# Using Socket.Receive(bytes) takes about 3 minutes
I am using .NET 6.0 and recently using int numBytes = client.Receive(bytes); has been taking about 3 minutes. The Socket variable is called client. This issue was not occurring 3 days ago.
The full code that I am using is:
string data = "";
byte[] bytes = new byte[2048];

client = httpServer.Accept();

// Read inbound connection data
while (true)
{
    int numBytes = client.Receive(bytes); // Taking about 3 minutes here
    data += Encoding.ASCII.GetString(bytes, 0, numBytes);
    if (data.IndexOf("\r\n") > -1 || data == "")
    {
        break;
    }
}

The timing is also not always consistent. Sometimes (rarely) it can be instant and other times it can take 3 minutes - 1 hour.
I have attempted the following:

Restarting my computer
Changing networks
Turning off the firewall
Attempting on a different computer
Attempting on a different computer with the firewall off
Using a wired and wireless connection

However none of these worked and instead resulted in the same issue.
What I expect to happen, and what used to happen, is that it would continue through the code normally instead of being hung up on one line of code for a long time.
A: You could use the client.Poll() method to check if data is available to be read from the socket before calling client.Receive().
If client.Poll() returns false, it means that there is no data available to be read and you can handle that situation accordingly.
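To make the suggestion concrete, here is a sketch using the question's client socket: Poll waits with an explicit timeout before you commit to a blocking Receive, and ReceiveTimeout makes Receive itself give up.
// Option 1: wait up to 5 seconds for readable data before calling Receive
if (client.Poll(5_000_000, SelectMode.SelectRead)) // timeout in microseconds
{
    int numBytes = client.Receive(bytes);
    // numBytes == 0 here means the peer closed the connection
}

// Option 2: make a blocking Receive throw a SocketException after 5 seconds
client.ReceiveTimeout = 5000; // milliseconds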
C# Using Socket.Receive(bytes) takes about 3 minutes
I am using .NET 6.0 and recently using int numBytes = client.Receive(bytes); has been taking about 3 minutes. The Socket variable is called client. This issue was not occurring 3 days ago.
The full code that I am using is:
string data = "";
byte[] bytes = new byte[2048];

client = httpServer.Accept();

// Read inbound connection data
while (true)
{
    int numBytes = client.Receive(bytes); // Taking about 3 minutes here
    data += Encoding.ASCII.GetString(bytes, 0, numBytes);
    if (data.IndexOf("\r\n") > -1 || data == "")
    {
        break;
    }
}

The timing is also not always consistent. Sometimes (rarely) it can be instant and other times it can take 3 minutes - 1 hour.
I have attempted the following:

Restarting my computer
Changing networks
Turning off the firewall
Attempting on a different computer
Attempting on a different computer with the firewall off
Using a wired and wireless connection

However none of these worked and instead resulted in the same issue.
What I expect to happen, and what used to happen, is that it would continue through the code normally instead of being hung up on one line of code for a long time.
[ "You could use the client.Poll() method to check if data is available to be read from the socket before calling client.Receive().\nIf client.Poll() returns false, it means that there is no data available to be read and you can handle that situation accordingly.\n" ]
[ 1 ]
[]
[]
[ ".net", ".net_6.0", "c#", "sockets" ]
stackoverflow_0074680258_.net_.net_6.0_c#_sockets.txt
Q: While loop is not terminated at EOF
char buf_in()
{
    if(read_pos==16)
    {
        read_pos=0;
        read(fd_in,buffer_in,16);
    }
    return buffer_in[read_pos++];
}

void buf_out(char data)
{
    if(write_pos==16)
    {
        write_pos=0;
        write(fd_out,buffer_out,16);
    }
    buffer_out[write_pos++]=data;
}

int main()
{
    fd_in=open("input.txt",O_RDONLY);
    fd_out=open("result.txt",O_WRONLY|O_CREAT,0666);

    char ch;
    while((ch=buf_in())!= EOF)
    {
        buf_out(ch);
    }

    close(fd_in);
    close(fd_out);
    return 0;
}

This is my code. When I look at result.txt, it contains the content of the input file, then a bunch of nulls, then the content of the input file again; it never ends. What is the problem with my code? Thanks.
A: If you want while((ch=buf_in())!= EOF) to work, you must program the function buf_in to return EOF when a read fails due to reaching end-of-file. However, you did not do this. You are instead ignoring the return value of read, so you will not notice when end-of-file is reached.
Also, a char is not guaranteed to be able to represent the value EOF. Use int instead, if you want to be sure that the data type can represent the value EOF.
You appear to be under the impression that EOF is the character code of a character that appears at the end of the file. However, that is not the case. The value EOF is not a character code. It is a special value that is returned by functions such as getchar, to indicate that no character was retrieved. The function read will not necessarily return the value EOF to indicate end-of-file or an error. It uses a different mechanism to report this. It will return the number of bytes successfully read, or -1 on error, or 0 at end-of-file. The value EOF may be equal to the value -1 on some platforms, but you cannot rely on this.
In order to solve the problems mentioned above, you could rewrite the function buf_in like this:
int buf_in( void )
{
    if ( read_pos == 16 )
    {
        read_pos = 0;
        if ( read( fd_in, buffer_in, 16 ) <= 0 ) /* 0 = end-of-file, -1 = error */
            return EOF;
    }
    return (unsigned char)buffer_in[read_pos++];
}

The cast to unsigned char is necessary so that it is guaranteed that all character codes are distinguishable from the value EOF.
However, this would not properly handle the case of read only reading between 1 and 15 bytes, instead of 16 bytes.
Therefore, it would probably be best to add an additional variable ssize_t bytes_read and to initialize this variable to 0. Afterwards, you can use the following code:
int buf_in( void )
{
    if ( read_pos == bytes_read )
    {
        read_pos = 0;
        bytes_read = read( fd_in, buffer_in, 16 );
        if ( bytes_read <= 0 ) /* 0 = end-of-file, -1 = error */
            return EOF;
    }
    return (unsigned char)buffer_in[read_pos++];
}

Note that you must also change the variable declaration
char ch;
in the function main to
int ch;
because we have changed the return value of the function buf_in to int.
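One more gap worth noting in the original program: buf_out only writes when its buffer is full, so up to 15 trailing bytes are silently lost at exit. A sketch of a flush helper (variable names from the question) to call once before close(fd_out):
void buf_flush(void)
{
    if (write_pos > 0)
    {
        write(fd_out, buffer_out, write_pos); /* write the partial buffer */
        write_pos = 0;
    }
}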
While loop is not terminated at EOF
char buf_in()
{
    if(read_pos==16)
    {
        read_pos=0;
        read(fd_in,buffer_in,16);
    }
    return buffer_in[read_pos++];
}

void buf_out(char data)
{
    if(write_pos==16)
    {
        write_pos=0;
        write(fd_out,buffer_out,16);
    }
    buffer_out[write_pos++]=data;
}

int main()
{
    fd_in=open("input.txt",O_RDONLY);
    fd_out=open("result.txt",O_WRONLY|O_CREAT,0666);

    char ch;
    while((ch=buf_in())!= EOF)
    {
        buf_out(ch);
    }

    close(fd_in);
    close(fd_out);
    return 0;
}

This is my code. When I look at result.txt, it contains the content of the input file, then a bunch of nulls, then the content of the input file again; it never ends. What is the problem with my code? Thanks.
[ "If you want while((ch=buf_in())!= EOF) to work, you must program the function buf_in to return EOF when a read fails due to reaching end-of-file. However, you did not do this. You are instead ignoring the return value of read, so you will not notice when end-of-file is reached.\nAlso, a char is not guaranteed to be able to represent the value EOF. Use int instead, if you want to be sure that the data type can represent the value EOF.\nYou appear to be under the impression that EOF is the character code of a character that appears at the end of the file. However, that is not the case. The value EOF is not a character code. It is a special value that is returned by functions such as getchar, to indicate that no character was retrieved. The function read will not necessarily return the value EOF to indicate end-or-file or an error. It uses a different mechanism to report this. It will return the number of bytes successfully read, or -1 on error. The value EOF may be equal to the value -1 on some platforms, but you cannot rely on this.\nIn order to solve the problems mentioned above, you could rewrite the function buf_in like this:\nint buf_in( void )\n{\n if ( read_pos == 16 )\n {\n read_pos = 0;\n if ( read( fd_in, buffer_in, 16 ) == -1 )\n return EOF;\n }\n return (unsigned char)buffer_in[read_pos++];\n}\n\nThe cast to unsigned char is necessary so that it is guaranteed that all character codes are distinguishable from the value EOF.\nHowever, this would not properly handle the case of read only reading between 1 and 15 bytes, instead of 16 bytes.\nTherefore, it would probably be best to add an additional variable ssize_t bytes_read and to initialize this variable to 0. Afterwards, you can use the following code:\nint buf_in( void )\n{\n if ( read_pos == bytes_read )\n {\n read_pos = 0;\n bytes_read = read( fd_in, buffer_in, 16 );\n if ( bytes_read == -1 )\n return EOF;\n }\n return (unsigned char)buffer_in[read_pos++];\n}\n\nNote that you must also change the variable declaration\nchar ch;\n\nin the function main to\nint ch;\n\nbecause we have changed the return value of the function buf_in to int.\n" ]
[ 1 ]
[]
[]
[ "buffer", "c", "eof", "file", "linux" ]
stackoverflow_0074680135_buffer_c_eof_file_linux.txt
Q: Problem with pymunk while running in Visual Studio
I tried using the code so I can run a simulation of an object hitting the ground, but it just says
draw_polygon ([Vec2d(55.0, -4779353554820.233), Vec2d(55.0, -4779353554810.233), Vec2d(45.0, -4779353554810.233), Vec2d(45.0, -4779353554820.233)], 0.0, SpaceDebugColor(r=44.0, g=62.0, b=80.0, a=255.0), SpaceDebugColor(r=52.0, g=152.0, b=219.0, a=255.0))

import pymunk

# These commands set the scene for our simulation
space=pymunk.Space()
space.gravity = 0,-9.80665

body = pymunk.Body()
body.position= 50,100

# These commands create a box that attaches to the body and set it up
poly = pymunk.Poly.create_box(body)
poly.mass = 10
space.add(body, poly)

# Creates and prints the scene
print_options = pymunk.SpaceDebugDrawOptions()

speed=int(input("Speed:"))
while True:
    space.step(speed)
    space.debug_draw(print_options)

I'm trying to run this in Visual Studio, but it just keeps printing:
draw_polygon ([Vec2d(55.0, -4779353554820.233), Vec2d(55.0, -4779353554810.233), Vec2d(45.0, -4779353554810.233), Vec2d(45.0, -4779353554820.233)], 0.0, SpaceDebugColor(r=44.0, g=62.0, b=80.0, a=255.0), SpaceDebugColor(r=52.0, g=152.0, b=219.0, a=255.0))
Is there any package for a graphical environment?
A: Yes, by default the debug drawing will just print out the result (it's made this way so that you can use it without installing anything else and even run it from the terminal).
However, it also comes with a module for the two libraries pygame and pyglet that are documented here:
http://www.pymunk.org/en/latest/pymunk.pygame_util.html
and here
http://www.pymunk.org/en/latest/pymunk.pyglet_util.html
Both work in more or less the same way, just use their implementation of the SpaceDebugDrawOptions class instead of the default one.
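A minimal sketch of the pygame variant (window size and frame rate are arbitrary choices; space is the question's space object, and note that step should be called with a small fixed timestep rather than a user-supplied speed):
import pygame
import pymunk
import pymunk.pygame_util

pygame.init()
screen = pygame.display.set_mode((600, 600))
clock = pygame.time.Clock()
draw_options = pymunk.pygame_util.DrawOptions(screen)

while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            raise SystemExit
    screen.fill((255, 255, 255))
    space.step(1 / 60)              # fixed, small timestep
    space.debug_draw(draw_options)  # draws shapes onto the pygame surface
    pygame.display.flip()
    clock.tick(60)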
Problem with pymunk while running in Visual Studio
I tried using the code so I can run a simulation of an object hitting the ground, but it just says
draw_polygon ([Vec2d(55.0, -4779353554820.233), Vec2d(55.0, -4779353554810.233), Vec2d(45.0, -4779353554810.233), Vec2d(45.0, -4779353554820.233)], 0.0, SpaceDebugColor(r=44.0, g=62.0, b=80.0, a=255.0), SpaceDebugColor(r=52.0, g=152.0, b=219.0, a=255.0))

import pymunk

# These commands set the scene for our simulation
space=pymunk.Space()
space.gravity = 0,-9.80665

body = pymunk.Body()
body.position= 50,100

# These commands create a box that attaches to the body and set it up
poly = pymunk.Poly.create_box(body)
poly.mass = 10
space.add(body, poly)

# Creates and prints the scene
print_options = pymunk.SpaceDebugDrawOptions()

speed=int(input("Speed:"))
while True:
    space.step(speed)
    space.debug_draw(print_options)

I'm trying to run this in Visual Studio, but it just keeps printing:
draw_polygon ([Vec2d(55.0, -4779353554820.233), Vec2d(55.0, -4779353554810.233), Vec2d(45.0, -4779353554810.233), Vec2d(45.0, -4779353554820.233)], 0.0, SpaceDebugColor(r=44.0, g=62.0, b=80.0, a=255.0), SpaceDebugColor(r=52.0, g=152.0, b=219.0, a=255.0))
Is there any package for a graphical environment?
[ "Yes, by default the debug drawing will just print out the result (its made in this way so that you can use it without installing anything else and even run it from the terminal).\nHowever, it also comes with a module for the two libraries pygame and pyglet that are documented here:\nhttp://www.pymunk.org/en/latest/pymunk.pygame_util.html\nand here\nhttp://www.pymunk.org/en/latest/pymunk.pyglet_util.html\nBoth work in more or less the same way, just use their implementation of the SpaceDebugDrawOptions class instead of the default one.\n" ]
[ 0 ]
[]
[]
[ "pymunk", "python" ]
stackoverflow_0074670356_pymunk_python.txt
Q: Segmentation fault (core dumped) in gnu assembly I'm learning GNU assembly now, trying to read a string, but it is wrong. What is the problem?
.text
.globl _start

MAX_CHAR=30

_start:
## Start message ##
movl $4, %eax
movl $1, %ebx
movl $msg, %ecx
movl $len, %edx
int $0x80

## READ ##
movl $3, %eax       #sys_read (number 3)
movl $0, %ebx       #stdin (number 0)
movl %esp, %ecx     #starting point
movl $MAX_CHAR, %edx #max input
int $0x80           #call

## Need the cycle to count input length ##
movl $1, %ecx       #counter

end_input:
xor %ebx, %ebx
mov (%esp), %ebx
add $1, %esp        #get next char to compare
add $1, %ecx        #counter+=1
cmp $0xa, %ebx      #compare with "\n"
jne end_input       #if not, continue

## WRITE ##
sub %ecx, %esp      #start from the first input char
movl $4, %eax       #sys_write (number 4)
movl $1, %ebx       #stdout (number 1)
movl %ecx, %edx     #start pointer
movl %esp, %ecx     #length
int $0x80           #call

## EXIT ##
movl $1, %eax
int $0x80

.data
msg: .ascii "Insert an input:\n"
len =.-msg

A: Bugs that I see:

Stack management. You can't assume anything about the data already on the stack on program entry, nor how much space is available. And you mustn't write below the current address in %esp; for instance, signal handlers can overwrite it unexpectedly at any time. So you need to subtract from %esp to allocate space for your buffer, then add back when done.

Moreover, %esp should remain aligned to 4 bytes at all times. This is not strictly an architectural requirement, but breaking this rule will cause inefficient execution and a lot of confusion. Thus, to create space for a 30-byte buffer, round up and subtract 32 from %esp.
When you want to call functions written in C, there are additional alignment requirements, see gcc x86-32 stack alignment and calling printf.

For both of the above reasons, don't use %esp as a pointer variable in your loop: leave it alone and choose some other register.

Operand size. x86-32 instructions can generally operate on either 8, 16 or 32 bits. The l suffix and/or use of a 32-bit register (eax, ebx, and so on) signals a 32-bit instruction. So mov (%esp), %ebx loads 4 bytes from memory, and cmp $0xa, %ebx compares them to the 32-bit value 0x0000000a. Thus the comparison will be wrong unless the next three bytes in memory just happened to all be zeros. To get 8-bit operation, use 8-bit registers (al, bl, ah, bh, etc), but be aware that they overlap the corresponding 16-bit and 32-bit registers; so don't try to use %ebx and %bl for different things at the same time. Try movb (%reg), %bl (where, as mentioned above, %reg shouldn't be %esp but rather whatever register you use instead) and cmpb $0xa, %bl. The b suffix is optional as the size is inferred from the 8-bit bl register, but as you're using suffixes in most of the rest of your code, you might as well be consistent.

You are writing 32-bit code here, so be sure to build your program in 32-bit mode. For instance, if using gcc, you need the -m32 flag. In the long run, you might prefer to learn 64-bit x86 assembly instead; 32 bit x86 code is pretty much obsolete.

Actually, counting the length of the input by searching for newline (0xa) isn't really appropriate in the first place. If the input doesn't contain a newline at all, which is possible if the line was more than 30 bytes long, then your loop will run off the end of the buffer. To find out how many characters were read, you should instead use the return value from read, which is left in %eax after the read system call returns. (If it is zero, end-of-file was reached; if it's negative, there was an error.)
Moreover, if you're reading from the terminal in its default mode, you will normally just get at most one line at a time anyway, so if there is a \n it would correspond with the end of the input returned by read. (But this doesn't apply if standard input is redirected from a file.)
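Putting points 1, 2 and 5 together, a minimal corrected sketch that reserves a stack buffer, reads once, and echoes back exactly the number of bytes read (same i386 Linux syscall numbers as in the question):
subl $32, %esp          # reserve an aligned 32-byte buffer
movl $3, %eax           # sys_read
movl $0, %ebx           # stdin
movl %esp, %ecx         # buffer address
movl $30, %edx          # read at most 30 bytes
int $0x80
cmpl $0, %eax
jle skip                # 0 = end-of-file, negative = error
movl %eax, %edx         # byte count returned by read
movl $4, %eax           # sys_write
movl $1, %ebx           # stdout
movl %esp, %ecx         # same buffer
int $0x80
skip:
addl $32, %esp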
Segmentation fault (core dumped) in gnu assembly
I'm learning GNU assembly now, trying to read a string, but it is wrong. What is the problem?
.text
.globl _start

MAX_CHAR=30

_start:
## Start message ##
movl $4, %eax
movl $1, %ebx
movl $msg, %ecx
movl $len, %edx
int $0x80

## READ ##
movl $3, %eax       #sys_read (number 3)
movl $0, %ebx       #stdin (number 0)
movl %esp, %ecx     #starting point
movl $MAX_CHAR, %edx #max input
int $0x80           #call

## Need the cycle to count input length ##
movl $1, %ecx       #counter

end_input:
xor %ebx, %ebx
mov (%esp), %ebx
add $1, %esp        #get next char to compare
add $1, %ecx        #counter+=1
cmp $0xa, %ebx      #compare with "\n"
jne end_input       #if not, continue

## WRITE ##
sub %ecx, %esp      #start from the first input char
movl $4, %eax       #sys_write (number 4)
movl $1, %ebx       #stdout (number 1)
movl %ecx, %edx     #start pointer
movl %esp, %ecx     #length
int $0x80           #call

## EXIT ##
movl $1, %eax
int $0x80

.data
msg: .ascii "Insert an input:\n"
len =.-msg
[ "Bugs that I see:\n\nStack management. You can't assume anything about the data already on the stack on program entry, nor how much space is available. And you mustn't write below the current address in %esp; for instance, signal handlers can overwrite it unexpectedly at any time. So you need to subtract from %esp to allocate space for your buffer, then add back when done.\n\nMoreover, %esp should remain aligned to 4 bytes at all times. This is not strictly an architectural requirement, but breaking this rule will cause inefficient execution and a lot of confusion. Thus, to create space for a 30-byte buffer, round up and subtract 32 from %esp.\nWhen you want to call functions written in C, there are additional alignment requirements, see gcc x86-32 stack alignment and calling printf.\n\nFor both of the above reasons, don't use %esp as a pointer variable in your loop: leave it alone and choose some other register.\n\nOperand size. x86-32 instructions can generally operate on either 8, 16 or 32 bits. The l suffix and/or use of a 32-bit register (eax, ebx, and so on) signals a 32-bit instruction. So mov (%esp), %ebx loads 4 bytes from memory, and cmp $0xa, %ebx compares them to the 32-bit value 0x0000000a. Thus the comparison will be wrong unless the next three bytes in memory just happened to all be zeros. To get 8-bit operation, use 8-bit registers (al, bl, ah, bh, etc), but be aware that they overlap the corresponding 16-bit and 32-bit registers; so don't try to use %ebx and %bl for different things at the same time. Try movb (%reg), %bl (where as mentioned above, %reg shouldn't be %esp but rather whatever register you use instead) and cmpb $0xa, %bl. The b suffix is optional as the size is inferred from the 8-bit bl register, but as you're using suffixes in most of the rest of your cod, might as well be consistent.)\n\nYou are writing 32-bit code here, so be sure to build your program in 32-bit mode. For instance, if using gcc, you need the -m32 flag. In the long run, you might prefer to learn 64-bit x86 assembly instead; 32 bit x86 code is pretty much obsolete.\n\nActually, counting the length of the input by searching for newline (0xa) isn't really appropriate in the first place. If the input doesn't contain a newline at all, which is possible if the line was more than 30 bytes long, then your loop will run off the end of the buffer. To find out how many characters were read, you should instead use the return value from read, which is left in %eax after the read system call returns. (If it is zero, end-of-file was reached; if it's negative, there was an error.)\nMoreover, if you're reading from the terminal in its default mode, you will normally just get at most one line at a time anyway, so if there is a \\n it would correspond with the end of the input returned by read. (But this doesn't apply if standard input is redirected from a file.)\n\n\n" ]
[ 2 ]
[]
[]
[ "assembly", "gnu_assembler", "segmentation_fault", "x86" ]
stackoverflow_0074679397_assembly_gnu_assembler_segmentation_fault_x86.txt
Q: How to write a 7zip batch file to extract and rename the first file of each archive in a folder that contains multiple archives? I would like to know how to write a 7zip batch file to extract the first file of an archive, rename that file to the archive name, and do so for all archive files in a folder. For example, here is the folder/file structure:
Folder
    Archive_1.zip
        File_1.jpg
        File_2.jpg
    Archive_2.zip
        File_1.jpg
        File_2.jpg

I would like to extract the first file of each archive, rename that file to the name of the archive, and have the output in the same folder:
Archive_1.jpg
Archive_2.jpg

I have a batch file to zip all directories into separate archives, but I'm having trouble converting it into the task I want because I don't know anything about coding. Thank you.
for /d %%X in (*) do "c:\Program Files\7-Zip\7z.exe" a "%%X.cbz" "%%X\"
A: Using CMD/.BAT you can extract multiple archives at once, but I don't know a way to do it only one file at a time.
Here's how to do it in cmd (where <path to 7z.exe> is the full path to your 7-Zip executable):
"<path to 7z.exe>" e *.zip
How to do it in a .bat file:
@echo off
cls
"<path to 7z.exe>" e *.zip
Remember, the batch file needs to be in the folder that contains the .zip files.
Remember, open the command prompt in the directory where the .zip files are stored.
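A sketch of one way to script the whole task: extract each archive into a scratch folder, then copy the first file (taking "first" as first by name, since the original order inside the zip is not easily scriptable) out under the archive's name. The 7-Zip path matches the question; the _tmp folder name is an arbitrary choice:
@echo off
setlocal enabledelayedexpansion
for %%X in (*.zip) do (
    rd /s /q "_tmp" 2>nul
    "c:\Program Files\7-Zip\7z.exe" e "%%X" -o_tmp -y >nul
    set "done="
    for /f "delims=" %%F in ('dir /b /o:n "_tmp"') do (
        if not defined done (
            copy "_tmp\%%F" "%%~nX%%~xF" >nul
            set "done=1"
        )
    )
)
rd /s /q "_tmp" 2>nul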
How to write a 7zip batch file to extract and rename the first file of each archive in a folder that contains multiple archives?
I would like to know how to write a 7zip batch file to extract the first file of an archive, rename that file to the archive name, and do so for all archive files in a folder. For example, here is the folder/file structure:
Folder
    Archive_1.zip
        File_1.jpg
        File_2.jpg
    Archive_2.zip
        File_1.jpg
        File_2.jpg

I would like to extract the first file of each archive, rename that file to the name of the archive, and have the output in the same folder:
Archive_1.jpg
Archive_2.jpg

I have a batch file to zip all directories into separate archives, but I'm having trouble converting it into the task I want because I don't know anything about coding. Thank you.
for /d %%X in (*) do "c:\Program Files\7-Zip\7z.exe" a "%%X.cbz" "%%X\"
[ "using CMD/.BAT you can extract multiple files, but i don't know way to do that only one-by-one.\nHere's how to do that in cmd:\nFolder>\"<Directory+FileDirectory>\" e *.zip\nHow to do it in .bat file:\n@echo off cls Folder>\"<Directory+FileDirectory>\" e *.zip\nRemeber, the batch file needs to be in folder that contains .zip folder.\nRemember, open command prompt in directory where .zip folder are stored.\n" ]
[ 0 ]
[]
[]
[ "7zip", "archive", "batch_file" ]
stackoverflow_0074679568_7zip_archive_batch_file.txt
Q: Printing Ansible Console output to a file I have an Ansible playbook which prints the CPU utilization of a target machine. When the CPU utilization is less than 90%, I get an OK message; if it is more than 90%, I should get a not-OK message on screen and also generate a log file called monitor.log on the Ansible host machine when the CPU utilization is not okay. I am able to generate the output on the console, but I am not able to send this output to a log file.
The Ansible playbook that I have created is:
#CPU calculation
- name: Setup Nginx server on myserver list
  hosts: myservers
  become: True

  tasks:
    - name: 'copy Get-Memory-Utilization.sh script to {{ inventory_hostname }}'
      copy:
        src: /home/ec2-user/Memory-Utilization.sh
        dest: /tmp
        mode: '0775'

    - name: 'Preparing Memory utilization using script results'
      shell: |
        sh /tmp/Memory-Utilization.sh
      register: memsec

    - name: 'Preparing Memory utilization for 1st sec'
      shell: |
        sh /tmp/Memory-Utilization.sh
      register: mem1sec

    - name: 'Preparing Memory utilization for 2nd sec'
      shell: |
        sh /tmp/Memory-Utilization.sh
      register: mem2sec

    - name: 'Preparing Memory utilization for 3rd sec'
      shell: |
        sh /tmp/Memory-Utilization.sh
      register: mem3sec

    - name: 'Prepare Memory Used percentage if its abnormal'
      shell: |
        sh /tmp/Memory-Utilization.sh
      register: memhigusage
      when: memsec.stdout|int >= 90 or mem1sec.stdout|int >= 90 or mem2sec.stdout|int >= 90 or mem3sec.stdout|int >= 90

    - name: 'Print message if MEMORY utilization is normal'
      debug:
        msg:
          - -------------------------------------------------------
          - Memory Utilization = ( ( Total - Free ) / Total * 100 ) = {{ memsec.stdout }}%
          - -------------------------------------------------------
      when: memsec.stdout|int < 90 and mem1sec.stdout|int < 90 and mem2sec.stdout|int < 90 and mem3sec.stdout|int < 90

    - name: 'Print message if MEMORY utilization is abnormal'
      debug:
        msg:
          - -------------------------------------------------------
          - Memory Utilization = ( ( Total - Free ) / Total * 100 ) = {{ memhigusage.stdout }}%
          - -------------------------------------------------------
      when: memsec.stdout|int >= 90 or mem1sec.stdout|int >= 90 or mem2sec.stdout|int >= 90 or mem3sec.stdout|int >= 90

output:
TASK [Print message if MEMORY utilization is normal] *************************************************************************************************************************************************************
ok: [44.203.153.54] => {
    "msg": [
        "-------------------------------------------------------",
        "Memory Utilization = ( ( Total - Free ) / Total * 100 ) = 13.87%",
        "-------------------------------------------------------"
    ]
}

TASK [Print message if MEMORY utilization is abnormal] ***********************************************************************************************************************************************************
skipping: [44.203.153.54] => {}

Please help me to send this output to a file.
A: To send the whole run's output to a log file, use Ansible's built-in logging: set the ANSIBLE_LOG_PATH environment variable when you run the playbook, or set log_path in the [defaults] section of ansible.cfg. For example:
ANSIBLE_LOG_PATH=/var/log/monitor.log ansible-playbook -v my_playbook.yml
or, in ansible.cfg:
[defaults]
log_path = /var/log/monitor.log
This will save the output of the playbook run, including your debug messages, in the specified log file. You can also adjust the verbosity level by increasing the value of the "-v" option, for example "-vvv" for more detailed output.
Hope this helps!
A: To write only the abnormal case to a file from within the playbook, create a task that writes the log file on the control node. There is no built-in "log" module, but the copy module can write arbitrary content, and delegate_to: localhost runs the task on the Ansible host machine:
- name: 'Write log message to file'
  copy:
    content: 'Memory utilization is not OK: {{ memhigusage.stdout }}%'
    dest: /path/to/log/file/monitor.log
  delegate_to: localhost

This task will write the message Memory utilization is not OK: {{ memhigusage.stdout }}% to the file monitor.log in the specified path. You can customize the message and the path to the log file as needed.
Next, you need to specify when this task should be executed. In your case, you want to write a log message when the memory utilization is not OK, which means the memsec.stdout, mem1sec.stdout, mem2sec.stdout, or mem3sec.stdout variable is greater than or equal to 90. To do this, you can add the when condition to your task, as shown below:
- name: 'Write log message to file'
  copy:
    content: 'Memory utilization is not OK: {{ memhigusage.stdout }}%'
    dest: /path/to/log/file/monitor.log
  delegate_to: localhost
  when: memsec.stdout|int >= 90 or mem1sec.stdout|int >= 90 or mem2sec.stdout|int >= 90 or mem3sec.stdout|int >= 90

This will ensure that the log message is only written to the file when the memory utilization is not OK. I hope this helps! Thanks
Printing Ansible Console output to a file
I have an Ansible playbook which prints the CPU utilisation of the target machine. When the CPU utilization is less than 90%, I get an OK message, and if it is more than 90%, I should get a not-okay message on screen and also generate a log file, monitor.log, on the Ansible host machine when the CPU utilization is not okay. I am able to generate the output on the console, but I am not able to send this output to a log file. The Ansible playbook that I have created is:
#CPU calculation
- name: Setup Nginx server on myserver list
  hosts: myservers
  become: True
  tasks:
  - name: 'copy Get-Memory-Utilization.sh script to {{ inventory_hostname }}'
    copy:
      src: /home/ec2-user/Memory-Utilization.sh
      dest: /tmp
      mode: '0775'
  - name: 'Preparing Memory utilization using script results'
    shell: |
      sh /tmp/Memory-Utilization.sh
    register: memsec
  - name: 'Preparing Memory utilization for 1st sec'
    shell: |
      sh /tmp/Memory-Utilization.sh
    register: mem1sec
  - name: 'Preparing Memory utilization for 2nd sec'
    shell: |
      sh /tmp/Memory-Utilization.sh
    register: mem2sec
  - name: 'Preparing Memory utilization for 3rd sec'
    shell: |
      sh /tmp/Memory-Utilization.sh
    register: mem3sec
  - name: 'Prepare Memory Used percentage if its abnormal'
    shell: |
      sh /tmp/Memory-Utilization.sh
    register: memhigusage
    when: memsec.stdout|int >= 90 or mem1sec.stdout|int >= 90 or mem2sec.stdout|int >= 90 or mem3sec.stdout|int >= 90
  - name: 'Print message if MEMORY utilization is normal'
    debug:
      msg:
      - -------------------------------------------------------
      - Memory Utilization = ( ( Total - Free ) / Total * 100 ) = {{ memsec.stdout }}%
      - -------------------------------------------------------
    when: memsec.stdout|int < 90 and mem1sec.stdout|int < 90 and mem2sec.stdout|int < 90 and mem3sec.stdout|int < 90
  - name: 'Print message if MEMORY utilization is abnormal'
    debug:
      msg:
      - -------------------------------------------------------
      - Memory Utilization = ( ( Total - Free ) / Total * 100 ) = {{ memhigusage.stdout }}%
      - -------------------------------------------------------
    when: memsec.stdout|int >= 90 or mem1sec.stdout|int >= 90 or mem2sec.stdout|int >= 90 or mem3sec.stdout|int >= 90
output:
TASK [Print message if MEMORY utilization is normal] *************************************************************************************************************************************************************
ok: [44.203.153.54] => {
    "msg": [
        "-------------------------------------------------------",
        "Memory Utilization = ( ( Total - Free ) / Total * 100 ) = 13.87%",
        "-------------------------------------------------------"
    ]
}
TASK [Print message if MEMORY utilization is abnormal] ***********************************************************************************************************************************************************
skipping: [44.203.153.54] => {}
Please help me to send this output to a file.
[ "To send the output to a log file, you can use the \"ansible-playbook\" command with the \"-l\" option to specify the log file and the \"-v\" option to increase the verbosity of the output.\nFor example:\nansible-playbook -l /var/log/monitor.log -v my_playbook.yml\n\nThis will save the output of the playbook in the specified log file. You can also adjust the verbosity level by increasing the value of the \"-v\" option, for example \"-vvv\" for more detailed output.\nYou can also use the \"debug\" module in your playbook to print messages to the log file, using the \"log_path\" option to specify the log file. For example:\n name: 'Print message if MEMORY utilization is normal'\ndebug:\nmsg:\n- -------------------------------------------------------\n- Memory Utilization = ( ( Total - Free ) / Total * 100 ) = {{ memsec.stdout }}%\n- -------------------------------------------------------\nlog_path: /var/log/monitor.log\nwhen: memsec.stdout|int < 90 and mem1sec.stdout|int < 90 and mem2sec.stdout|int < 90 and mem3sec.stdout|int < 90\n\nThis will print the message to the specified log file, in addition to the console output.\nHope this helps!\n", "To send the output to a log file, you can use the ansible.builtin.log module in your playbook. This module allows you to log messages to a specified file.\nFirst, you need to create a task that uses the log module to write a message to a file. You can do this by adding the following task to your playbook:\n- name: 'Write log message to file'\n log:\n msg: 'Memory utilization is not OK: {{ memhigusage.stdout }}%'\n path: /path/to/log/file/monitor.log\n\nThis task will write the message Memory utilization is not OK: {{ memhigusage.stdout }}% to the file monitor.log in the specified path. You can customize the message and the path to the log file as needed.\nNext, you need to specify when this task should be executed. In your case, you want to write a log message when the memory utilization is not OK, which means the memsec.stdout, mem1sec.stdout, mem2sec.stdout, or mem3sec.stdout variable is greater than or equal to 90. To do this, you can add the when condition to your task, as shown below:\n- name: 'Write log message to file'\n log:\n msg: 'Memory utilization is not OK: {{ memhigusage.stdout }}%'\n path: /path/to/log/file/monitor.log\n when: memsec.stdout|int >= 90 or mem1sec.stdout|int >= 90 or mem2sec.stdout|int >= 90 or mem3sec.stdout|int >= 90\n\nThis will ensure that the log message is only written to the file when the memory utilization is not OK.\nI hope this helps!!\nThanks\n" ]
[ 1, 0 ]
[]
[]
[ "ansible", "devops", "linux", "redhat", "server" ]
stackoverflow_0074677085_ansible_devops_linux_redhat_server.txt
Q: Replace string to the left of value and to the right of quote character I have a text file content.txt: Some other text 1 "one" : "Text To Replace1:/Text To Stay.133" Some other text 2 "five" : "Text To Change2:/Another Text To Stay.50" Some other text 5 I came up with the following script: $SRCFile = "K:\content.txt" $DSTFile = "K:\result.txt" $Text2Replace = "YabaDaba.du:/" get-content $SRCFile | ForEach-Object { $_ -replace ".*:\/", $Text2Replace } | Out-File $DSTFile It works almost okay, but it selects the entire line to the left of the ":/" string. I want it only to select the text to the previous quotation mark (excluding it): What regex value should I use to point the above script to select only the text up to previous quotation mark? I've been trying Regex101.com, especially LookBehind, but I couldn't come up with any idea. A: (?<=.+: ").*:\/ might do what you're after. In this case you can also read the file as single multi-line string (hence the use of -Raw in the code) and the use of the (?m) flag (Multiline mode). See https://regex101.com/r/3CXaOI/1 for details. $Text2Replace = "YabaDaba.du:/" (Get-Content $SRCFile -Raw) -replace '(?m)(?<=.+: ").*:\/', $Text2Replace | Out-File $DSTFile
Replace string to the left of value and to the right of quote character
I have a text file content.txt: Some other text 1 "one" : "Text To Replace1:/Text To Stay.133" Some other text 2 "five" : "Text To Change2:/Another Text To Stay.50" Some other text 5 I came up with the following script: $SRCFile = "K:\content.txt" $DSTFile = "K:\result.txt" $Text2Replace = "YabaDaba.du:/" get-content $SRCFile | ForEach-Object { $_ -replace ".*:\/", $Text2Replace } | Out-File $DSTFile It works almost okay, but it selects the entire line to the left of the ":/" string. I want it only to select the text to the previous quotation mark (excluding it): What regex value should I use to point the above script to select only the text up to previous quotation mark? I've been trying Regex101.com, especially LookBehind, but I couldn't come up with any idea.
[ "(?<=.+: \").*:\\/ might do what you're after. In this case you can also read the file as single multi-line string (hence the use of -Raw in the code) and the use of the (?m) flag (Multiline mode).\nSee https://regex101.com/r/3CXaOI/1 for details.\n$Text2Replace = \"YabaDaba.du:/\"\n\n(Get-Content $SRCFile -Raw) -replace '(?m)(?<=.+: \").*:\\/', $Text2Replace |\n Out-File $DSTFile\n\n" ]
[ 2 ]
[]
[]
[ "powershell" ]
stackoverflow_0074680227_powershell.txt
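Editor's note on the record above — to see the accepted regex at work without touching files, a minimal PowerShell sketch run against an in-memory here-string built from the question's sample lines (the trailing comments show the expected output):

$Text2Replace = "YabaDaba.du:/"
$sample = @'
Some other text 1
"one" : "Text To Replace1:/Text To Stay.133"
"five" : "Text To Change2:/Another Text To Stay.50"
'@

# (?m) makes the pattern apply per line of the multi-line string;
# the variable-length lookbehind anchors the match just after the opening quote of the value.
$sample -replace '(?m)(?<=.+: ").*:\/', $Text2Replace
# Some other text 1
# "one" : "YabaDaba.du:/Text To Stay.133"
# "five" : "YabaDaba.du:/Another Text To Stay.50"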
Q: Temporary string comparison with > and < operators in C++ These operators do not perform lexicographical comparisons and seem to provide inconsistent results.
#include <iostream>

int main () {
    std::cout << ("70" < "60") << '\n';
    std::cout << ("60" < "70") << '\n';
    return 0;
}

and
#include <iostream>

int main() {
    std::cout << ("60" < "70") << '\n';
    std::cout << ("70" < "60") << '\n';
    return 0;
}

both print
1
0

The same holds true for std::less<>(). However, std::less<std::string>() provides the correct lexicographical comparison.
What is the reason for this behaviour? Are the comparisons shown above comparing the addresses of char arrays?
A: String literals are character arrays. You are comparing these arrays, which after array-to-pointer decay means you are comparing the addresses of the first byte of these arrays.
Each string literal may refer to a different such array, even if it has the same value. And there is no guarantee about the order of their addresses.
So any possible outcome for your tests is allowed. 0/0 as well as 0/1, 1/0 and 1/1.
You can avoid all the C-style array behavior of string literals by always using std::string or std::string_view literals instead:
#include <iostream>

using namespace std::string_literals;

int main () {
    std::cout << ("70"s < "60"s) << '\n';
    std::cout << ("60"s < "70"s) << '\n';
    return 0;
}

This uses the user-defined string literal operator""s from the standard library to immediately form std::strings from the string literals. sv can be used for std::string_views instead. (string_view will incur less performance cost, in particular no dynamic allocation.)
Temporary string comparison with > and < operators in C++
These operators do not perform lexicographical comparisons and seem to provide inconsistent results. #include <iostream> int main () { std::cout << ("70" < "60") << '\n'; std::cout << ("60" < "70") << '\n'; return 0; } and #include <iostream> int main() { std::cout << ("60" < "70") << '\n'; std::cout << ("70" < "60") << '\n'; return 0; } both print 1 0 The same holds true for std::less<>(). However, std::less<std::string>() provides the correct lexicographical comparison. What is the reason for this behaviour? Are the comparisons shown above comparing the addresses of char arrays?
[ "Character literals are character arrays. You are comparing these arrays, which after array-to-pointer decay means you are comparing the addresses of the first byte of these arrays.\nEach character literal may refer to a different such array, even if it has the same value. And there is no guarantee about the order of their addresses.\nSo any possible outcome for your tests is allowed. 0/0 as well as 0/1, 1/0 and 1/1.\n\nYou can avoid all the C-style array behavior of string literals by always using std::string or std::string_view literals instead:\n#include <iostream>\n\nusing namespace std::string_literals;\n\nint main () {\n std::cout << (\"70\"s < \"60\"s) << '\\n';\n std::cout << (\"60\"s < \"70\"s) << '\\n';\n return 0;\n}\n\nThis uses the user-defined string literal operator\"\"s from the standard library to immediately form std::strings from the string literals. sv can be used for std::string_views instead. (string_view will incur less performance cost, in particular no dynamic allocation.)\n" ]
[ 1 ]
[]
[]
[ "c++", "chararray", "operators", "string", "temporary" ]
stackoverflow_0074680179_c++_chararray_operators_string_temporary.txt
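Editor's note on the record above — a small sketch of the sv variant the answer mentions (C++17 std::string_view literals): the comparison is lexicographical, there is no allocation, and boolalpha makes the booleans readable:

#include <iostream>
#include <string_view>

using namespace std::string_view_literals;

int main() {
    std::cout << std::boolalpha;
    std::cout << ("70"sv < "60"sv) << '\n';  // false: '7' > '6', compared character by character
    std::cout << ("60"sv < "70"sv) << '\n';  // true
    return 0;
}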
Q: Why would we need to sort an array? This might be a silly question; for all of you who have been in this field for a while, nevertheless, I would still appreciate your insight on the matter - why does an array need to be sorted, and in what scenario would we need to sort an array? So far it is clear to me that the whole purpose of sorting is to organize the data in such a way that will minimize the searching complexity and improve the overall efficiency of our program, though I would appreciate it if someone could describe a scenario in which it would be most useful to sort an array? If we are searching for something specific like a number wouldn't the process of sorting an array be equally demanding as the process of just iterating through the array until we find what we are looking for? I hope that made sense. Thanks. This is just a general question for my coursework. A: A lot of different algorithms work much faster with sorted array, including searching, comparing and merging arrays. For one time operation, you're right, it is easier and faster to use unsorted array. But as soon as you need to repeat the operation multiple times on the same array, it is much faster to sort the array one time, and then use its benefits. Even if you are going to change array, you can keep it sorted, again it improves performance of all other operations. A: Sorting brings useful structure in a list of values. In raw data, reading a value tells you absolutely nothing about the other value in the list. In a sorted list, when you read a value, you know that all preceding elements are not larger, and following elements are not smaller. So to search a raw list, you have no other choice than exhaustive comparison, while when searching a sorted list, comparing to the middle element tells you in which half the searched value can be found and this drastically reduces the number of tests to be performed. When the list is given in sorted order, you can benefit from this. When it is given in no known order, you have to ponder if it is worth affording the cost of the sort to accelerate the searches. Sorting has other algorithmic uses than search in a list, but it is always the ordering property which is exploited.
Why would we need to sort an array?
This might be a silly question; for all of you who have been in this field for a while, nevertheless, I would still appreciate your insight on the matter - why does an array need to be sorted, and in what scenario would we need to sort an array? So far it is clear to me that the whole purpose of sorting is to organize the data in such a way that will minimize the searching complexity and improve the overall efficiency of our program, though I would appreciate it if someone could describe a scenario in which it would be most useful to sort an array? If we are searching for something specific like a number wouldn't the process of sorting an array be equally demanding as the process of just iterating through the array until we find what we are looking for? I hope that made sense. Thanks. This is just a general question for my coursework.
[ "A lot of different algorithms work much faster with sorted array, including searching, comparing and merging arrays.\nFor one time operation, you're right, it is easier and faster to use unsorted array. But as soon as you need to repeat the operation multiple times on the same array, it is much faster to sort the array one time, and then use its benefits.\nEven if you are going to change array, you can keep it sorted, again it improves performance of all other operations.\n", "Sorting brings useful structure in a list of values.\nIn raw data, reading a value tells you absolutely nothing about the other value in the list. In a sorted list, when you read a value, you know that all preceding elements are not larger, and following elements are not smaller.\nSo to search a raw list, you have no other choice than exhaustive comparison, while when searching a sorted list, comparing to the middle element tells you in which half the searched value can be found and this drastically reduces the number of tests to be performed.\n\nWhen the list is given in sorted order, you can benefit from this. When it is given in no known order, you have to ponder if it is worth affording the cost of the sort to accelerate the searches.\n\nSorting has other algorithmic uses than search in a list, but it is always the ordering property which is exploited.\n" ]
[ 0, 0 ]
[ "Sorting an array can be useful in many situations, but one common scenario is when you need to search for an element in the array. If the array is not sorted, you would need to search through the entire array to find the element you are looking for, which can be time-consuming if the array is large. However, if the array is sorted, you can use a more efficient search algorithm, such as binary search, to find the element more quickly.\nAnother reason to sort an array is to make it easier to compare elements in the array. For example, if you are looking for the largest or smallest element or kth largest or kth smallest element in the array, you can quickly find it by sorting the array in ascending or descending order, respectively. This can be useful in many applications, such as finding the maximum or minimum value in a set of data.\nAlso, say you are looking to find the smallest value which is greater than x and you are solving it for q queries, then by sorting it out, later using binary search for each query would result in O(nlgn + qlgn) time complexity.\nIn short, sorting an array can make it easier to search for elements, compare elements, and perform other operations on the data. Although it can require some computational effort to sort an array, this effort is often worthwhile because it can improve the overall efficiency of the program.\n" ]
[ -2 ]
[ "algorithm", "arrays", "sorting" ]
stackoverflow_0074679908_algorithm_arrays_sorting.txt
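Editor's note on the record above — the answers argue that one up-front sort makes repeated searches cheap; a minimal Python sketch of that trade-off, using the standard bisect module: one O(n log n) sort buys O(log n) lookups afterwards.

import bisect

data = [42, 7, 19, 3, 88, 61]
data.sort()                      # one-time cost: [3, 7, 19, 42, 61, 88]

def contains(sorted_list, x):
    # bisect_left returns the index of the first element >= x
    i = bisect.bisect_left(sorted_list, x)
    return i < len(sorted_list) and sorted_list[i] == x

print(contains(data, 19))   # True  -- found in ~log2(n) comparisons
print(contains(data, 20))   # False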
Q: H2-Database console not opening with Spring-Security i am using the H2-Database and Spring Security, but im unable to open it in the browser at http://localhost:8080/h2-console Here my pom.xml (only the H2 entry) <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> <scope>runtime</scope> </dependency> Here my application.properties spring.datasource.url=jdbc:h2:file:/data/noNameDB spring.h2.console.enabled=true spring.datasource.driverClassName=org.h2.Driver spring.datasource.username=admin spring.datasource.password=admin spring.jpa.database-platform=org.hibernate.dialect.H2Dialect spring.h2.console.path=/h2-console spring.jpa.show-sql=true spring.jpa.hibernate.ddl-auto=update spring.jackson.serialization.fail-on-empty-beans=false And here is my SecurityConfig.java import com.example.noName.security.JwtAuthenticationEntryPoint; import com.example.noName.security.JwtAuthenticationFilter; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.authentication.AuthenticationManager; import org.springframework.security.config.annotation.authentication.configuration.AuthenticationConfiguration; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.config.http.SessionCreationPolicy; import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; import org.springframework.security.crypto.password.PasswordEncoder; import org.springframework.security.web.SecurityFilterChain; import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter; @Configuration @EnableWebSecurity public class SecurityConfig { private static final String[] AUTH_WHITE_LIST = { "/v3/api-docs/**", "/swagger-ui/**", "/v2/api-docs/**", "/swagger-resources/**", "/h2-console/**", "/console/**", "/account/**" }; @Autowired private JwtAuthenticationEntryPoint jwtAuthenticationEntryPoint; @Bean public JwtAuthenticationFilter jwtAuthenticationFilter() { return new JwtAuthenticationFilter(); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } @Bean public AuthenticationManager authenticationManager( AuthenticationConfiguration authConfig) throws Exception { return authConfig.getAuthenticationManager(); } @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .cors() .and() .csrf() .disable() .exceptionHandling() .authenticationEntryPoint(jwtAuthenticationEntryPoint) .and() .sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS) .and() .authorizeHttpRequests() .requestMatchers(AUTH_WHITE_LIST) .permitAll() .and() .headers() .frameOptions() .disable() .and() .authorizeHttpRequests() .anyRequest() .authenticated() .and() .httpBasic() .and() .addFilterBefore(jwtAuthenticationFilter(), UsernamePasswordAuthenticationFilter.class) .httpBasic(); return http.build(); } } The following is shown in the console if i try to access the console via http://localhost:8080/h2-console INFO 3664 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet' INFO 3664 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet' INFO 3664 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed 
initialization in 1 ms I have already tried everything I could find on the Internet. The funny thing is that the "exception handling" works for Swagger. If I try to access the database via http://localhost:8080/h2-console I always get the error 401 - Unauthorized, which is strange because the access was allowed in the SecurityConfig.
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
    http
            .cors()
            .and()
            .csrf()
            .disable()
            .exceptionHandling()
            .authenticationEntryPoint(jwtAuthenticationEntryPoint)
            .and()
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and()
            .authorizeHttpRequests()
            .requestMatchers(AUTH_WHITE_LIST)
            .permitAll()
            .and()
            .headers()
            .frameOptions()
            .disable()
            .and()
            .authorizeHttpRequests()
            .anyRequest()
            .authenticated()
            .and()
            .httpBasic()
            .and()
            .addFilterBefore(jwtAuthenticationFilter(), UsernamePasswordAuthenticationFilter.class)
            .httpBasic();
    return http.build();
}

I can access the database through an internal database test. This is provided by Intellij. However, working/editing in the database is not possible through this.
AND: If I change the AUTH_WHITE_LIST to this, it works.
private static final String[] AUTH_WHITE_LIST = {
        "/**"
};

A: #It looks like you have not specified the server.servlet.context-path property in your application.properties file. This property specifies the context path of your application, and it looks like your application is running on a context path other than the default "/" path.
#To fix the issue, add the following line to your application.properties file:
server.servlet.context-path=<context-path>
#Replace <context-path> with the actual context path of your application. For example, if your application is running on the "/myapp" context path, you would set the server.servlet.context-path property as follows:
server.servlet.context-path=/myapp
#After doing this, you should be able to access the H2 console at the following URL:
http://localhost:8080/<context-path>/h2-console
#For example, if your context path is "/myapp", you would access the H2 console at the following URL:
http://localhost:8080/myapp/h2-console
H2-Database console not opening with Spring-Security
i am using the H2-Database and Spring Security, but im unable to open it in the browser at http://localhost:8080/h2-console Here my pom.xml (only the H2 entry) <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> <scope>runtime</scope> </dependency> Here my application.properties spring.datasource.url=jdbc:h2:file:/data/noNameDB spring.h2.console.enabled=true spring.datasource.driverClassName=org.h2.Driver spring.datasource.username=admin spring.datasource.password=admin spring.jpa.database-platform=org.hibernate.dialect.H2Dialect spring.h2.console.path=/h2-console spring.jpa.show-sql=true spring.jpa.hibernate.ddl-auto=update spring.jackson.serialization.fail-on-empty-beans=false And here is my SecurityConfig.java import com.example.noName.security.JwtAuthenticationEntryPoint; import com.example.noName.security.JwtAuthenticationFilter; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.authentication.AuthenticationManager; import org.springframework.security.config.annotation.authentication.configuration.AuthenticationConfiguration; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.config.http.SessionCreationPolicy; import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; import org.springframework.security.crypto.password.PasswordEncoder; import org.springframework.security.web.SecurityFilterChain; import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter; @Configuration @EnableWebSecurity public class SecurityConfig { private static final String[] AUTH_WHITE_LIST = { "/v3/api-docs/**", "/swagger-ui/**", "/v2/api-docs/**", "/swagger-resources/**", "/h2-console/**", "/console/**", "/account/**" }; @Autowired private JwtAuthenticationEntryPoint jwtAuthenticationEntryPoint; @Bean public JwtAuthenticationFilter jwtAuthenticationFilter() { return new JwtAuthenticationFilter(); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } @Bean public AuthenticationManager authenticationManager( AuthenticationConfiguration authConfig) throws Exception { return authConfig.getAuthenticationManager(); } @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .cors() .and() .csrf() .disable() .exceptionHandling() .authenticationEntryPoint(jwtAuthenticationEntryPoint) .and() .sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS) .and() .authorizeHttpRequests() .requestMatchers(AUTH_WHITE_LIST) .permitAll() .and() .headers() .frameOptions() .disable() .and() .authorizeHttpRequests() .anyRequest() .authenticated() .and() .httpBasic() .and() .addFilterBefore(jwtAuthenticationFilter(), UsernamePasswordAuthenticationFilter.class) .httpBasic(); return http.build(); } } The following is shown in the console if i try to access the console via http://localhost:8080/h2-console INFO 3664 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet' INFO 3664 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet' INFO 3664 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 1 ms I have already tried everything I could 
find on the Internet. The funny thing is that the "exception handling" works for Swagger. If I try to access the database via http://localhost:8080/h2-console I always get the error 401 - Unauthorized, which is strange because the access was allowed in the SecurityConfig.
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
    http
            .cors()
            .and()
            .csrf()
            .disable()
            .exceptionHandling()
            .authenticationEntryPoint(jwtAuthenticationEntryPoint)
            .and()
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and()
            .authorizeHttpRequests()
            .requestMatchers(AUTH_WHITE_LIST)
            .permitAll()
            .and()
            .headers()
            .frameOptions()
            .disable()
            .and()
            .authorizeHttpRequests()
            .anyRequest()
            .authenticated()
            .and()
            .httpBasic()
            .and()
            .addFilterBefore(jwtAuthenticationFilter(), UsernamePasswordAuthenticationFilter.class)
            .httpBasic();
    return http.build();
}

I can access the database through an internal database test. This is provided by Intellij. However, working/editing in the database is not possible through this.
AND: If I change the AUTH_WHITE_LIST to this, it works.
private static final String[] AUTH_WHITE_LIST = {
        "/**"
};
[ "#It looks like you have not specified the server.servlet.context-path property in your application.properties file. This property specifies the context path of your application, and it looks like your application is running on a context path other than the default \"/\" path.\n#To fix the issue, add the following line to your application.properties file:\nserver.servlet.context-path=\n#Replace with the actual context path of your application. For example, if your application is running on the \"/myapp\" context path, you would set the server.servlet.context-path property as follows:\nserver.servlet.context-path=/myapp\n#After doing this, you should be able to access the H2 console at the following URL:\nhttp://localhost:8080//h2-console\n#For example, if your context path is \"/myapp\", you would access the H2 console at the following URL:\nhttp://localhost:8080/myapp/h2-console\n" ]
[ 0 ]
[]
[]
[ "h2", "java", "spring_boot", "spring_security" ]
stackoverflow_0074680244_h2_java_spring_boot_spring_security.txt
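Editor's note on the record above — on Spring Boot 3 / Spring Security 6 this exact symptom ("/**" in the whitelist works, "/h2-console/**" does not) is often caused by string request matchers being resolved against Spring MVC mappings, while the H2 console is served by a plain servlet. A hedged sketch of one commonly suggested workaround, shown as an excerpt to merge into the existing filterChain bean and assuming Spring Boot 3.x is on the classpath:

import org.springframework.boot.autoconfigure.security.servlet.PathRequest;

// Excerpt only: fold these calls into the existing filterChain(HttpSecurity) bean.
http
    .csrf(csrf -> csrf.ignoringRequestMatchers(PathRequest.toH2Console()))   // the console posts forms without a CSRF token
    .headers(headers -> headers.frameOptions().disable())                    // the console UI renders inside a frame
    .authorizeHttpRequests(auth -> auth
        .requestMatchers(PathRequest.toH2Console()).permitAll()              // servlet-path matcher, not an MVC pattern
        .anyRequest().authenticated());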
Q: How to highlight and get active state for main navigation menu item and sub navigation menu item on the same page using React Material UI Tabs In my React <Header> component I use Material UI Tabs to create a primary menu for the main navigation, with react router links instead of showing tab content: <Tabs value={location.pathname}> {(items || []).map((item) => ( <Tab key={item.url} label={item.title} value={item.url} href={item.url} disableRipple /> ))} </Tabs> On the same page I use Material UI Tabs again for a secondary menu. The route path to the main navigation menu item is: /example-path. When I navigate to this route, this menu items is highlighted and has active state. When the tab of secondary menu item with the same route as the highlighted primary menu item /example-path, both menu items gets highlighted and active state. When I click on another secondary menu item only this secondary menu item gets highlighted /example-path/tab-two. How do I manage both gets highlighted in the same time, The parent from main navigation and the various menu items from the secondary menu? A: The Header component should match on only the root path segment while the Submenu component can compare the complete URL pathname. Note that if you've further sub-routes then Submenu would need to check the first two segments. Instead of the Tab components rendering a raw anchor tag <a> it should render a Link component. This is so the router can respond to and handle the navigation action instead of letting the browser make a page request to the server and reload the entire app. Header import { Tabs, Tab } from "@material-ui/core"; import { Link, useLocation } from "react-router-dom"; const Header = () => { const { pathname } = useLocation(); const base = `/${pathname.slice(1).split("/").shift()}`; return ( <Tabs value={base}> <Tab component={Link} to="/" label="Home" key="/" value="/" /> <Tab component={Link} to="/about" label="About" key="/about" value="/about" /> <Tab component={Link} to="/dashboard" label="Dashboard" key="/dashboard" value="/dashboard" /> </Tabs> ); }; export default Header; Submenu import { Tabs, Tab } from "@material-ui/core"; import { Link, useLocation } from "react-router-dom"; const Submenu = () => { const { pathname } = useLocation(); return ( <Tabs value={pathname}> <Tab component={Link} to="/about/about-one" label="About sub one" key="1" value="/about/about-one" /> <Tab component={Link} to="/about/about-two" label="About sub two" key="2" value="/about/about-two" /> </Tabs> ); }; export default Submenu; App Order the routes within the Switch in inverse order of path specificity so that more specific paths are matched before falling back to less specific paths. <Router> <Layout> <Switch> <Route path="/about/about-one"> <AboutOne /> </Route> <Route path="/about/about-two"> <AboutTwo /> </Route> <Route path="/about"> <About /> </Route> <Route path="/dashboard"> <Dashboard /> </Route> <Route path="/"> <Home /> </Route> </Switch> </Layout> </Router>
How to highlight and get active state for main navigation menu item and sub navigation menu item on the same page using React Material UI Tabs
In my React <Header> component I use Material UI Tabs to create a primary menu for the main navigation, with react router links instead of showing tab content: <Tabs value={location.pathname}> {(items || []).map((item) => ( <Tab key={item.url} label={item.title} value={item.url} href={item.url} disableRipple /> ))} </Tabs> On the same page I use Material UI Tabs again for a secondary menu. The route path to the main navigation menu item is: /example-path. When I navigate to this route, this menu items is highlighted and has active state. When the tab of secondary menu item with the same route as the highlighted primary menu item /example-path, both menu items gets highlighted and active state. When I click on another secondary menu item only this secondary menu item gets highlighted /example-path/tab-two. How do I manage both gets highlighted in the same time, The parent from main navigation and the various menu items from the secondary menu?
[ "The Header component should match on only the root path segment while the Submenu component can compare the complete URL pathname. Note that if you've further sub-routes then Submenu would need to check the first two segments.\nInstead of the Tab components rendering a raw anchor tag <a> it should render a Link component. This is so the router can respond to and handle the navigation action instead of letting the browser make a page request to the server and reload the entire app.\nHeader\nimport { Tabs, Tab } from \"@material-ui/core\";\nimport { Link, useLocation } from \"react-router-dom\";\n\nconst Header = () => {\n const { pathname } = useLocation();\n const base = `/${pathname.slice(1).split(\"/\").shift()}`;\n\n return (\n <Tabs value={base}>\n <Tab component={Link} to=\"/\" label=\"Home\" key=\"/\" value=\"/\" />\n <Tab\n component={Link}\n to=\"/about\"\n label=\"About\"\n key=\"/about\"\n value=\"/about\"\n />\n <Tab\n component={Link}\n to=\"/dashboard\"\n label=\"Dashboard\"\n key=\"/dashboard\"\n value=\"/dashboard\"\n />\n </Tabs>\n );\n};\n\nexport default Header;\n\nSubmenu\nimport { Tabs, Tab } from \"@material-ui/core\";\nimport { Link, useLocation } from \"react-router-dom\";\n\nconst Submenu = () => {\n const { pathname } = useLocation();\n\n return (\n <Tabs value={pathname}>\n <Tab\n component={Link}\n to=\"/about/about-one\"\n label=\"About sub one\"\n key=\"1\"\n value=\"/about/about-one\"\n />\n <Tab\n component={Link}\n to=\"/about/about-two\"\n label=\"About sub two\"\n key=\"2\"\n value=\"/about/about-two\"\n />\n </Tabs>\n );\n};\n\nexport default Submenu;\n\nApp\nOrder the routes within the Switch in inverse order of path specificity so that more specific paths are matched before falling back to less specific paths.\n<Router>\n <Layout>\n <Switch>\n <Route path=\"/about/about-one\">\n <AboutOne />\n </Route>\n <Route path=\"/about/about-two\">\n <AboutTwo />\n </Route>\n <Route path=\"/about\">\n <About />\n </Route>\n <Route path=\"/dashboard\">\n <Dashboard />\n </Route>\n <Route path=\"/\">\n <Home />\n </Route>\n </Switch>\n </Layout>\n</Router>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "material_ui", "navigation", "react_router", "reactjs" ]
stackoverflow_0074556799_material_ui_navigation_react_router_reactjs.txt
Q: Nested subqueries in SQL from hr.employees There is a task: using the HR.EMPLOYEES table, get a list of departments in which the average work experience is above the average for the entire company. I tried to implement it this way, I know that the request is not correct, but I don’t understand how to distribute it to the entire company select department_id from hr.employees where avg(MONTHS_BETWEEN(sysdate, hire_date)) > (select hire_date from hr.employees where avg(MONTHS_BETWEEN(sysdate, hire_date)) The database looks like this: EMPLOYEE_ID FIRST_NAME LAST_NAME EMAIL PHONE_NUMBER HIRE_DATE JOB_ID SALARY COMMISSION_PCT MANAGER_ID DEPARTMENT_ID 100 Steven King SKING 515.123.4567 17-JUN-03 AD_PRES 24000 A: [Disclaimer: The answer below is answered by OpenAI ChatGPT, if it is against the community guidelines, I will delete this answer.] SELECT department FROM HR.EMPLOYEES GROUP BY department HAVING AVG(work_experience) > (SELECT AVG(work_experience) FROM HR.EMPLOYEES) Edit 1: As OP requested to provide him a solution using nested subqueries, here is version 2: SELECT department FROM HR.EMPLOYEES WHERE department IN (SELECT department FROM HR.EMPLOYEES GROUP BY department HAVING AVG(work_experience) > (SELECT AVG(work_experience) FROM HR.EMPLOYEES)) Edit 2: As OP asked to remove the repetitions, here is the version 3: SELECT DISTINCT department FROM HR.EMPLOYEES WHERE department IN (SELECT department FROM HR.EMPLOYEES GROUP BY department HAVING AVG(work_experience) > (SELECT AVG(work_experience) FROM HR.EMPLOYEES))
Nested subqueries in SQL from hr.employees
There is a task: using the HR.EMPLOYEES table, get a list of departments in which the average work experience is above the average for the entire company. I tried to implement it this way, I know that the request is not correct, but I don’t understand how to distribute it to the entire company select department_id from hr.employees where avg(MONTHS_BETWEEN(sysdate, hire_date)) > (select hire_date from hr.employees where avg(MONTHS_BETWEEN(sysdate, hire_date)) The database looks like this: EMPLOYEE_ID FIRST_NAME LAST_NAME EMAIL PHONE_NUMBER HIRE_DATE JOB_ID SALARY COMMISSION_PCT MANAGER_ID DEPARTMENT_ID 100 Steven King SKING 515.123.4567 17-JUN-03 AD_PRES 24000
[ "[Disclaimer: The answer below is answered by OpenAI ChatGPT, if it is against the community guidelines, I will delete this answer.]\nSELECT department\nFROM HR.EMPLOYEES\nGROUP BY department\nHAVING AVG(work_experience) > (SELECT AVG(work_experience) FROM HR.EMPLOYEES)\n\nEdit 1: As OP requested to provide him a solution using nested subqueries, here is version 2:\nSELECT department\nFROM HR.EMPLOYEES\nWHERE department IN (SELECT department FROM HR.EMPLOYEES\nGROUP BY department \nHAVING AVG(work_experience) > (SELECT AVG(work_experience) \nFROM HR.EMPLOYEES))\n\nEdit 2: As OP asked to remove the repetitions, here is the version 3:\nSELECT DISTINCT department\nFROM HR.EMPLOYEES\nWHERE department IN (SELECT department FROM HR.EMPLOYEES\nGROUP BY department\nHAVING AVG(work_experience) > (SELECT AVG(work_experience) \nFROM HR.EMPLOYEES))\n\n" ]
[ 1 ]
[]
[]
[ "sql" ]
stackoverflow_0074680223_sql.txt
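Editor's note on the record above — the answers assume department and work_experience columns, but HR.EMPLOYEES as described in the question has DEPARTMENT_ID and HIRE_DATE instead. A sketch of the same aggregate-versus-overall-average idea against that actual schema, deriving tenure in months from HIRE_DATE with Oracle's MONTHS_BETWEEN (as in the question's own attempt):

SELECT department_id
FROM hr.employees
GROUP BY department_id
HAVING AVG(MONTHS_BETWEEN(SYSDATE, hire_date)) >
       (SELECT AVG(MONTHS_BETWEEN(SYSDATE, hire_date)) FROM hr.employees);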
Q: How can I make a dictionary (dict) from separate lists of keys and values? I want to combine these: keys = ['name', 'age', 'food'] values = ['Monty', 42, 'spam'] Into a single dictionary: {'name': 'Monty', 'age': 42, 'food': 'spam'} A: Like this: keys = ['a', 'b', 'c'] values = [1, 2, 3] dictionary = dict(zip(keys, values)) print(dictionary) # {'a': 1, 'b': 2, 'c': 3} Voila :-) The pairwise dict constructor and zip function are awesomely useful. A: Imagine that you have: keys = ('name', 'age', 'food') values = ('Monty', 42, 'spam') What is the simplest way to produce the following dictionary ? dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'} Most performant, dict constructor with zip new_dict = dict(zip(keys, values)) In Python 3, zip now returns a lazy iterator, and this is now the most performant approach. dict(zip(keys, values)) does require the one-time global lookup each for dict and zip, but it doesn't form any unnecessary intermediate data-structures or have to deal with local lookups in function application. Runner-up, dict comprehension: A close runner-up to using the dict constructor is to use the native syntax of a dict comprehension (not a list comprehension, as others have mistakenly put it): new_dict = {k: v for k, v in zip(keys, values)} Choose this when you need to map or filter based on the keys or value. In Python 2, zip returns a list, to avoid creating an unnecessary list, use izip instead (aliased to zip can reduce code changes when you move to Python 3). from itertools import izip as zip So that is still (2.7): new_dict = {k: v for k, v in zip(keys, values)} Python 2, ideal for <= 2.6 izip from itertools becomes zip in Python 3. izip is better than zip for Python 2 (because it avoids the unnecessary list creation), and ideal for 2.6 or below: from itertools import izip new_dict = dict(izip(keys, values)) Result for all cases: In all cases: >>> new_dict {'age': 42, 'name': 'Monty', 'food': 'spam'} Explanation: If we look at the help on dict we see that it takes a variety of forms of arguments: >>> help(dict) class dict(object) | dict() -> new empty dictionary | dict(mapping) -> new dictionary initialized from a mapping object's | (key, value) pairs | dict(iterable) -> new dictionary initialized as if via: | d = {} | for k, v in iterable: | d[k] = v | dict(**kwargs) -> new dictionary initialized with the name=value pairs | in the keyword argument list. For example: dict(one=1, two=2) The optimal approach is to use an iterable while avoiding creating unnecessary data structures. In Python 2, zip creates an unnecessary list: >>> zip(keys, values) [('name', 'Monty'), ('age', 42), ('food', 'spam')] In Python 3, the equivalent would be: >>> list(zip(keys, values)) [('name', 'Monty'), ('age', 42), ('food', 'spam')] and Python 3's zip merely creates an iterable object: >>> zip(keys, values) <zip object at 0x7f0e2ad029c8> Since we want to avoid creating unnecessary data structures, we usually want to avoid Python 2's zip (since it creates an unnecessary list). 
Less performant alternatives: This is a generator expression being passed to the dict constructor: generator_expression = ((k, v) for k, v in zip(keys, values)) dict(generator_expression) or equivalently: dict((k, v) for k, v in zip(keys, values)) And this is a list comprehension being passed to the dict constructor: dict([(k, v) for k, v in zip(keys, values)]) In the first two cases, an extra layer of non-operative (thus unnecessary) computation is placed over the zip iterable, and in the case of the list comprehension, an extra list is unnecessarily created. I would expect all of them to be less performant, and certainly not more-so. Performance review: In 64 bit Python 3.8.2 provided by Nix, on Ubuntu 16.04, ordered from fastest to slowest: >>> min(timeit.repeat(lambda: dict(zip(keys, values)))) 0.6695233230129816 >>> min(timeit.repeat(lambda: {k: v for k, v in zip(keys, values)})) 0.6941362579818815 >>> min(timeit.repeat(lambda: {keys[i]: values[i] for i in range(len(keys))})) 0.8782548159942962 >>> >>> min(timeit.repeat(lambda: dict([(k, v) for k, v in zip(keys, values)]))) 1.077607496001292 >>> min(timeit.repeat(lambda: dict((k, v) for k, v in zip(keys, values)))) 1.1840861019445583 dict(zip(keys, values)) wins even with small sets of keys and values, but for larger sets, the differences in performance will become greater. A commenter said: min seems like a bad way to compare performance. Surely mean and/or max would be much more useful indicators for real usage. We use min because these algorithms are deterministic. We want to know the performance of the algorithms under the best conditions possible. If the operating system hangs for any reason, it has nothing to do with what we're trying to compare, so we need to exclude those kinds of results from our analysis. If we used mean, those kinds of events would skew our results greatly, and if we used max we will only get the most extreme result - the one most likely affected by such an event. A commenter also says: In python 3.6.8, using mean values, the dict comprehension is indeed still faster, by about 30% for these small lists. For larger lists (10k random numbers), the dict call is about 10% faster. I presume we mean dict(zip(... with 10k random numbers. That does sound like a fairly unusual use case. It does makes sense that the most direct calls would dominate in large datasets, and I wouldn't be surprised if OS hangs are dominating given how long it would take to run that test, further skewing your numbers. And if you use mean or max I would consider your results meaningless. Let's use a more realistic size on our top examples: import numpy import timeit l1 = list(numpy.random.random(100)) l2 = list(numpy.random.random(100)) And we see here that dict(zip(... does indeed run faster for larger datasets by about 20%. >>> min(timeit.repeat(lambda: {k: v for k, v in zip(l1, l2)})) 9.698965263989521 >>> min(timeit.repeat(lambda: dict(zip(l1, l2)))) 7.9965161079890095 A: Try this: >>> import itertools >>> keys = ('name', 'age', 'food') >>> values = ('Monty', 42, 'spam') >>> adict = dict(itertools.izip(keys,values)) >>> adict {'food': 'spam', 'age': 42, 'name': 'Monty'} In Python 2, it's also more economical in memory consumption compared to zip. 
A: keys = ('name', 'age', 'food') values = ('Monty', 42, 'spam') out = dict(zip(keys, values)) Output: {'food': 'spam', 'age': 42, 'name': 'Monty'} A: You can also use dictionary comprehensions in Python ≥ 2.7: >>> keys = ('name', 'age', 'food') >>> values = ('Monty', 42, 'spam') >>> {k: v for k, v in zip(keys, values)} {'food': 'spam', 'age': 42, 'name': 'Monty'} A: A more natural way is to use dictionary comprehension keys = ('name', 'age', 'food') values = ('Monty', 42, 'spam') dict = {keys[i]: values[i] for i in range(len(keys))} A: If you need to transform keys or values before creating a dictionary then a generator expression could be used. Example: >>> adict = dict((str(k), v) for k, v in zip(['a', 1, 'b'], [2, 'c', 3])) Take a look Code Like a Pythonista: Idiomatic Python. A: with Python 3.x, goes for dict comprehensions keys = ('name', 'age', 'food') values = ('Monty', 42, 'spam') dic = {k:v for k,v in zip(keys, values)} print(dic) More on dict comprehensions here, an example is there: >>> print {i : chr(65+i) for i in range(4)} {0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'} A: For those who need simple code and aren’t familiar with zip: List1 = ['This', 'is', 'a', 'list'] List2 = ['Put', 'this', 'into', 'dictionary'] This can be done by one line of code: d = {List1[n]: List2[n] for n in range(len(List1))} A: you can use this below code: dict(zip(['name', 'age', 'food'], ['Monty', 42, 'spam'])) But make sure that length of the lists will be same.if length is not same.then zip function turncate the longer one. A: 2018-04-18 The best solution is still: In [92]: keys = ('name', 'age', 'food') ...: values = ('Monty', 42, 'spam') ...: In [93]: dt = dict(zip(keys, values)) In [94]: dt Out[94]: {'age': 42, 'food': 'spam', 'name': 'Monty'} Tranpose it: lst = [('name', 'Monty'), ('age', 42), ('food', 'spam')] keys, values = zip(*lst) In [101]: keys Out[101]: ('name', 'age', 'food') In [102]: values Out[102]: ('Monty', 42, 'spam') A: Here is also an example of adding a list value in you dictionary list1 = ["Name", "Surname", "Age"] list2 = [["Cyd", "JEDD", "JESS"], ["DEY", "AUDIJE", "PONGARON"], [21, 32, 47]] dic = dict(zip(list1, list2)) print(dic) always make sure the your "Key"(list1) is always in the first parameter. {'Name': ['Cyd', 'JEDD', 'JESS'], 'Surname': ['DEY', 'AUDIJE', 'PONGARON'], 'Age': [21, 32, 47]} A: I had this doubt while I was trying to solve a graph-related problem. The issue I had was I needed to define an empty adjacency list and wanted to initialize all the nodes with an empty list, that's when I thought how about I check if it is fast enough, I mean if it will be worth doing a zip operation rather than simple assignment key-value pair. After all most of the times, the time factor is an important ice breaker. So I performed timeit operation for both approaches. 
import timeit

def dictionary_creation(n_nodes):
    dummy_dict = dict()
    for node in range(n_nodes):
        dummy_dict[node] = []
    return dummy_dict


def dictionary_creation_1(n_nodes):
    keys = list(range(n_nodes))
    values = [[] for i in range(n_nodes)]
    graph = dict(zip(keys, values))
    return graph


def wrapper(func, *args, **kwargs):
    def wrapped():
        return func(*args, **kwargs)
    return wrapped


n_nodes = 10_000_000
iteration = wrapper(dictionary_creation, n_nodes)
shorthand = wrapper(dictionary_creation_1, n_nodes)

for trials in range(1, 8):
    print(f'Iteration: {timeit.timeit(iteration, number=trials)}\nShorthand: {timeit.timeit(shorthand, number=trials)}')

For n_nodes = 10,000,000 I get,
Iteration: 2.825081646999024
Shorthand: 3.535717916001886
Iteration: 5.051560923002398
Shorthand: 6.255070794999483
Iteration: 6.52859034499852
Shorthand: 8.221581164998497
Iteration: 8.683652416999394
Shorthand: 12.599181543999293
Iteration: 11.587241565001023
Shorthand: 15.27298851100204
Iteration: 14.816342867001367
Shorthand: 17.162912737003353
Iteration: 16.645022411001264
Shorthand: 19.976680120998935
You can clearly see that after a certain point, the iteration approach at the n-th step overtakes the time taken by the shorthand approach at the (n-1)-th step.
A: It can be done by the following way.
keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
dict = {}
for i in range(len(keys)):
    dict[keys[i]] = values[i]
print(dict)

{'name': 'Monty', 'age': 42, 'food': 'spam'}

A: All answers sum up:
l = [1, 5, 8, 9]
ll = [3, 7, 10, 11]

zip:
dict(zip(l,ll)) # {1: 3, 5: 7, 8: 10, 9: 11}

#if you want to play with key or value @recommended
{k:v*10 for k, v in zip(l, ll)} #{1: 30, 5: 70, 8: 100, 9: 110}

counter:
d = {}
c=0
for k in l:
    d[k] = ll[c] #setting up keys from the second list values
    c += 1
print(d)
{1: 3, 5: 7, 8: 10, 9: 11}

enumerate:
d = {}
for i,k in enumerate(l):
    d[k] = ll[i]
print(d)
{1: 3, 5: 7, 8: 10, 9: 11}

A: Solution as dictionary comprehension with enumerate:
dict = {item : values[index] for index, item in enumerate(keys)}

Solution as for loop with enumerate:
dict = {}
for index, item in enumerate(keys):
    dict[item] = values[index]

A: If you are working with more than 1 set of values and wish to have a list of dicts you can use this:
def as_dict_list(data: list, columns: list):
    return [dict((zip(columns, row))) for row in data]

Real-life example would be a list of tuples from a db query paired to a tuple of columns from the same query. Other answers only provided for 1 to 1.
A: keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
dic = {}
c = 0
for i in keys:
    dic[i] = values[c]
    c += 1
print(dic)

{'name': 'Monty', 'age': 42, 'food': 'spam'}

A: import pprint
def makeDictUsingAlternateLists1(**rest):
    print("*rest.keys() : ",*rest.keys())
    print("rest.keys() : ",rest.keys())
    print("*rest.values() : ",*rest.values())
    print("**rest.keys() : ",rest.keys())
    print("**rest.values() : ",rest.values())
    [print(a) for a in zip(*rest.values())]
    [ print(dict(zip(rest.keys(),a))) for a in zip(*rest.values())]
    print("...")
    finalRes= [ dict( zip( rest.keys(),a)) for a in zip(*rest.values())]
    return finalRes

# sample inputs, reconstructed from the printed output below
p = ['A', 'B', 'C']
q = [5, 2, 7]
r = ['M', 'F', 'M']
s = ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']

l = makeDictUsingAlternateLists1(p=p,q=q,r=r,s=s)
pprint.pprint(l)
"""
*rest.keys() :  p q r s
rest.keys() :  dict_keys(['p', 'q', 'r', 's'])
*rest.values() :  ['A', 'B', 'C'] [5, 2, 7] ['M', 'F', 'M'] ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']
**rest.keys() :  dict_keys(['p', 'q', 'r', 's'])
**rest.values() :  dict_values([['A', 'B', 'C'], [5, 2, 7], ['M', 'F', 'M'], ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']])
('A', 5, 'M', 'Sovabazaar')
('B', 2, 'F', 'Shyambazaar')
('C', 7, 'M', 'Bagbazaar')
{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'}
{'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'}
{'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}
...
[{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'},
 {'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'},
 {'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}]
"""
How can I make a dictionary (dict) from separate lists of keys and values?
I want to combine these: keys = ['name', 'age', 'food'] values = ['Monty', 42, 'spam'] Into a single dictionary: {'name': 'Monty', 'age': 42, 'food': 'spam'}
[ "Like this:\nkeys = ['a', 'b', 'c']\nvalues = [1, 2, 3]\ndictionary = dict(zip(keys, values))\nprint(dictionary) # {'a': 1, 'b': 2, 'c': 3}\n\nVoila :-) The pairwise dict constructor and zip function are awesomely useful.\n", "\nImagine that you have:\nkeys = ('name', 'age', 'food')\nvalues = ('Monty', 42, 'spam')\n\nWhat is the simplest way to produce the following dictionary ?\ndict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}\n\n\nMost performant, dict constructor with zip\nnew_dict = dict(zip(keys, values))\n\nIn Python 3, zip now returns a lazy iterator, and this is now the most performant approach.\ndict(zip(keys, values)) does require the one-time global lookup each for dict and zip, but it doesn't form any unnecessary intermediate data-structures or have to deal with local lookups in function application.\nRunner-up, dict comprehension:\nA close runner-up to using the dict constructor is to use the native syntax of a dict comprehension (not a list comprehension, as others have mistakenly put it):\nnew_dict = {k: v for k, v in zip(keys, values)}\n\nChoose this when you need to map or filter based on the keys or value.\nIn Python 2, zip returns a list, to avoid creating an unnecessary list, use izip instead (aliased to zip can reduce code changes when you move to Python 3).\nfrom itertools import izip as zip\n\nSo that is still (2.7):\nnew_dict = {k: v for k, v in zip(keys, values)}\n\nPython 2, ideal for <= 2.6\nizip from itertools becomes zip in Python 3. izip is better than zip for Python 2 (because it avoids the unnecessary list creation), and ideal for 2.6 or below:\nfrom itertools import izip\nnew_dict = dict(izip(keys, values))\n\nResult for all cases:\nIn all cases:\n>>> new_dict\n{'age': 42, 'name': 'Monty', 'food': 'spam'}\n\nExplanation:\nIf we look at the help on dict we see that it takes a variety of forms of arguments:\n\n>>> help(dict)\n\nclass dict(object)\n | dict() -> new empty dictionary\n | dict(mapping) -> new dictionary initialized from a mapping object's\n | (key, value) pairs\n | dict(iterable) -> new dictionary initialized as if via:\n | d = {}\n | for k, v in iterable:\n | d[k] = v\n | dict(**kwargs) -> new dictionary initialized with the name=value pairs\n | in the keyword argument list. For example: dict(one=1, two=2)\n\n\nThe optimal approach is to use an iterable while avoiding creating unnecessary data structures. In Python 2, zip creates an unnecessary list:\n>>> zip(keys, values)\n[('name', 'Monty'), ('age', 42), ('food', 'spam')]\n\nIn Python 3, the equivalent would be:\n>>> list(zip(keys, values))\n[('name', 'Monty'), ('age', 42), ('food', 'spam')]\n\nand Python 3's zip merely creates an iterable object:\n>>> zip(keys, values)\n<zip object at 0x7f0e2ad029c8>\n\nSince we want to avoid creating unnecessary data structures, we usually want to avoid Python 2's zip (since it creates an unnecessary list).\nLess performant alternatives:\nThis is a generator expression being passed to the dict constructor:\ngenerator_expression = ((k, v) for k, v in zip(keys, values))\ndict(generator_expression)\n\nor equivalently:\ndict((k, v) for k, v in zip(keys, values))\n\nAnd this is a list comprehension being passed to the dict constructor:\ndict([(k, v) for k, v in zip(keys, values)])\n\nIn the first two cases, an extra layer of non-operative (thus unnecessary) computation is placed over the zip iterable, and in the case of the list comprehension, an extra list is unnecessarily created. 
I would expect all of them to be less performant, and certainly not more-so.\nPerformance review:\nIn 64 bit Python 3.8.2 provided by Nix, on Ubuntu 16.04, ordered from fastest to slowest:\n>>> min(timeit.repeat(lambda: dict(zip(keys, values))))\n0.6695233230129816\n>>> min(timeit.repeat(lambda: {k: v for k, v in zip(keys, values)}))\n0.6941362579818815\n>>> min(timeit.repeat(lambda: {keys[i]: values[i] for i in range(len(keys))}))\n0.8782548159942962\n>>> \n>>> min(timeit.repeat(lambda: dict([(k, v) for k, v in zip(keys, values)])))\n1.077607496001292\n>>> min(timeit.repeat(lambda: dict((k, v) for k, v in zip(keys, values))))\n1.1840861019445583\n\ndict(zip(keys, values)) wins even with small sets of keys and values, but for larger sets, the differences in performance will become greater.\nA commenter said:\n\nmin seems like a bad way to compare performance. Surely mean and/or max would be much more useful indicators for real usage.\n\nWe use min because these algorithms are deterministic. We want to know the performance of the algorithms under the best conditions possible. \nIf the operating system hangs for any reason, it has nothing to do with what we're trying to compare, so we need to exclude those kinds of results from our analysis.\nIf we used mean, those kinds of events would skew our results greatly, and if we used max we would only get the most extreme result - the one most likely affected by such an event.\nA commenter also says:\n\nIn python 3.6.8, using mean values, the dict comprehension is indeed still faster, by about 30% for these small lists. For larger lists (10k random numbers), the dict call is about 10% faster. \n\nI presume we mean dict(zip(... with 10k random numbers. That does sound like a fairly unusual use case. It does make sense that the most direct calls would dominate in large datasets, and I wouldn't be surprised if OS hangs are dominating given how long it would take to run that test, further skewing your numbers. And if you use mean or max I would consider your results meaningless.\nLet's use a more realistic size on our top examples:\nimport numpy\nimport timeit\nl1 = list(numpy.random.random(100))\nl2 = list(numpy.random.random(100))\n\nAnd we see here that dict(zip(... does indeed run faster for larger datasets by about 20%.\n>>> min(timeit.repeat(lambda: {k: v for k, v in zip(l1, l2)}))\n9.698965263989521\n>>> min(timeit.repeat(lambda: dict(zip(l1, l2))))\n7.9965161079890095\n\n", "Try this:\n>>> import itertools\n>>> keys = ('name', 'age', 'food')\n>>> values = ('Monty', 42, 'spam')\n>>> adict = dict(itertools.izip(keys,values))\n>>> adict\n{'food': 'spam', 'age': 42, 'name': 'Monty'}\n\nIn Python 2, it's also more economical in memory consumption compared to zip.\n", "keys = ('name', 'age', 'food')\nvalues = ('Monty', 42, 'spam')\nout = dict(zip(keys, values))\n\nOutput:\n{'food': 'spam', 'age': 42, 'name': 'Monty'}\n\n", "You can also use dictionary comprehensions in Python ≥ 2.7:\n>>> keys = ('name', 'age', 'food')\n>>> values = ('Monty', 42, 'spam')\n>>> {k: v for k, v in zip(keys, values)}\n{'food': 'spam', 'age': 42, 'name': 'Monty'}\n\n", "A more natural way is to use dictionary comprehension \nkeys = ('name', 'age', 'food')\nvalues = ('Monty', 42, 'spam') \ndict = {keys[i]: values[i] for i in range(len(keys))}\n\n", "If you need to transform keys or values before creating a dictionary then a generator expression could be used. 
Example:\n>>> adict = dict((str(k), v) for k, v in zip(['a', 1, 'b'], [2, 'c', 3])) \n\nTake a look at Code Like a Pythonista: Idiomatic Python.\n", "With Python 3.x, go for dict comprehensions\nkeys = ('name', 'age', 'food')\nvalues = ('Monty', 42, 'spam')\n\ndic = {k:v for k,v in zip(keys, values)}\n\nprint(dic)\n\nMore on dict comprehensions here, an example is there:\n>>> print {i : chr(65+i) for i in range(4)}\n {0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}\n\n", "For those who need simple code and aren’t familiar with zip:\nList1 = ['This', 'is', 'a', 'list']\nList2 = ['Put', 'this', 'into', 'dictionary']\n\nThis can be done by one line of code:\nd = {List1[n]: List2[n] for n in range(len(List1))}\n\n", "You can use the below code:\ndict(zip(['name', 'age', 'food'], ['Monty', 42, 'spam']))\n\nBut make sure that the lengths of the lists are the same. If the lengths are not the same, then the zip function truncates the longer one.\n", "\n2018-04-18\n\nThe best solution is still:\nIn [92]: keys = ('name', 'age', 'food')\n...: values = ('Monty', 42, 'spam')\n...: \n\nIn [93]: dt = dict(zip(keys, values))\nIn [94]: dt\nOut[94]: {'age': 42, 'food': 'spam', 'name': 'Monty'}\n\nTranspose it:\n lst = [('name', 'Monty'), ('age', 42), ('food', 'spam')]\n keys, values = zip(*lst)\n In [101]: keys\n Out[101]: ('name', 'age', 'food')\n In [102]: values\n Out[102]: ('Monty', 42, 'spam')\n\n", "Here is also an example of adding a list value in your dictionary\nlist1 = [\"Name\", \"Surname\", \"Age\"]\nlist2 = [[\"Cyd\", \"JEDD\", \"JESS\"], [\"DEY\", \"AUDIJE\", \"PONGARON\"], [21, 32, 47]]\ndic = dict(zip(list1, list2))\nprint(dic)\n\nAlways make sure that your \"Key\" (list1) is in the first parameter.\n{'Name': ['Cyd', 'JEDD', 'JESS'], 'Surname': ['DEY', 'AUDIJE', 'PONGARON'], 'Age': [21, 32, 47]}\n\n", "I had this doubt while I was trying to solve a graph-related problem. The issue I had was I needed to define an empty adjacency list and wanted to initialize all the nodes with an empty list, that's when I thought how about I check if it is fast enough, I mean if it will be worth doing a zip operation rather than simple assignment key-value pair. After all most of the times, the time factor is an important ice breaker. 
So I performed a timeit operation for both approaches.\nimport timeit\ndef dictionary_creation(n_nodes):\n dummy_dict = dict()\n for node in range(n_nodes):\n dummy_dict[node] = []\n return dummy_dict\n\n\ndef dictionary_creation_1(n_nodes):\n keys = list(range(n_nodes))\n values = [[] for i in range(n_nodes)]\n graph = dict(zip(keys, values))\n return graph\n\n\ndef wrapper(func, *args, **kwargs):\n def wrapped():\n return func(*args, **kwargs)\n return wrapped\n\nn_nodes = 10_000_000\niteration = wrapper(dictionary_creation, n_nodes)\nshorthand = wrapper(dictionary_creation_1, n_nodes)\n\nfor trials in range(1, 8):\n print(f'Iteration: {timeit.timeit(iteration, number=trials)}\\nShorthand: {timeit.timeit(shorthand, number=trials)}')\n\nFor n_nodes = 10,000,000\nI get,\nIteration: 2.825081646999024\nShorthand: 3.535717916001886\nIteration: 5.051560923002398\nShorthand: 6.255070794999483\nIteration: 6.52859034499852\nShorthand: 8.221581164998497\nIteration: 8.683652416999394\nShorthand: 12.599181543999293\nIteration: 11.587241565001023\nShorthand: 15.27298851100204\nIteration: 14.816342867001367\nShorthand: 17.162912737003353\nIteration: 16.645022411001264\nShorthand: 19.976680120998935\nYou can clearly see that after a certain point, the iteration approach at the n-th step overtakes the time taken by the shorthand approach at the (n-1)-th step.\n", "It can be done in the following way.\nkeys = ['name', 'age', 'food']\nvalues = ['Monty', 42, 'spam'] \n\ndict = {}\n\nfor i in range(len(keys)):\n dict[keys[i]] = values[i]\n \nprint(dict)\n\n{'name': 'Monty', 'age': 42, 'food': 'spam'}\n\n", "All answers sum up:\nl = [1, 5, 8, 9]\nll = [3, 7, 10, 11]\n\nzip:\ndict(zip(l,ll)) # {1: 3, 5: 7, 8: 10, 9: 11}\n\n#if you want to play with key or value @recommended\n\n{k:v*10 for k, v in zip(l, ll)} #{1: 30, 5: 70, 8: 100, 9: 110}\n\ncounter:\nd = {}\nc=0\nfor k in l:\n d[k] = ll[c] #setting up keys from the second list values\n c += 1\nprint(d)\n{1: 3, 5: 7, 8: 10, 9: 11}\n\n\nenumerate:\nd = {}\nfor i,k in enumerate(l):\n d[k] = ll[i]\nprint(d)\n{1: 3, 5: 7, 8: 10, 9: 11}\n\n", "Solution as dictionary comprehension with enumerate:\ndict = {item : values[index] for index, item in enumerate(keys)}\n\nSolution as for loop with enumerate:\ndict = {}\nfor index, item in enumerate(keys):\n dict[item] = values[index]\n\n", "If you are working with more than 1 set of values and wish to have a list of dicts you can use this:\ndef as_dict_list(data: list, columns: list):\n return [dict((zip(columns, row))) for row in data]\n\nReal-life example would be a list of tuples from a db query paired to a tuple of columns from the same query. 
Other answers only provided for 1 to 1.\n", "keys = ['name', 'age', 'food']\nvalues = ['Monty', 42, 'spam']\ndic = {}\nc = 0\nfor i in keys:\n dic[i] = values[c]\n c += 1\n\nprint(dic)\n{'name': 'Monty', 'age': 42, 'food': 'spam'}\n\n", " import pprint\n p = ['A', 'B', 'C']\n q = [5, 2, 7]\n r = ['M', 'F', 'M']\n s = ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']\n def makeDictUsingAlternateLists1(**rest):\n print(\"*rest.keys() : \",*rest.keys())\n print(\"rest.keys() : \",rest.keys())\n print(\"*rest.values() : \",*rest.values())\n print(\"**rest.keys() : \",rest.keys())\n print(\"**rest.values() : \",rest.values())\n [print(a) for a in zip(*rest.values())]\n \n [ print(dict(zip(rest.keys(),a))) for a in zip(*rest.values())]\n print(\"...\")\n \n \n finalRes= [ dict( zip( rest.keys(),a)) for a in zip(*rest.values())] \n return finalRes\n \n l = makeDictUsingAlternateLists1(p=p,q=q,r=r,s=s)\n pprint.pprint(l) \n\"\"\"\n*rest.keys() : p q r s\nrest.keys() : dict_keys(['p', 'q', 'r', 's'])\n*rest.values() : ['A', 'B', 'C'] [5, 2, 7] ['M', 'F', 'M'] ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']\n**rest.keys() : dict_keys(['p', 'q', 'r', 's'])\n**rest.values() : dict_values([['A', 'B', 'C'], [5, 2, 7], ['M', 'F', 'M'], ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']])\n('A', 5, 'M', 'Sovabazaar')\n('B', 2, 'F', 'Shyambazaar')\n('C', 7, 'M', 'Bagbazaar')\n{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'}\n{'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'}\n{'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}\n...\n[{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'},\n {'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'},\n {'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}]\n\"\"\"\n\n" ]
[ 2791, 220, 134, 40, 31, 19, 15, 11, 10, 3, 3, 2, 2, 1, 1, 0, 0, 0, 0 ]
[ "method without zip function\nl1 = [1,2,3,4,5]\nl2 = ['a','b','c','d','e']\nd1 = {}\nfor l1_ in l1:\n for l2_ in l2:\n d1[l1_] = l2_\n l2.remove(l2_)\n break \n\nprint (d1)\n\n\n{1: 'd', 2: 'b', 3: 'e', 4: 'a', 5: 'c'}\n\n", "Although there are multiple ways of doing this but i think most fundamental way of approaching it; creating a loop and dictionary and store values into that dictionary. In the recursive approach the idea is still same it but instead of using a loop, the function called itself until it reaches to the end. Of course there are other approaches like using dict(zip(key, value)) and etc. These aren't the most effective solutions.\ny = [1,2,3,4]\nx = [\"a\",\"b\",\"c\",\"d\"]\n\n# This below is a brute force method\nobj = {}\nfor i in range(len(y)):\n obj[y[i]] = x[i]\nprint(obj)\n\n# Recursive approach \nobj = {}\ndef map_two_lists(a,b,j=0):\n if j < len(a):\n obj[b[j]] = a[j]\n j +=1\n map_two_lists(a, b, j)\n return obj\n \n\n\nres = map_two_lists(x,y)\nprint(res)\n\n\nBoth the results should print\n{1: 'a', 2: 'b', 3: 'c', 4: 'd'} \n\n" ]
[ -1, -1 ]
[ "dictionary", "list", "python" ]
stackoverflow_0000209840_dictionary_list_python.txt
Q: Comparing date part only without comparing time in JavaScript What is wrong with the code below? Maybe it would be simpler to just compare date and not time. I am not sure how to do this either, and I searched, but I couldn't find my exact problem. BTW, when I display the two dates in an alert, they show as exactly the same. My code: window.addEvent('domready', function() { var now = new Date(); var input = $('datum').getValue(); var dateArray = input.split('/'); var userMonth = parseInt(dateArray[1])-1; var userDate = new Date(); userDate.setFullYear(dateArray[2], userMonth, dateArray[0], now.getHours(), now.getMinutes(), now.getSeconds(), now.getMilliseconds()); if (userDate > now) { alert(now + '\n' + userDate); } }); Is there a simpler way to compare dates and not including the time? A: I'm still learning JavaScript, and the only way that I've found which works for me to compare two dates without the time is to use the setHours method of the Date object and set the hours, minutes, seconds and milliseconds to zero. Then compare the two dates. For example, date1 = new Date() date2 = new Date(2011,8,20) date2 will be set with hours, minutes, seconds and milliseconds to zero, but date1 will have them set to the time that date1 was created. To get rid of the hours, minutes, seconds and milliseconds on date1 do the following: date1.setHours(0,0,0,0) Now you can compare the two dates as DATES only without worrying about time elements. A: BEWARE THE TIMEZONE Using the date object to represent just-a-date straight away gets you into a huge excess precision problem. You need to manage time and timezone to keep them out, and they can sneak back in at any step. The accepted answer to this question falls into the trap. A javascript date has no notion of timezone. It's a moment in time (ticks since the epoch) with handy (static) functions for translating to and from strings, using by default the "local" timezone of the device, or, if specified, UTC or another timezone. To represent just-a-date™ with a date object, you want your dates to represent UTC midnight at the start of the date in question. This is a common and necessary convention that lets you work with dates regardless of the season or timezone of their creation. So you need to be very vigilant to manage the notion of timezone, both when you create your midnight UTC Date object, and when you serialize it. Lots of folks are confused by the default behaviour of the console. If you spray a date to the console, the output you see will include your timezone. This is just because the console calls toString() on your date, and toString() gives you a local representation. The underlying date has no timezone! (So long as the time matches the timezone offset, you still have a midnight UTC date object) Deserializing (or creating midnight UTC Date objects) This is the rounding step, with the trick that there are two "right" answers. Most of the time, you will want your date to reflect the local timezone of the user. What's the date here where I am.. Users in NZ and US can click at the same time and usually get different dates. In that case, do this... // create a date (utc midnight) reflecting the value of myDate and the environment's timezone offset. new Date(Date.UTC(myDate.getFullYear(),myDate.getMonth(), myDate.getDate())); Sometimes, international comparability trumps local accuracy. In that case, do this... // the date in London of a moment in time. Device timezone is ignored.
new Date(Date.UTC(myDate.getUTCFullYear(), myDate.getUTCMonth(), myDate.getUTCDate())); Deserialize a date Often dates on the wire will be in the format YYYY-MM-DD. To deserialize them, do this... var midnightUTCDate = new Date( dateString + 'T00:00:00Z'); Serializing Having taken care to manage timezone when you create, you now need to be sure to keep timezone out when you convert back to a string representation. So you can safely use... toISOString() getUTCxxx() getTime() //returns a number with no time or timezone. .toLocaleDateString("fr",{timeZone:"UTC"}) // whatever locale you want, but ALWAYS UTC. And totally avoid everything else, especially... getYear(),getMonth(),getDate() So to answer your question, 7 years too late... <input type="date" onchange="isInPast(event)"> <script> var isInPast = function(event){ var userEntered = new Date(event.target.valueAsNumber); // valueAsNumber has no time or timezone! var now = new Date(); var today = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() )); if(userEntered.getTime() < today.getTime()) alert("date is past"); else if(userEntered.getTime() == today.getTime()) alert("date is today"); else alert("date is future"); } </script> See it running... Update 2022... free stuff with tests ... The code below is now an npm package, Epoq. The code is on github. You're welcome :-) Update 2019... free stuff... Given the popularity of this answer, I've put it all in code. The following function returns a wrapped date object, and only exposes those functions that are safe to use with just-a-date™. Call it with a Date object and it will resolve to JustADate reflecting the timezone of the user. Call it with a string: if the string is an ISO 8601 with timezone specified, we'll just round off the time part. If timezone is not specified, we'll convert it to a date reflecting the local timezone, just as for date objects. function JustADate(initDate){ var utcMidnightDateObj = null // if no date supplied, use Now. if(!initDate) initDate = new Date(); // if initDate specifies a timezone offset, or is already UTC, just keep the date part, reflecting the date _in that timezone_ if(typeof initDate === "string" && initDate.match(/(-\d\d|(\+|-)\d{2}:\d{2}|Z)$/gm)){ utcMidnightDateObj = new Date( initDate.substring(0,10) + 'T00:00:00Z'); } else { // if init date is not already a date object, feed it to the date constructor. if(!(initDate instanceof Date)) initDate = new Date(initDate); // Vital Step! Strip time part. Create UTC midnight dateObj according to local timezone. 
utcMidnightDateObj = new Date(Date.UTC(initDate.getFullYear(),initDate.getMonth(), initDate.getDate())); } return { toISOString:()=>utcMidnightDateObj.toISOString(), getUTCDate:()=>utcMidnightDateObj.getUTCDate(), getUTCDay:()=>utcMidnightDateObj.getUTCDay(), getUTCFullYear:()=>utcMidnightDateObj.getUTCFullYear(), getUTCMonth:()=>utcMidnightDateObj.getUTCMonth(), setUTCDate:(arg)=>utcMidnightDateObj.setUTCDate(arg), setUTCFullYear:(arg)=>utcMidnightDateObj.setUTCFullYear(arg), setUTCMonth:(arg)=>utcMidnightDateObj.setUTCMonth(arg), addDays:(days)=>{ utcMidnightDateObj.setUTCDate(utcMidnightDateObj.getUTCDate() + days) }, toString:()=>utcMidnightDateObj.toString(), toLocaleDateString:(locale,options)=>{ options = options || {}; options.timeZone = "UTC"; locale = locale || "en-EN"; return utcMidnightDateObj.toLocaleDateString(locale,options) } } } // if initDate already has a timezone, we'll just use the date part directly console.log(JustADate('1963-11-22T12:30:00-06:00').toLocaleDateString()) // Test case from @prototype's comment console.log("@prototype's issue fixed... " + JustADate('1963-11-22').toLocaleDateString()) A: How about this? Date.prototype.withoutTime = function () { var d = new Date(this); d.setHours(0, 0, 0, 0); return d; } It allows you to compare the date part of the date like this without affecting the value of your variable: var date1 = new Date(2014,1,1); new Date().withoutTime() > date1.withoutTime(); // true A: Using Moment.js If you have the option of including a third-party library, it's definitely worth taking a look at Moment.js. It makes working with Date and DateTime much, much easier. For example, seeing if one Date comes after another Date but excluding their times, you would do something like this: var date1 = new Date(2016,9,20,12,0,0); // October 20, 2016 12:00:00 var date2 = new Date(2016,9,20,12,1,0); // October 20, 2016 12:01:00 // Comparison including time. moment(date2).isAfter(date1); // => true // Comparison excluding time. moment(date2).isAfter(date1, 'day'); // => false The second parameter you pass into isAfter is the precision to do the comparison and can be any of year, month, week, day, hour, minute or second. A: Simply compare using .toDateString like below: new Date().toDateString(); This will return you date part only and not time or timezone, like this: "Fri Feb 03 2017" Hence both date can be compared in this format likewise without time part of it. A: Just use toDateString() on both dates. toDateString doesn't include the time, so for 2 times on the same date, the values will be equal, as demonstrated below. var d1 = new Date(2019,01,01,1,20) var d2 = new Date(2019,01,01,2,20) console.log(d1==d2) // false console.log(d1.toDateString() == d2.toDateString()) // true Obviously some of the timezone concerns expressed elsewhere on this question are valid, but in many scenarios, those are not relevant.
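To make the timezone caveat above concrete, here is a minimal sketch (the instant and the two zones are arbitrary choices for illustration): var instant = new Date('2019-01-01T05:30:00Z'); console.log(instant.toLocaleDateString('en-US', { timeZone: 'UTC' })); // "1/1/2019" console.log(instant.toLocaleDateString('en-US', { timeZone: 'America/Los_Angeles' })); // "12/31/2018" The same instant falls on different calendar dates depending on the zone used for formatting, which is exactly the scenario where a toDateString comparison (always done in the device's local zone) can give surprising results.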
A: If you are truly comparing date only with no time component, another solution that may feel wrong but works and avoids all Date() time and timezone headaches is to compare the ISO string date directly using string comparison: > "2019-04-22" <= "2019-04-23" true > "2019-04-22" <= "2019-04-22" true > "2019-04-22" <= "2019-04-21" false > "2019-04-22" === "2019-04-22" true You can get the current date (UTC date, not necessarily the user's local date) using: > new Date().toISOString().split("T")[0] "2019-04-22" My argument in favor of it is programmer simplicity -- you're much less likely to botch this than trying to handle datetimes and offsets correctly, probably at the cost of speed (I haven't compared performance) A: This might be a little cleaner version, also note that you should always use a radix when using parseInt. window.addEvent('domready', function() { // Create a Date object set to midnight on today's date var today = new Date((new Date()).setHours(0, 0, 0, 0)), input = $('datum').getValue(), dateArray = input.split('/'), // Always specify a radix with parseInt(), setting the radix to 10 ensures that // the number is interpreted as a decimal. It is particularly important with // dates, if the user had entered '09' for the month and you don't use a // radix '09' is interpreted as an octal number and parseInt would return 0, not 9! userMonth = parseInt(dateArray[1], 10) - 1, // Create a Date object set to midnight on the day the user specified userDate = new Date(dateArray[2], userMonth, dateArray[0], 0, 0, 0, 0); // Convert date objects to milliseconds and compare if(userDate.getTime() > today.getTime()) { alert(today+'\n'+userDate); } }); Checkout the MDC parseInt page for more information about the radix. JSLint is a great tool for catching things like a missing radix and many other things that can cause obscure and hard to debug errors. It forces you to use better coding standards so you avoid future headaches. I use it on every JavaScript project I code. A: An efficient and correct way to compare dates is: Math.floor(date1.getTime() / 86400000) > Math.floor(date2.getTime() / 86400000); It ignores the time part, it works for different timezones, and you can compare for equality == too. 86400000 is the number of milliseconds in a day (= 24*60*60*1000). Beware that the equality operator == should never be used for comparing Date objects because it fails when you would expect an equality test to work because it is comparing two Date objects (and does not compare the two dates) e.g.: > date1; outputs: Thu Mar 08 2018 00:00:00 GMT+1300 > date2; outputs: Thu Mar 08 2018 00:00:00 GMT+1300 > date1 == date2; outputs: false > Math.floor(date1.getTime() / 86400000) == Math.floor(date2.getTime() / 86400000); outputs: true Notes: If you are comparing Date objects that have the time part set to zero, then you could use date1.getTime() == date2.getTime() but it is hardly worth the optimisation. You can use <, >, <=, or >= when comparing Date objects directly because these operators first convert the Date object by calling .valueOf() before the operator does the comparison. A: As I don't see a similar approach here, and I'm not enjoying setting h/m/s/ms to 0, as it can cause problems with an accurate transition to the local time zone with a changed date object (I presume so), let me introduce this little function, written a few moments ago: +: Easy to use, performs the basic comparison operations (comparing day, month and year without time).
-: It seems that this is a complete opposite of "out of the box" thinking. function datecompare(date1, sign, date2) { var day1 = date1.getDate(); var mon1 = date1.getMonth(); var year1 = date1.getFullYear(); var day2 = date2.getDate(); var mon2 = date2.getMonth(); var year2 = date2.getFullYear(); if (sign === '===') { if (day1 === day2 && mon1 === mon2 && year1 === year2) return true; else return false; } else if (sign === '>') { if (year1 > year2) return true; else if (year1 === year2 && mon1 > mon2) return true; else if (year1 === year2 && mon1 === mon2 && day1 > day2) return true; else return false; } } Usage: datecompare(date1, '===', date2) for equality check, datecompare(date1, '>', date2) for greater check, !datecompare(date1, '>', date2) for less or equal check Also, obviously, you can switch date1 and date2 in places to achieve any other simple comparison. A: This JS will change the content after the set date. Here's the same thing, but on w3schools date1 = new Date() date2 = new Date(2019,5,2) //the date you are comparing date1.setHours(0,0,0,0) var stockcnt = document.getElementById('demo').innerHTML; if (date1 > date2){ document.getElementById('demo').innerHTML="yes"; //change if date is > set date (date2) }else{ document.getElementById('demo').innerHTML="hello"; //change if date is < set date (date2) } <p id="demo">hello</p> <!--What will be changed--> <!--if you check back in tomorrow, it will say yes instead of hello... or you could change the date... or change > to <--> A: The date.js library is handy for these things. It makes all JS date-related scripting a lot easier. A: This is the way I do it: var myDate = new Date($('input[name=frequency_start]').val()).setHours(0,0,0,0); var today = new Date().setHours(0,0,0,0); if(today>myDate){ jAlert('Please Enter a date in the future','Date Start Error', function(){ $('input[name=frequency_start]').focus().select(); }); } A: After reading this question quite some time after it was posted, I have decided to post another solution, as I didn't find the existing ones quite satisfactory, at least for my needs: I have used something like this: var currentDate= new Date().setHours(0,0,0,0); var startDay = new Date(currentDate - 86400000 * 2); var finalDay = new Date(currentDate + 86400000 * 2); In that way I could have used the dates in the format I wanted for processing afterwards. This was only for my need, but I have decided to post it anyway, maybe it will help someone A: This works for me: export default (chosenDate) => { const now = new Date(); const today = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate())); const splitChosenDate = chosenDate.split('/'); today.setHours(0, 0, 0, 0); const fromDate = today.getTime(); const toDate = new Date(splitChosenDate[2], splitChosenDate[1] - 1, splitChosenDate[0]).getTime(); return toDate < fromDate; }; In the accepted answer, there is a timezone issue, and in the other the time is not 00:00:00 A: Make sure you construct userDate with a 4 digit year as setFullYear(10, ...) !== setFullYear(2010, ...). A: You can use some arithmetic with the total of ms. var date = new Date(date1); date.setHours(0, 0, 0, 0); var diff = date2.getTime() - date.getTime(); return diff >= 0 && diff < 86400000; I like this because no updates to the original dates are made, and it performs faster than string split and compare. Hope this helps! A: Comparing with setHours() will be a solution.
Sample: var d1 = new Date(); var d2 = new Date("2019-2-23"); if(d1.setHours(0,0,0,0) == d2.setHours(0,0,0,0)){ console.log(true) }else{ console.log(false) } A: I know this question has already been answered and this may not be the best way, but in my scenario it's working perfectly, so I thought it may help someone like me. if you have a date string as String dateString="2018-01-01T18:19:12.543"; and you just want to compare the date part with another Date object in JS, var anotherDate=new Date(); //some date then you have to convert the string to a Date object by using new Date("2018-01-01T18:19:12.543"); and here is the trick: var valueDate =new Date(new Date(dateString).toDateString()); return valueDate.valueOf() == anotherDate.valueOf(); //here is the final result I have used toDateString() of the Date object of JS, which returns the Date string only. Note: Don't forget to use the .valueOf() function while comparing the dates. more info about .valueOf() is here reference Happy coding. A: This will help. I managed to get it like this. var currentDate = new Date(new Date().getFullYear(), new Date().getMonth() , new Date().getDate()) A: var fromdate = new Date(MM/DD/YYYY); var todate = new Date(MM/DD/YYYY); if (fromdate > todate){ console.log('False'); }else{ console.log('True'); } if your date format is different, then use the moment.js library to convert the format of your date and then use the above code to compare the two dates Example: If your Date is in "DD/MM/YYYY" and you want to convert it into "MM/DD/YYYY", then see the below code example var newfromdate = new Date(moment(fromdate, "DD/MM/YYYY").format("MM/DD/YYYY")); console.log(newfromdate); var newtodate = new Date(moment(todate, "DD/MM/YYYY").format("MM/DD/YYYY")); console.log(newtodate); A: You can use fp_incr(0), which sets the timezone part to midnight and returns a date object. A: Compare Date and Time: var t1 = new Date(); // say, in ISO String = '2022-01-21T12:30:15.422Z' var t2 = new Date(); // say, in ISO String = '2022-01-21T12:30:15.328Z' var t3 = t1; Compare 2 date objects by milliseconds level: console.log(t1 === t2); // false - because there is some milliseconds difference console.log(t1 === t3); // true - both dates have the same values at the milliseconds level Compare 2 date objects ONLY by date (Ignore any time difference): console.log(t1.toISOString().split('T')[0] === t2.toISOString().split('T')[0]); // true; '2022-01-21' === '2022-01-21' Compare 2 date objects ONLY by time(ms) (Ignore any date difference): console.log(t1.toISOString().split('T')[1] === t3.toISOString().split('T')[1]); // true; '12:30:15.422Z' === '12:30:15.422Z' The above 2 methods use the toISOString() method, so you don't need to worry about the time zone difference across countries. A: One option that I ended up using was to use the diff function of Moment.js. By calling something like start.diff(end, 'days') you can compare the difference in whole numbers of days. A: Works for me: I needed to compare a date to a local dateRange let dateToCompare = new Date().toLocaleDateString().split("T")[0] let compareTime = new Date(dateToCompare).getTime() let startDate = new Date().toLocaleDateString().split("T")[0] let startTime = new Date(startDate).getTime() let endDate = new Date().toLocaleDateString().split("T")[0] let endTime = new Date(endDate).getTime() return compareTime >= startTime && compareTime <= endTime A: As per usual. Too little, too late. Nowadays use of momentjs is discouraged (their words, not mine) and dayjs is preferred. One can use dayjs's isSame.
https://day.js.org/docs/en/query/is-same dayjs().isSame('2011-01-01', 'date') There are also a bunch of other units you can use for the comparisons: https://day.js.org/docs/en/manipulate/start-of#list-of-all-available-units A: Using JavaScript you can set the time values to zero for existing date objects and then parse them back to Date. After parsing back to Date, the time value is 0 for both and you can do a further comparison let firstDate = new Date(mydate1.setHours(0, 0, 0, 0)); let secondDate = new Date(mydate2.setHours(0, 0, 0, 0)); if (firstDate.getTime() === secondDate.getTime()) { console.log('same date'); } else { console.log(`not same date`); }
Comparing date part only without comparing time in JavaScript
What is wrong with the code below? Maybe it would be simpler to just compare date and not time. I am not sure how to do this either, and I searched, but I couldn't find my exact problem. BTW, when I display the two dates in an alert, they show as exactly the same. My code: window.addEvent('domready', function() { var now = new Date(); var input = $('datum').getValue(); var dateArray = input.split('/'); var userMonth = parseInt(dateArray[1])-1; var userDate = new Date(); userDate.setFullYear(dateArray[2], userMonth, dateArray[0], now.getHours(), now.getMinutes(), now.getSeconds(), now.getMilliseconds()); if (userDate > now) { alert(now + '\n' + userDate); } }); Is there a simpler way to compare dates and not including the time?
[ "I'm still learning JavaScript, and the only way that I've found which works for me to compare two dates without the time is to use the setHours method of the Date object and set the hours, minutes, seconds and milliseconds to zero. Then compare the two dates.\nFor example,\ndate1 = new Date()\ndate2 = new Date(2011,8,20)\n\ndate2 will be set with hours, minutes, seconds and milliseconds to zero, but date1 will have them set to the time that date1 was created. To get rid of the hours, minutes, seconds and milliseconds on date1 do the following:\ndate1.setHours(0,0,0,0)\n\nNow you can compare the two dates as DATES only without worrying about time elements.\n", "BEWARE THE TIMEZONE\nUsing the date object to represent just-a-date straight away gets you into a huge excess precision problem. You need to manage time and timezone to keep them out, and they can sneak back in at any step. The accepted answer to this question falls into the trap.\nA javascript date has no notion of timezone. It's a moment in time (ticks since the epoch) with handy (static) functions for translating to and from strings, using by default the \"local\" timezone of the device, or, if specified, UTC or another timezone. To represent just-a-date™ with a date object, you want your dates to represent UTC midnight at the start of the date in question. This is a common and necessary convention that lets you work with dates regardless of the season or timezone of their creation. So you need to be very vigilant to manage the notion of timezone, both when you create your midnight UTC Date object, and when you serialize it.\nLots of folks are confused by the default behaviour of the console. If you spray a date to the console, the output you see will include your timezone. This is just because the console calls toString() on your date, and toString() gives you a local represenation. The underlying date has no timezone! (So long as the time matches the timezone offset, you still have a midnight UTC date object)\nDeserializing (or creating midnight UTC Date objects)\nThis is the rounding step, with the trick that there are two \"right\" answers. Most of the time, you will want your date to reflect the local timezone of the user. What's the date here where I am.. Users in NZ and US can click at the same time and usually get different dates. In that case, do this...\n// create a date (utc midnight) reflecting the value of myDate and the environment's timezone offset.\nnew Date(Date.UTC(myDate.getFullYear(),myDate.getMonth(), myDate.getDate()));\n\nSometimes, international comparability trumps local accuracy. In that case, do this...\n// the date in London of a moment in time. Device timezone is ignored.\nnew Date(Date.UTC(myDate.getUTCFullYear(), myDate.getUTCMonth(), myDate.getUTCDate()));\n\nDeserialize a date\nOften dates on the wire will be in the format YYYY-MM-DD. To deserialize them, do this...\nvar midnightUTCDate = new Date( dateString + 'T00:00:00Z');\n\nSerializing\nHaving taken care to manage timezone when you create, you now need to be sure to keep timezone out when you convert back to a string representation. 
So you can safely use...\n\ntoISOString()\ngetUTCxxx()\ngetTime() //returns a number with no time or timezone.\n.toLocaleDateString(\"fr\",{timeZone:\"UTC\"}) // whatever locale you want, but ALWAYS UTC.\n\nAnd totally avoid everything else, especially...\n\ngetYear(),getMonth(),getDate()\n\nSo to answer your question, 7 years too late...\n<input type=\"date\" onchange=\"isInPast(event)\">\n<script>\nvar isInPast = function(event){\n var userEntered = new Date(event.target.valueAsNumber); // valueAsNumber has no time or timezone!\n var now = new Date();\n var today = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() ));\n if(userEntered.getTime() < today.getTime())\n alert(\"date is past\");\n else if(userEntered.getTime() == today.getTime())\n alert(\"date is today\");\n else\n alert(\"date is future\");\n\n}\n</script>\n\nSee it running...\nUpdate 2022... free stuff with tests ...\nThe code below is now an npm package, Epoq. The code is on github. You're welcome :-)\nUpdate 2019... free stuff...\nGiven the popularity of this answer, I've put it all in code. The following function returns a wrapped date object, and only exposes those functions that are safe to use with just-a-date™.\nCall it with a Date object and it will resolve to JustADate reflecting the timezone of the user. Call it with a string: if the string is an ISO 8601 with timezone specified, we'll just round off the time part. If timezone is not specified, we'll convert it to a date reflecting the local timezone, just as for date objects.\n\n\nfunction JustADate(initDate){\n var utcMidnightDateObj = null\n // if no date supplied, use Now.\n if(!initDate)\n initDate = new Date();\n\n // if initDate specifies a timezone offset, or is already UTC, just keep the date part, reflecting the date _in that timezone_\n if(typeof initDate === \"string\" && initDate.match(/(-\\d\\d|(\\+|-)\\d{2}:\\d{2}|Z)$/gm)){ \n utcMidnightDateObj = new Date( initDate.substring(0,10) + 'T00:00:00Z');\n } else {\n // if init date is not already a date object, feed it to the date constructor.\n if(!(initDate instanceof Date))\n initDate = new Date(initDate);\n // Vital Step! Strip time part. Create UTC midnight dateObj according to local timezone.\n utcMidnightDateObj = new Date(Date.UTC(initDate.getFullYear(),initDate.getMonth(), initDate.getDate()));\n }\n\n return {\n toISOString:()=>utcMidnightDateObj.toISOString(),\n getUTCDate:()=>utcMidnightDateObj.getUTCDate(),\n getUTCDay:()=>utcMidnightDateObj.getUTCDay(),\n getUTCFullYear:()=>utcMidnightDateObj.getUTCFullYear(),\n getUTCMonth:()=>utcMidnightDateObj.getUTCMonth(),\n setUTCDate:(arg)=>utcMidnightDateObj.setUTCDate(arg),\n setUTCFullYear:(arg)=>utcMidnightDateObj.setUTCFullYear(arg),\n setUTCMonth:(arg)=>utcMidnightDateObj.setUTCMonth(arg),\n addDays:(days)=>{\n utcMidnightDateObj.setUTCDate(utcMidnightDateObj.getUTCDate() + days)\n },\n toString:()=>utcMidnightDateObj.toString(),\n toLocaleDateString:(locale,options)=>{\n options = options || {};\n options.timeZone = \"UTC\";\n locale = locale || \"en-EN\";\n return utcMidnightDateObj.toLocaleDateString(locale,options)\n }\n }\n}\n\n\n// if initDate already has a timezone, we'll just use the date part directly\nconsole.log(JustADate('1963-11-22T12:30:00-06:00').toLocaleDateString())\n// Test case from @prototype's comment\nconsole.log(\"@prototype's issue fixed... \" + JustADate('1963-11-22').toLocaleDateString())\n\n\n\n", "How about this? 
\nDate.prototype.withoutTime = function () {\n var d = new Date(this);\n d.setHours(0, 0, 0, 0);\n return d;\n}\n\nIt allows you to compare the date part of the date like this without affecting the value of your variable:\nvar date1 = new Date(2014,1,1);\nnew Date().withoutTime() > date1.withoutTime(); // true\n\n", "Using Moment.js\nIf you have the option of including a third-party library, it's definitely worth taking a look at Moment.js. It makes working with Date and DateTime much, much easier.\nFor example, seeing if one Date comes after another Date but excluding their times, you would do something like this:\nvar date1 = new Date(2016,9,20,12,0,0); // October 20, 2016 12:00:00\nvar date2 = new Date(2016,9,20,12,1,0); // October 20, 2016 12:01:00\n\n// Comparison including time.\nmoment(date2).isAfter(date1); // => true\n\n// Comparison excluding time.\nmoment(date2).isAfter(date1, 'day'); // => false\n\nThe second parameter you pass into isAfter is the precision to do the comparison and can be any of year, month, week, day, hour, minute or second.\n", "Simply compare using .toDateString like below:\nnew Date().toDateString();\n\nThis will return you date part only and not time or timezone, like this:\n\n\"Fri Feb 03 2017\"\n\nHence both date can be compared in this format likewise without time part of it.\n", "Just use toDateString() on both dates. toDateString doesn't include the time, so for 2 times on the same date, the values will be equal, as demonstrated below. \nvar d1 = new Date(2019,01,01,1,20)\nvar d2 = new Date(2019,01,01,2,20)\nconsole.log(d1==d2) // false\nconsole.log(d1.toDateString() == d2.toDateString()) // true\n\nObviously some of the timezone concerns expressed elsewhere on this question are valid, but in many scenarios, those are not relevant. \n", "If you are truly comparing date only with no time component, another solution that may feel wrong but works and avoids all Date() time and timezone headaches is to compare the ISO string date directly using string comparison:\n> \"2019-04-22\" <= \"2019-04-23\"\ntrue\n> \"2019-04-22\" <= \"2019-04-22\"\ntrue\n> \"2019-04-22\" <= \"2019-04-21\"\nfalse\n> \"2019-04-22\" === \"2019-04-22\"\ntrue\n\nYou can get the current date (UTC date, not necessarily the user's local date) using:\n> new Date().toISOString().split(\"T\")[0]\n\"2019-04-22\"\n\nMy argument in favor of it is programmer simplicity -- you're much less likely to botch this than trying to handle datetimes and offsets correctly, probably at the cost of speed (I haven't compared performance)\n", "This might be a little cleaner version, also note that you should always use a radix when using parseInt.\nwindow.addEvent('domready', function() {\n // Create a Date object set to midnight on today's date\n var today = new Date((new Date()).setHours(0, 0, 0, 0)),\n input = $('datum').getValue(),\n dateArray = input.split('/'),\n // Always specify a radix with parseInt(), setting the radix to 10 ensures that\n // the number is interpreted as a decimal. 
It is particularly important with\n // dates, if the user had entered '09' for the month and you don't use a\n // radix '09' is interpreted as an octal number and parseInt would return 0, not 9!\n userMonth = parseInt(dateArray[1], 10) - 1,\n // Create a Date object set to midnight on the day the user specified\n userDate = new Date(dateArray[2], userMonth, dateArray[0], 0, 0, 0, 0);\n\n // Convert date objects to milliseconds and compare\n if(userDate.getTime() > today.getTime())\n {\n alert(today+'\\n'+userDate);\n }\n});\n\nCheckout the MDC parseInt page for more information about the radix.\nJSLint is a great tool for catching things like a missing radix and many other things that can cause obscure and hard to debug errors. It forces you to use better coding standards so you avoid future headaches. I use it on every JavaScript project I code.\n", "An efficient and correct way to compare dates is:\nMath.floor(date1.getTime() / 86400000) > Math.floor(date2.getTime() / 86400000);\n\nIt ignores the time part, it works for different timezones, and you can compare for equality == too. 86400000 is the number of milliseconds in a day (= 24*60*60*1000). \nBeware that the equality operator == should never be used for comparing Date objects because it fails when you would expect an equality test to work because it is comparing two Date objects (and does not compare the two dates) e.g.:\n> date1;\noutputs: Thu Mar 08 2018 00:00:00 GMT+1300\n\n> date2;\noutputs: Thu Mar 08 2018 00:00:00 GMT+1300\n\n> date1 == date2;\noutputs: false\n\n> Math.floor(date1.getTime() / 86400000) == Math.floor(date2.getTime() / 86400000);\noutputs: true\n\nNotes: If you are comparing Date objects that have the time part set to zero, then you could use date1.getTime() == date2.getTime() but it is hardly worth the optimisation. 
You can use <, >, <=, or >= when comparing Date objects directly because these operators first convert the Date object by calling .valueOf() before the operator does the comparison.\n", "As I don't see here similar approach, and I'm not enjoying setting h/m/s/ms to 0, as it can cause problems with accurate transition to local time zone with changed date object (I presume so), let me introduce here this, written few moments ago, lil function:\n+: Easy to use, makes a basic comparison operations done (comparing day, month and year without time.)\n-: It seems that this is a complete opposite of \"out of the box\" thinking.\nfunction datecompare(date1, sign, date2) {\n var day1 = date1.getDate();\n var mon1 = date1.getMonth();\n var year1 = date1.getFullYear();\n var day2 = date2.getDate();\n var mon2 = date2.getMonth();\n var year2 = date2.getFullYear();\n if (sign === '===') {\n if (day1 === day2 && mon1 === mon2 && year1 === year2) return true;\n else return false;\n }\n else if (sign === '>') {\n if (year1 > year2) return true;\n else if (year1 === year2 && mon1 > mon2) return true;\n else if (year1 === year2 && mon1 === mon2 && day1 > day2) return true;\n else return false;\n } \n}\n\nUsage:\ndatecompare(date1, '===', date2) for equality check,\ndatecompare(date1, '>', date2) for greater check,\n!datecompare(date1, '>', date2) for less or equal check \nAlso, obviously, you can switch date1 and date2 in places to achieve any other simple comparison.\n", "This JS will change the content after the set date \nhere's the same thing but on w3schools\n\n\ndate1 = new Date()\r\ndate2 = new Date(2019,5,2) //the date you are comparing\r\n\r\ndate1.setHours(0,0,0,0)\r\n\r\nvar stockcnt = document.getElementById('demo').innerHTML;\r\nif (date1 > date2){\r\ndocument.getElementById('demo').innerHTML=\"yes\"; //change if date is > set date (date2)\r\n}else{\r\ndocument.getElementById('demo').innerHTML=\"hello\"; //change if date is < set date (date2)\r\n}\n<p id=\"demo\">hello</p> <!--What will be changed-->\r\n<!--if you check back in tomorrow, it will say yes instead of hello... or you could change the date... or change > to <-->\n\n\n\n", "The date.js library is handy for these things. It makes all JS date-related scriping a lot easier.\n", "This is the way I do it:\nvar myDate = new Date($('input[name=frequency_start]').val()).setHours(0,0,0,0);\nvar today = new Date().setHours(0,0,0,0);\nif(today>myDate){\n jAlert('Please Enter a date in the future','Date Start Error', function(){\n $('input[name=frequency_start]').focus().select();\n });\n}\n\n", "After reading this question quite same time after it is posted I have decided to post another solution, as I didn't find it that quite satisfactory, at least to my needs:\nI have used something like this:\nvar currentDate= new Date().setHours(0,0,0,0);\n\nvar startDay = new Date(currentDate - 86400000 * 2);\nvar finalDay = new Date(currentDate + 86400000 * 2);\n\nIn that way I could have used the dates in the format I wanted for processing afterwards. 
This was only for my need, but I have decided to post it anyway, maybe it will help someone\n", "This works for me:\n export default (chosenDate) => {\n const now = new Date();\n const today = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate()));\n const splitChosenDate = chosenDate.split('/');\n\n today.setHours(0, 0, 0, 0);\n const fromDate = today.getTime();\n const toDate = new Date(splitChosenDate[2], splitChosenDate[1] - 1, splitChosenDate[0]).getTime();\n\n return toDate < fromDate;\n};\n\nIn the accepted answer, there is a timezone issue, and in the other the time is not 00:00:00\n", "Make sure you construct userDate with a 4 digit year as setFullYear(10, ...) !== setFullYear(2010, ...).\n", "You can use some arithmetic with the total of ms. \nvar date = new Date(date1);\ndate.setHours(0, 0, 0, 0);\n\nvar diff = date2.getTime() - date.getTime();\nreturn diff >= 0 && diff < 86400000;\n\nI like this because no updates to the original dates are made, and it performs faster than string split and compare.\nHope this helps!\n", "Comparing with setHours() will be a solution. Sample:\nvar d1 = new Date();\nvar d2 = new Date(\"2019-2-23\");\nif(d1.setHours(0,0,0,0) == d2.setHours(0,0,0,0)){\n console.log(true)\n}else{\n console.log(false)\n}\n\n", "I know this question has already been answered and this may not be the best way, but in my scenario it's working perfectly, so I thought it may help someone like me. \nif you have a date string as \nString dateString=\"2018-01-01T18:19:12.543\";\n\nand you just want to compare the date part with another Date object in JS,\nvar anotherDate=new Date(); //some date\n\nthen you have to convert the string to a Date object by using new Date(\"2018-01-01T18:19:12.543\");\nand here is the trick:\nvar valueDate =new Date(new Date(dateString).toDateString());\n\n return valueDate.valueOf() == anotherDate.valueOf(); //here is the final result\n\nI have used toDateString() of the Date object of JS, which returns the Date string only. \nNote: Don't forget to use the .valueOf() function while comparing the dates.\nmore info about .valueOf() is here reference \nHappy coding.\n", "This will help. I managed to get it like this. \nvar currentDate = new Date(new Date().getFullYear(), new Date().getMonth() , new Date().getDate())\n\n", "var fromdate = new Date(MM/DD/YYYY);\nvar todate = new Date(MM/DD/YYYY);\nif (fromdate > todate){\n console.log('False');\n}else{\n console.log('True');\n}\n\nif your date format is different, then use the moment.js library to convert the format of your date and then use the above code to compare the two dates\nExample:\nIf your Date is in \"DD/MM/YYYY\" and you want to convert it into \"MM/DD/YYYY\", then see the below code example\nvar newfromdate = new Date(moment(fromdate, \"DD/MM/YYYY\").format(\"MM/DD/YYYY\"));\nconsole.log(newfromdate);\nvar newtodate = new Date(moment(todate, \"DD/MM/YYYY\").format(\"MM/DD/YYYY\"));\nconsole.log(newtodate);\n\n\n", "You can use fp_incr(0), 
which sets the timezone part to midnight and returns a date object.\n", "Compare Date and Time:\nvar t1 = new Date(); // say, in ISO String = '2022-01-21T12:30:15.422Z'\nvar t2 = new Date(); // say, in ISO String = '2022-01-21T12:30:15.328Z'\nvar t3 = t1;\n\nCompare 2 date objects by milliseconds level:\nconsole.log(t1 === t2); // false - because there is some milliseconds difference\nconsole.log(t1 === t3); // true - both dates have the same values at the milliseconds level\n\nCompare 2 date objects ONLY by date (Ignore any time difference):\nconsole.log(t1.toISOString().split('T')[0] === t2.toISOString().split('T')[0]); \n // true; '2022-01-21' === '2022-01-21'\n\nCompare 2 date objects ONLY by time(ms) (Ignore any date difference):\nconsole.log(t1.toISOString().split('T')[1] === t3.toISOString().split('T')[1]); \n // true; '12:30:15.422Z' === '12:30:15.422Z'\n\nThe above 2 methods use the toISOString() method, so you don't need to worry about the time zone difference across countries.\n", "One option that I ended up using was to use the diff function of Moment.js. By calling something like start.diff(end, 'days') you can compare the difference in whole numbers of days.\n", "Works for me:\nI needed to compare a date to a local dateRange\nlet dateToCompare = new Date().toLocaleDateString().split(\"T\")[0]\nlet compareTime = new Date(dateToCompare).getTime()\n\nlet startDate = new Date().toLocaleDateString().split(\"T\")[0]\nlet startTime = new Date(startDate).getTime()\n\nlet endDate = new Date().toLocaleDateString().split(\"T\")[0]\nlet endTime = new Date(endDate).getTime()\n\nreturn compareTime >= startTime && compareTime <= endTime\n\n", "As per usual. Too little, too late.\nNowadays use of momentjs is discouraged (their words, not mine) and dayjs is preferred.\nOne can use dayjs's isSame.\nhttps://day.js.org/docs/en/query/is-same\ndayjs().isSame('2011-01-01', 'date')\n\nThere are also a bunch of other units you can use for the comparisons:\nhttps://day.js.org/docs/en/manipulate/start-of#list-of-all-available-units\n", "Using JavaScript you can set the time values to zero for existing date objects and then parse them back to Date. After parsing back to Date, the time value is 0 for both and you can do a further comparison\n let firstDate = new Date(mydate1.setHours(0, 0, 0, 0));\n let secondDate = new Date(mydate2.setHours(0, 0, 0, 0));\n\n if (firstDate.getTime() === secondDate.getTime())\n {\n console.log('same date');\n }\n else\n {\n console.log(`not same date`);\n }\n\n" ]
[ 921, 226, 100, 28, 25, 16, 14, 9, 6, 3, 3, 2, 2, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "comparison", "date", "javascript", "mootools" ]
stackoverflow_0002698725_comparison_date_javascript_mootools.txt
Q: flutter Unhandled Exception: PlatformException(sign_in_failed, m3.b: 10: , null, null) I ran my flutter app in release mode and decided to test my Google sign-in authentication. In debug mode it works pretty well, but when I try in release mode (flutter run --release) and click the Google sign-in button, I get this error [+24158 ms] E/flutter ( 1858): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: PlatformException(sign_in_failed, m3.b: 10: , null, null) [ +1 ms] E/flutter ( 1858): #0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:653) [ ] E/flutter ( 1858): #1 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:296) [ +1 ms] E/flutter ( 1858): <asynchronous suspension> [ +1 ms] E/flutter ( 1858): #2 MethodChannel.invokeMapMethod (package:flutter/src/services/platform_channel.dart:499) [ +1 ms] E/flutter ( 1858): <asynchronous suspension> [ +1 ms] E/flutter ( 1858): #3 GoogleSignIn._callMethod (package:google_sign_in/google_sign_in.dart:273) [ +1 ms] E/flutter ( 1858): <asynchronous suspension> [ ] E/flutter ( 1858): #4 GoogleSignIn.signIn.isCanceled (package:google_sign_in/google_sign_in.dart:407) [ ] E/flutter ( 1858): <asynchronous suspension> [ ] E/flutter ( 1858): I don't know where the issue is coming from; I have tried creating my signing keys (SHA1 & SHA256), adding them to Firebase, and downloading the service JSON. Still nothing has changed $ flutter doctor -v [√] Flutter (Channel stable, 3.3.4, on Microsoft Windows [Version 6.3.9600], locale en-US) • Flutter version 3.3.4 on channel stable at D:\flutter\Sdk\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision eb6d86ee27 (9 weeks ago), 2022-10-04 22:31:45 -0700 • Engine revision c08d7d5efc • Dart version 2.18.2 • DevTools version 2.15.0 [√] Android toolchain - develop for Android devices (Android SDK version 32.1.0-rc1) • Android SDK at C:\Users\bright\AppData\Local\Android\sdk • Platform android-33, build-tools 32.1.0-rc1 • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) • All Android licenses accepted. [√] Chrome - develop for the web • Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe [X] Visual Studio - develop for Windows X Visual Studio not installed; this is necessary for Windows development. Download at https://visualstudio.microsoft.com/downloads/. Please install the "Desktop development with C++" workload, including all of its default components [√] Android Studio (version 3.5) • Android Studio at C:\Program Files\Android\Android Studio • Flutter plugin version 44.0.1 • Dart plugin version 191.8593 • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) [√] VS Code (version 1.73.1) • VS Code at C:\Users\bright\AppData\Local\Programs\Microsoft VS Code • Flutter extension version 3.54.0 [√] Connected device (4 available) • Infinix X652A (mobile) • 0494625032002068 • android-arm64 • Android 9 (API 28) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 6.3.9600] • Chrome (web) • chrome • web-javascript • Google Chrome 101.0.4951.54 • Edge (web) • edge • web-javascript • Microsoft Edge 107.0.1418.62 [√] HTTP Host Availability • All required HTTP hosts are available ! Doctor found issues in 1 category. 
google_sign_in: ^5.4.2 firebase_auth: ^3.11.1 firebase_core: ^1.24.0 Please, what do you think is the cause of this issue, and how can I go about fixing it? If you still need an extra piece of information or code, please tell me. A: The issue you are experiencing is likely due to the fact that you have not added the release version of your app's SHA-1 fingerprint to your Firebase project. To fix this, you will need to: Generate a release key for your app by following the instructions at https://flutter.dev/docs/deployment/android#signing-the-app. Add the release key's SHA-1 fingerprint to your Firebase project by following the instructions at https://firebase.google.com/docs/android/setup#console. Download the updated google-services.json file from the Firebase console and add it to your app's android/app directory. After completing these steps, try running your app in release mode again and see if the issue is resolved. If you continue to experience issues, please provide more details about the error message you are seeing so that we can provide more specific help.
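For reference, a minimal sketch of how the release SHA-1 fingerprint is usually obtained (the keystore path and alias below are assumptions; substitute the values from your own key.properties): keytool -list -v -keystore ~/upload-keystore.jks -alias upload Alternatively, from the project's android directory, Gradle can print the fingerprints of every build variant: cd android && ./gradlew signingReport Either output includes the SHA-1 and SHA-256 lines to paste into the Firebase console before re-downloading google-services.json.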
flutter Unhandled Exception: PlatformException(sign_in_failed, m3.b: 10: , null, null)
I ran my flutter app in release mode and decided to test my Google sign-in authentication. In debug mode it works pretty well, but when I try in release mode (flutter run --release) and click the Google sign-in button, I get this error [+24158 ms] E/flutter ( 1858): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: PlatformException(sign_in_failed, m3.b: 10: , null, null) [ +1 ms] E/flutter ( 1858): #0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:653) [ ] E/flutter ( 1858): #1 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:296) [ +1 ms] E/flutter ( 1858): <asynchronous suspension> [ +1 ms] E/flutter ( 1858): #2 MethodChannel.invokeMapMethod (package:flutter/src/services/platform_channel.dart:499) [ +1 ms] E/flutter ( 1858): <asynchronous suspension> [ +1 ms] E/flutter ( 1858): #3 GoogleSignIn._callMethod (package:google_sign_in/google_sign_in.dart:273) [ +1 ms] E/flutter ( 1858): <asynchronous suspension> [ ] E/flutter ( 1858): #4 GoogleSignIn.signIn.isCanceled (package:google_sign_in/google_sign_in.dart:407) [ ] E/flutter ( 1858): <asynchronous suspension> [ ] E/flutter ( 1858): I don't know where the issue is coming from; I have tried creating my signing keys (SHA1 & SHA256), adding them to Firebase, and downloading the service JSON. Still nothing has changed $ flutter doctor -v [√] Flutter (Channel stable, 3.3.4, on Microsoft Windows [Version 6.3.9600], locale en-US) • Flutter version 3.3.4 on channel stable at D:\flutter\Sdk\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision eb6d86ee27 (9 weeks ago), 2022-10-04 22:31:45 -0700 • Engine revision c08d7d5efc • Dart version 2.18.2 • DevTools version 2.15.0 [√] Android toolchain - develop for Android devices (Android SDK version 32.1.0-rc1) • Android SDK at C:\Users\bright\AppData\Local\Android\sdk • Platform android-33, build-tools 32.1.0-rc1 • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) • All Android licenses accepted. [√] Chrome - develop for the web • Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe [X] Visual Studio - develop for Windows X Visual Studio not installed; this is necessary for Windows development. Download at https://visualstudio.microsoft.com/downloads/. Please install the "Desktop development with C++" workload, including all of its default components [√] Android Studio (version 3.5) • Android Studio at C:\Program Files\Android\Android Studio • Flutter plugin version 44.0.1 • Dart plugin version 191.8593 • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) [√] VS Code (version 1.73.1) • VS Code at C:\Users\bright\AppData\Local\Programs\Microsoft VS Code • Flutter extension version 3.54.0 [√] Connected device (4 available) • Infinix X652A (mobile) • 0494625032002068 • android-arm64 • Android 9 (API 28) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 6.3.9600] • Chrome (web) • chrome • web-javascript • Google Chrome 101.0.4951.54 • Edge (web) • edge • web-javascript • Microsoft Edge 107.0.1418.62 [√] HTTP Host Availability • All required HTTP hosts are available ! Doctor found issues in 1 category. google_sign_in: ^5.4.2 firebase_auth: ^3.11.1 firebase_core: ^1.24.0 Please, what do you think is the cause of this issue, and how can I go about fixing it? If you still need an extra piece of information or code, please tell me.
[ "The issue you are experiencing is likely due to the fact that you have not added the release version of your app's SHA-1 fingerprint to your Firebase project.\nTo fix this, you will need to:\n\nGenerate a release key for your app by following the instructions at https://flutter.dev/docs/deployment/android#signing-the-app.\nAdd the release key's SHA-1 fingerprint to your Firebase project by following the instructions at https://firebase.google.com/docs/android/setup#console.\nDownload the updated google-services.json file from the Firebase console and add it to your app's android/app directory.\n\nAfter completing these steps, try running your app in release mode again and see if the issue is resolved. If you continue to experience issues, please provide more details about the error message you are seeing so that we can provide more specific help.\n" ]
[ 1 ]
[]
[]
[ "authentication", "dart", "firebase_authentication", "flutter", "google_signin" ]
stackoverflow_0074680212_authentication_dart_firebase_authentication_flutter_google_signin.txt
Q: How to use command-line tool `openssl` to decrypt ciphertext encrypted by Perl module Crypt::DES? How to use command-line tool openssl to decrypt the ciphertext that was encrypted with Perl module Crypt::DES?
Assume we have a Perl script like this:
#!/usr/bin/perl -w

use strict;
use 5.010;

use Getopt::Long qw(:config no_ignore_case);
use Crypt::CBC;

### initialization
&GetOptions("mode=s" => \(my $mode = ''));
my $secret = q/;[qO7e<_sZmR8Krhf>}]mRY`y)BI8"WEF*2nmL^o'WMKA=uEt1/;
my $key = pack('H*', $secret);
open(my $fh, '>', 'key.bin');
$fh->print($key);
$fh->close();
my $cipher = Crypt::CBC->new(
    -key    => $key,
    -cipher => 'DES'
);

### read file
my $filename = shift @ARGV;
open($fh, '<', $filename) or die "$!";
my $cchRead = read($fh, my $buffer, -s $fh);
close($fh);
die "$!" unless defined($cchRead);

### encrypt
if ($mode eq 'encrypt') {
    print $cipher->encrypt($buffer);
}
### decrypt
else {
    print $cipher->decrypt($buffer);
}

We can use the Perl script like:
$ ./cipher.pl --mode=encrypt foo.txt > foo.encrypted # Encrypt plaintext.
$ ./cipher.pl --mode=decrypt foo.encrypted # Decrypt ciphertext.

My question is how to decrypt foo.encrypted with command-line tool openssl? I've tried these commands but in vain.
$ openssl enc -des -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-cbc -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-cfb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-cfb1 -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-cfb8 -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ecb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ede -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ede-cbc -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ede-cfb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ede-ofb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ede3 -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ede3-cbc -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ede3-cfb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ede3-ofb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des-ofb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -des3 -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -desx -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
$ openssl enc -desx-cbc -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'

A: On my machine, decryption of the ciphertext generated with the Perl script is successful with:
openssl enc -des -md md5 -pass file:key.bin -d -in foo.encrypted

-des is equivalent to -des-cbc and specifies DES in CBC mode. No PBKDF2 is used as key derivation function, but the OpenSSL proprietary EVP_BytesToKey() with MD5 as digest. The expected ciphertext is the concatenation of the ASCII encoding of Salted__, followed by an 8 bytes salt, followed by the actual ciphertext.
Regarding security: DES, EVP_BytesToKey() and MD5 are deprecated and insecure (better choose AES, PBKDF2 and SHA-256).

As a side note, encrypt() returns the raw ciphertext, therefore with Windows the data must be output in binary (e.g. with binmode(STDOUT)) so that the ciphertext is not corrupted by CRLF⇔LF conversions. Alternatively, the ciphertext can be output Base64 encoded (in which case the -a option must be set in the OpenSSL statement).
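A quick sanity check (a sketch — adjust the file names) that the ciphertext really has the EVP_BytesToKey() layout described above, plus a round trip with the same key file:
$ head -c 8 foo.encrypted    # should print: Salted__
$ openssl enc -des -md md5 -pass file:key.bin -e -in foo.txt -out foo.enc
$ openssl enc -des -md md5 -pass file:key.bin -d -in foo.enc    # recovers foo.txt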
How to use command-line tool `openssl` to decrypt ciphertext encrypted by Perl module Crypt::DES?
How to use command-line tool openssl to decrypt the ciphertext that was encrypted with Perl module Crypt::DES? Assume we have a Perl script like this: #!/usr/bin/perl -w use strict; use 5.010; use Getopt::Long qw(:config no_ignore_case); use Crypt::CBC; ### initialization &GetOptions("mode=s" => \(my $mode = '')); my $secret = q/;[qO7e<_sZmR8Krhf>}]mRY`y)BI8"WEF*2nmL^o'WMKA=uEt1/; my $key = pack('H*', $secret); open(my $fh, '>', 'key.bin'); $fh->print($key); $fh->close(); my $cipher = Crypt::CBC->new( -key => $key, -cipher => 'DES' ); ### read file my $filename = shift @ARGV; open($fh, '<', $filename) or die "$!"; my $cchRead = read($fh, my $buffer, -s $fh); close($fh); die "$!" unless defined($cchRead); ### encrypt if ($mode eq 'encrypt') { print $cipher->encrypt($buffer); } ### decrypt else { print $cipher->decrypt($buffer); } We can use the Perl script like: $ ./cipher.pl --mode=encrypt foo.txt > foo.encrypted # Encrypt plaintext. $ ./cipher.pl --mode=decrypt foo.encrypted # Decrypt ciphertext. My question is how to decrypt foo.encrypted with command-line tool openssl? I've tried these commands but in vain. $ openssl enc -des -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-cbc -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-cfb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-cfb1 -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-cfb8 -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ecb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ede -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ede-cbc -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ede-cfb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ede-ofb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ede3 -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ede3-cbc -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ede3-cfb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ede3-ofb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des-ofb -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -des3 -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -desx -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted' $ openssl enc -desx-cbc -salt -pbkdf2 -pass file:key.bin -d -in 'foo.encrypted'
[ "On my machine, decryption of the ciphertext generated with the Perl script is successful with:\nopenssl enc -des -md md5 -pass file:key.bin -d -in foo.encrypted\n\n-des is equivalent to -des-cbc and specifies DES in CBC mode. No PBKDF2 is used as key derivation function, but the OpenSSL proprietary EVP_BytesToKey() with MD5 as digest. The expected ciphertext is the concatenation of the ASCII encoding of Salted__, followed by an 8 bytes salt, followed by the actual ciphertext.\nRegarding security: DES, EVP_BytesToKey() and MD5 are deprecated and insecure (better choose AES, PBKDF2 and SHA-256).\n\nAs a side note, encrypt() returns the raw ciphertext, therefore with Windows the data must be output in binary (e.g. with binmode(STDOUT)) so that the ciphertext is not corrupted by CRLF⇔LF conversions. Alternatively, the ciphertext can be output Base64 encoded (in which case the -a option must be set in the OpenSSL statement).\n" ]
[ 1 ]
[]
[]
[ "cryptography", "encryption", "openssl", "perl" ]
stackoverflow_0074675241_cryptography_encryption_openssl_perl.txt
Q: How to change socket.io connection url? I connect to the server this way:
function initSockets() {
    socket = io.connect(connectionUrl, {
        reconnection: true,
        reconnectionDelayMax: 5000,
        reconnectionDelay: 1000
    });
}

I need to change connectionUrl. (I want to give this option to users)
So, I do this:
socket.disconnect();
connectionUrl = newConnectionUrl;
initSockets();

The problem is that when the user gives a wrong address and socket.io can't connect to it, socket.io starts reconnecting endlessly. Even if the user then gives another address and connects to it, socket.io is still reconnecting and a lot of weird things happen.
A: It's possible to update the Manager uri attribute:
import { io } from "socket.io-client";

const socket = io("https://my-domain-a.com");

socket.io.uri = "https://my-domain-b.com";

// force reconnection
socket.disconnect().connect();

Answer source: https://github.com/socketio/socket.io/discussions/4246#discussioncomment-1952661
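A minimal sketch of that idea with a guard against the endless-retry problem from the question — reconnectionAttempts caps the retries, and (assuming a socket.io v4 client) the failure event is emitted on the underlying Manager:
function connectTo(url) {
    if (socket) socket.disconnect();      // drop any previous connection first
    socket = io(url, {
        reconnection: true,
        reconnectionAttempts: 5,          // stop retrying a bad address after 5 tries
        reconnectionDelay: 1000,
        reconnectionDelayMax: 5000
    });
    socket.io.on("reconnect_failed", function () {
        console.warn("could not reach " + url + ", giving up");
    });
}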
How to change socket.io connection url?
I connecting to server this way: function initSockets() { socket = io.connect(connectionUrl, { reconnection: true, reconnectionDelayMax: 5000, reconnectionDelay: 1000 }); } I need to change connectionUrl. (I want to give this option to users) So, I make this: socket.disconnect(); connectionUrl = newConnectionUrl; initSockets(); The problem is when user give a wrong address and socket.io can't connect to it, then socket.io invokes infinite reconnecting. Even if user gave another address and connect to it, then socket.io still reconnecting and there is a lot of weird things hapening.
[ "It's possible to update the Manager uri attribute:\nimport { io } from \"socket.io-client\";\n\nconst socket = io(\"https://my-domain-a.com\");\n\nsocket.io.uri = \"https://my-domain-b.com\";\n\n// force reconnection\nsocket.disconnect().connect();\n\nAnswer source: https://github.com/socketio/socket.io/discussions/4246#discussioncomment-1952661\n" ]
[ 0 ]
[]
[]
[ "node.js", "socket.io" ]
stackoverflow_0058383236_node.js_socket.io.txt
Q: Which dependency should be added in order to use "edit" extention for SharePreferences? I am following the example described here - https://medium.com/@helmersebastian/clean-sharedpreferences-in-android-using-kotlin-delegation-ffabffd26990 Following this implementation I added DefaultSharedPrefs var sharedApplicationContext: Context get() = _sharedApplicationContext ?: throw IllegalStateException( "Application context not initialized yet." ) set(value) { _sharedApplicationContext = value } private var _sharedApplicationContext: Context? = null private val PREF_NAME = "pref_name" object DefaultSharedPrefs : SharedPreferences by sharedApplicationContext.getSharedPreferences( PREF_NAME, Context.MODE_PRIVATE ) Usage suppose to be like this class Foo { ... private var DefaultSharedPrefs.count: Int get() = getInt("key", 0) set(value) = edit { putInt("key", value) } ... } However I get such an error So, looks like the problem or I missed some dependency or I use the wrong one. After some reserch I found out that I missed this class package androidx.core.content import android.annotation.SuppressLint import android.content.SharedPreferences /** * Allows editing of this preference instance with a call to [apply][SharedPreferences.Editor.apply] * or [commit][SharedPreferences.Editor.commit] to persist the changes. * Default behaviour is [apply][SharedPreferences.Editor.apply]. * ``` * prefs.edit { * putString("key", value) * } * ``` * To [commit][SharedPreferences.Editor.commit] changes: * ``` * prefs.edit(commit = true) { * putString("key", value) * } * ``` */ @SuppressLint("ApplySharedPref") public inline fun SharedPreferences.edit( commit: Boolean = false, action: SharedPreferences.Editor.() -> Unit ) { val editor = edit() action(editor) if (commit) { editor.commit() } else { editor.apply() } } from here - https://developer.android.com/reference/kotlin/androidx/core/content/package-summary Then I tried to add this dependency implementation "androidx.core.content:1.0.0" but I still get the same error. What am I missing? A: "androidx.core:core-ktx:$core_version" https://developer.android.com/kotlin/ktx/extensions-list#dependency_6
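For reference, a hedged Gradle snippet — the artifact is androidx.core:core-ktx and the version number here is only illustrative, pin whatever is current:
dependencies {
    implementation "androidx.core:core-ktx:1.9.0"
}

With that on the classpath, the extension resolves via:
import androidx.core.content.edit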
Which dependency should be added in order to use "edit" extention for SharePreferences?
I am following the example described here - https://medium.com/@helmersebastian/clean-sharedpreferences-in-android-using-kotlin-delegation-ffabffd26990 Following this implementation I added DefaultSharedPrefs var sharedApplicationContext: Context get() = _sharedApplicationContext ?: throw IllegalStateException( "Application context not initialized yet." ) set(value) { _sharedApplicationContext = value } private var _sharedApplicationContext: Context? = null private val PREF_NAME = "pref_name" object DefaultSharedPrefs : SharedPreferences by sharedApplicationContext.getSharedPreferences( PREF_NAME, Context.MODE_PRIVATE ) Usage suppose to be like this class Foo { ... private var DefaultSharedPrefs.count: Int get() = getInt("key", 0) set(value) = edit { putInt("key", value) } ... } However I get such an error So, looks like the problem or I missed some dependency or I use the wrong one. After some reserch I found out that I missed this class package androidx.core.content import android.annotation.SuppressLint import android.content.SharedPreferences /** * Allows editing of this preference instance with a call to [apply][SharedPreferences.Editor.apply] * or [commit][SharedPreferences.Editor.commit] to persist the changes. * Default behaviour is [apply][SharedPreferences.Editor.apply]. * ``` * prefs.edit { * putString("key", value) * } * ``` * To [commit][SharedPreferences.Editor.commit] changes: * ``` * prefs.edit(commit = true) { * putString("key", value) * } * ``` */ @SuppressLint("ApplySharedPref") public inline fun SharedPreferences.edit( commit: Boolean = false, action: SharedPreferences.Editor.() -> Unit ) { val editor = edit() action(editor) if (commit) { editor.commit() } else { editor.apply() } } from here - https://developer.android.com/reference/kotlin/androidx/core/content/package-summary Then I tried to add this dependency implementation "androidx.core.content:1.0.0" but I still get the same error. What am I missing?
[ "\"androidx.core:core-ktx:$core_version\"\nhttps://developer.android.com/kotlin/ktx/extensions-list#dependency_6\n" ]
[ 0 ]
[]
[]
[ "android", "java", "kotlin", "sharedpreferences" ]
stackoverflow_0074680042_android_java_kotlin_sharedpreferences.txt
Q: modify fasta file with a function using biopython I should do this command for thousands of fasta files, so I'm wondering if there is a function to accelerate the process
from Bio import SeqIO
new= open("new.fasta", "w")
for rec in SeqIO.parse("old.fasta","fasta"):
    print(rec.id)
    print(rec.seq.reverse_complement())
    new.write(">rc_"+rec.id+"\n")
    new.write(str(rec.seq.reverse_complement())+"\n")
new.close()

A: I rewrote your code into a function that can be called using each filename you have, possibly collected into a list using os.listdir().
from Bio import SeqIO

def parse_file(filename):
    new_name = f"rc_{filename}"
    with open(new_name, "w") as new:
        for rec in SeqIO.parse(filename, "fasta"):
            print(rec_id:=rec.id)
            print(rev_comp:=str(rec.seq.reverse_complement()))
            new.write(f">rc_{rec_id}\n{rev_comp}\n")

I used f-strings to create both the new filename and the strings written to that file. I also used the "walrus operator" to assign the values of rec.id and rec.seq.reverse_complement() to temp variables so we don't have to run those operations again when we write the data. This will save compute cycles and time over the long run. However, use of := means the code will only run under Python 3.8 and later.
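A small driver loop for the thousands-of-files case, assuming parse_file() from the answer above and that the FASTA files sit in the current directory (both assumptions):
import os

for name in os.listdir("."):
    if name.endswith((".fasta", ".fa")):
        parse_file(name)   # writes rc_<name> next to the input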
modify fasta file with a function using biopython
I should do this command for thounsands of fasta file, so I'm wondering if there is a function to accelerate the process from Bio import SeqIO new= open("new.fasta", "w") for rec in SeqIO.parse("old.fasta","fasta"): print(rec.id) print(rec.seq.reverse_complement()) new.write(">rc_"+rec.id+"\n") new.write(str(rec.seq.reverse_complement())+"\n") new.close()
[ "I rewrote you code into a function that can be called using each filename you have, possibly collected into a list using os.listdir().\nfrom Bio import SeqIO\n\ndef parse_file(filename):\n new_name = f\"rc_{filename}\"\n with open(new_name, \"w\") as new:\n for rec in SeqIO.parse(filename, \"fasta\"):\n print(rec_id:=rec.id)\n print(rev_comp:=str(rec.seq.reverse_complement()))\n new.write(f\">rc_{rec_id}\\n{rev_comp}\\n\")\n\nI used f-strings to create both the new filename and the strings written to that file. I also used the \"walrus operator\" to assign the values of rec.id and rec.seq.reverse_complement() to temp variables so we don't have to run those operations again when we write the data. This will save compute cycles and time over the long run. However, use of := means the code will only run under Python 3.8 and later.\n" ]
[ 0 ]
[]
[]
[ "biopython", "python" ]
stackoverflow_0074671147_biopython_python.txt
Q: How to Assign array input values to an Array in PHP I want to loop through multiple items that i want to add after the user chooses the number of items he wants to add, and then add those input items into an array and access their values to add them to the db. However, when i assign the values and try to echo them, no values are present. Why is that? What am i doing wrong? Here is my code: <?php extract($_POST); ?> <html> <body> <h3 align= center>Please enter the number of items</h3> <form method='POST' align = center> Number of items <input type ="number" name='items' /><br /><br /> <input type='submit' name='submit' value='Submit' /> </form> <div align = center> <?php $itemsArray = array(); if (isset($items) && $items != 0) { echo "<h4 align = center>Please enter the items </h4>"; for ($i = 0; $i < $items; $i++){ $num = $i + 1; echo "<center><h4>Item $num</h4></center>"; ?> <form method='post' align = center > Item name <input tpye="text" name="name[]" /><br /><br /> Item Description <input type='text' name="desc[]" /> <br /><br /> <?php $itemsArray[$i] = array($name[$i], $desc[$i]); } echo "<input type='submit' name='SubmitItem' value='Submit' />"; echo "</form>"; } if(isset($SubmitItem)){ foreach($itemsArray as $item => $data){ $it = $data[0]; $it2 = $data[1]; echo $it; // not getting any values after submitting the form. echo $it2; } } ?> </div> </body> </html> A: There are a couple of issues with your code. First, you are using the same name for the input fields for each item. This means that when you submit the form, only the last set of inputs will be submitted because the previous inputs with the same name will be overwritten. To fix this, you can include the item number in the input name, like this: Item name <input tpye="text" name="name<?php echo $i; ?>" /><br /><br /> Item Description <input type='text' name="desc<?php echo $i; ?>" /> <br /><br /> Secondly, you are not closing the for loop after the form, which means that the form is not properly closed. To fix this, simply add a closing } at the end of the for loop like this: for ($i = 0; $i < $items; $i++){ $num = $i + 1; echo "<center><h4>Item $num</h4></center>"; ?> <form method='post' align = center > Item name <input tpye="text" name="name<?php echo $i; ?>" /><br /><br /> Item Description <input type='text' name="desc<?php echo $i; ?>" /> <br /><br /> <?php $itemsArray[$i] = array($name[$i], $desc[$i]); } Lastly, you are not properly accessing the values in the $itemsArray after the form is submitted. In your foreach loop, you are accessing the values as $data[0] and $data[1], but since you added the item number to the input name, you need to access the values using the item number, like this: foreach($itemsArray as $item => $data){ $it = $data[0]; $it2 = $data[1]; echo $name[$item]; // access the input value using the item number echo $desc[$item]; }
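For completeness, a sketch of reading the name[]/desc[] style inputs straight from $_POST (PHP collects inputs named with [] into arrays), avoiding extract(); field names follow the question:
<?php
if (isset($_POST['SubmitItem'])) {
    $names = $_POST['name'] ?? [];
    $descs = $_POST['desc'] ?? [];
    foreach ($names as $i => $n) {
        echo htmlspecialchars($n) . ' - ' . htmlspecialchars($descs[$i] ?? '') . '<br>';
    }
}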
How to Assign array input values to an Array in PHP
I want to loop through multiple items that i want to add after the user chooses the number of items he wants to add, and then add those input items into an array and access their values to add them to the db. However, when i assign the values and try to echo them, no values are present. Why is that? What am i doing wrong? Here is my code: <?php extract($_POST); ?> <html> <body> <h3 align= center>Please enter the number of items</h3> <form method='POST' align = center> Number of items <input type ="number" name='items' /><br /><br /> <input type='submit' name='submit' value='Submit' /> </form> <div align = center> <?php $itemsArray = array(); if (isset($items) && $items != 0) { echo "<h4 align = center>Please enter the items </h4>"; for ($i = 0; $i < $items; $i++){ $num = $i + 1; echo "<center><h4>Item $num</h4></center>"; ?> <form method='post' align = center > Item name <input tpye="text" name="name[]" /><br /><br /> Item Description <input type='text' name="desc[]" /> <br /><br /> <?php $itemsArray[$i] = array($name[$i], $desc[$i]); } echo "<input type='submit' name='SubmitItem' value='Submit' />"; echo "</form>"; } if(isset($SubmitItem)){ foreach($itemsArray as $item => $data){ $it = $data[0]; $it2 = $data[1]; echo $it; // not getting any values after submitting the form. echo $it2; } } ?> </div> </body> </html>
[ "There are a couple of issues with your code. First, you are using the same name for the input fields for each item. This means that when you submit the form, only the last set of inputs will be submitted because the previous inputs with the same name will be overwritten. To fix this, you can include the item number in the input name, like this:\nItem name <input tpye=\"text\" name=\"name<?php echo $i; ?>\" /><br /><br />\nItem Description <input type='text' name=\"desc<?php echo $i; ?>\" /> <br /><br />\n\nSecondly, you are not closing the for loop after the form, which means that the form is not properly closed. To fix this, simply add a closing } at the end of the for loop like this:\nfor ($i = 0; $i < $items; $i++){ \n $num = $i + 1;\n echo \"<center><h4>Item $num</h4></center>\";\n ?>\n <form method='post' align = center >\n Item name <input tpye=\"text\" name=\"name<?php echo $i; ?>\" /><br /><br />\n Item Description <input type='text' name=\"desc<?php echo $i; ?>\" /> <br /><br />\n <?php \n $itemsArray[$i] = array($name[$i], $desc[$i]);\n}\n\nLastly, you are not properly accessing the values in the $itemsArray after the form is submitted. In your foreach loop, you are accessing the values as $data[0] and $data[1], but since you added the item number to the input name, you need to access the values using the item number, like this:\nforeach($itemsArray as $item => $data){\n $it = $data[0];\n $it2 = $data[1];\n echo $name[$item]; // access the input value using the item number\n echo $desc[$item];\n}\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "php" ]
stackoverflow_0074680287_arrays_php.txt
Q: Is there a way to translate date placeholders in Javascript? I'm looking for a way to translate the placeholder for a date picker in the input as soon as the user changes the language. For example, the default (EN) format is dd/mm/yyyy, but if the user changes the language to french, this should be changed to jj/mm/aaaa. Currently I'm using the momentJS library but unfortunately, this doesn't support date translation, only the correct local format. Is there a library/other way suitable for this? Thank you A: // Get the current language from the user's browser or some other source let language = navigator.language; // Create a DateTimeFormat object for the current language let dateTimeFormat = new Intl.DateTimeFormat(language, { year: "numeric", month: "2-digit", day: "2-digit" }); // Get the formatted date pattern for the current language let pattern = dateTimeFormat.formatToParts().map(part => part.value).join(""); // Set the placeholder of the date picker to the formatted date pattern let datePicker = document.getElementById("date-picker"); datePicker.placeholder = pattern;
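Note that formatToParts() yields digits rather than letters, so jj/mm/aaaa-style placeholders need a small per-locale letter map on top of it — a sketch, where the letter translations themselves are assumptions to verify per locale:
const letters = {
    fr: { day: "jj", month: "mm", year: "aaaa" },
    en: { day: "dd", month: "mm", year: "yyyy" }
};

function datePlaceholder(locale) {
    const map = letters[locale.slice(0, 2)] || letters.en;
    return new Intl.DateTimeFormat(locale, { year: "numeric", month: "2-digit", day: "2-digit" })
        .formatToParts(new Date())
        .map(part => map[part.type] || part.value)  // keep literal separators like "/"
        .join("");
}

// datePlaceholder("fr")    -> "jj/mm/aaaa"
// datePlaceholder("en-GB") -> "dd/mm/yyyy"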
Is there a way to translate date placeholders in Javascript?
I'm looking for a way to translate the placeholder for a date picker in the input as soon as the user changes the language. For example, the default (EN) format is dd/mm/yyyy, but if the user changes the language to french, this should be changed to jj/mm/aaaa. Currently I'm using the momentJS library but unfortunately, this doesn't support date translation, only the correct local format. Is there a library/other way suitable for this? Thank you
[ "// Get the current language from the user's browser or some other source\nlet language = navigator.language;\n\n// Create a DateTimeFormat object for the current language\nlet dateTimeFormat = new Intl.DateTimeFormat(language, {\n year: \"numeric\",\n month: \"2-digit\",\n day: \"2-digit\"\n});\n\n// Get the formatted date pattern for the current language\nlet pattern = dateTimeFormat.formatToParts().map(part => part.value).join(\"\");\n\n// Set the placeholder of the date picker to the formatted date pattern\nlet datePicker = document.getElementById(\"date-picker\");\ndatePicker.placeholder = pattern;\n\n" ]
[ 2 ]
[]
[]
[ "date_formatting", "momentjs", "placeholder" ]
stackoverflow_0074655696_date_formatting_momentjs_placeholder.txt
Q: Most efficient way to reverse the order of a BitArray? I've been wondering what the most efficient way to reverse the order of a BitArray in C#. To be clear, I don't want to inverse the Bitarray by calling .Not(), I want to reverse the order of the bits in the array.
Cheers, Chris
A: public void Reverse(BitArray array)
{
    int length = array.Length;
    int mid = (length / 2);

    for (int i = 0; i < mid; i++)
    {
        bool bit = array[i];
        array[i] = array[length - i - 1];
        array[length - i - 1] = bit;
    } 
}

A: For a long array and relative few uses, just wrap it:
 class BitArrayReverse
 {
    private BitArray _ba;

    public BitArrayReverse(BitArray ba) { _ba = ba; }

    public bool this[int index]
    {
        get { return _ba[_ba.Length - 1 - index]; }
        set { _ba[_ba.Length - 1 - index] = value; }
    }

 }

A: This will be the best way
to reverse MSB <-> LSB of any length using XOR in the for loop
public static BitArray BitsReverse(BitArray bits)
{
  int len = bits.Count;
  BitArray a = new BitArray(bits);
  BitArray b = new BitArray(bits);

  for (int i = 0, j = len-1; i < len; ++i, --j)
  {
    a[i] = a[i] ^ b[j];
    b[j] = a[i] ^ b[j];
    a[i] = a[i] ^ b[j];
  }

  return a; 
} 
// in 010000011010000011100b
// out 001110000010110000010b

A: Dim myBA As New BitArray(4)
 myBA(0) = True
 myBA(1) = False
 myBA(2) = True
 myBA(3) = True
 Dim myBoolArray1(3) As Boolean
 myBA.CopyTo(myBoolArray1, 0)
 Array.Reverse(myBoolArray1)
 myBA = New BitArray(myBoolArray1)

A: Because the size if fixed at 8-bits just the "table" lookup from below is sufficient -- when dealing with a plain byte a look-up is likely the quickest way. The extra overhead of BitSet to get/set the data may, however, nullify the look-up benefit. Also the initial build cost and persistent overhead need to be considered (but the values could be coded into an array literal ... ick!)
On the other hand, if the data is only 8 bit (ever), and "performance is important", why use a BitArray at all? A BitArray could always be used for the nice features, such as "exploding" to an Enumerable while C# already has decent byte bit manipulation built-in.
Assuming a more general case that the data is 8-bit aligned... but of some undetermined length
Is this actually better (faster, more efficient, etc) than just doing it "per item" in the BitArray? I have no idea but suspect not. I would definitely start with the "simple" methods -- this is here as just a proof-of-concept and may (or may not be) interesting to compare in a benchmark. Anyway, write for clarity first ... and the below is not it! (There is at least one bug in it -- I blame the extra complexity ;-)
byte reverse (byte b) { 
    byte o = 0;
    for (var i = 0; i < 8; i++) {
        o <<= 1;
        o |= (byte)(b & 1);
        b >>= 1;
    }
    return o;
}

byte[] table;
BitArray reverse8 (BitArray ar) {
    if (ar.Count % 8 != 0) {
        throw new Exception("no!");
    }

    byte[] d = new byte[ar.Count / 8];
    ar.CopyTo(d, 0);

    // this only works if the bit array is
    // a multiple of 8. we swap bytes and
    // then reverse bits in each byte
    int mid = d.Length / 2; 
    for (int i = 0, j = d.Length - 1; i < mid; i++, j--) {
        byte t = d[i];
        d[i] = table[d[j]];
        d[j] = table[t];
    }

    return new BitArray(d);
}

string tostr (BitArray x) {
    return string.Join("",
        x.OfType<bool>().Select(i => i ? "1" : "0").ToArray());
}

void Main()
{
    table = Enumerable.Range(0,256).Select(v => reverse((byte)v)).ToArray();
    {
        byte[] s = new byte[] { 1, 0xff }; 
        BitArray ar = new BitArray(s); 
        // linqpad :)
        tostr(ar).Dump();
        tostr(reverse8(ar)).Dump();
    }
    "--".Dump();
    {
        byte[] s = new byte[] { 3, 42, 19 };
        BitArray ar = new BitArray(s); 
        // linqpad :)
        tostr(ar).Dump();
        tostr(reverse8(ar)).Dump();
    } 
}

Output:
1000000011111111
1111111100000001
--
110000000101010011001000
000100110101010000000011

The expr.Dump() is a LINQPad feature.
A: For a short but inefficient answer:
using System.Linq;

var reversedBa = new BitArray(myBa.Cast<bool>().Reverse().ToArray())

A: Adapted the answer from @TimLoyd and turned it into an extension for easier use.
public static BitArray Reverse(this BitArray array)
{
    int length = array.Length;
    int mid = (length / 2);

    for (int i = 0; i < mid; i++)
    {
        bool bit = array[i];
        array[i] = array[length - i - 1];
        array[length - i - 1] = bit;
    }

    return new BitArray(array);
}

Usage:
var bits = new BitArray(some_bytes).Reverse();
Most efficient way to reverse the order of a BitArray?
I've been wondering what the most efficient way to reverse the order of a BitArray in C#. To be clear, I don't want to inverse the Bitarray by calling .Not(), I want to reverse the order of the bits in the array. Cheers, Chris
[ "public void Reverse(BitArray array)\n{\n int length = array.Length;\n int mid = (length / 2);\n\n for (int i = 0; i < mid; i++)\n {\n bool bit = array[i];\n array[i] = array[length - i - 1];\n array[length - i - 1] = bit;\n } \n}\n\n", "For a long array and relative few uses, just wrap it:\n class BitArrayReverse\n {\n private BitArray _ba;\n\n public BitArrayReverse(BitArray ba) { _ba = ba; }\n\n public bool this[int index]\n {\n get { return _ba[_ba.Length - 1 - index]; }\n set { _ba[_ba.Length - 1 - index] = value; }\n }\n\n }\n\n", "This will be the best way\nto reverse MSB <-> LSB of any length using XOR in the for loop\npublic static BitArray BitsReverse(BitArray bits)\n{\n int len = bits.Count;\n BitArray a = new BitArray(bits);\n BitArray b = new BitArray(bits);\n\n for (int i = 0, j = len-1; i < len; ++i, --j)\n {\n a[i] = a[i] ^ b[j];\n b[j] = a[i] ^ b[j];\n a[i] = a[i] ^ b[j];\n }\n\n return a; \n} \n// in 010000011010000011100b\n// out 001110000010110000010b\n\n", " Dim myBA As New BitArray(4)\n myBA(0) = True\n myBA(1) = False\n myBA(2) = True\n myBA(3) = True\n Dim myBoolArray1(3) As Boolean\n myBA.CopyTo(myBoolArray1, 0)\n Array.Reverse(myBoolArray1)\n myBA = New BitArray(myBoolArray1)\n\n", "Because the size if fixed at 8-bits just the \"table\" lookup from below is sufficient -- when dealing with a plain byte a look-up is likely the quickest way. The extra overhead of BitSet to get/set the data may, however, nullify the look-up benefit. Also the initial build cost and persistent overhead need to be considered (but the values could be coded into an array literal ... ick!)\nOn the other hand, if the data is only 8 bit (ever), and \"performance is important\", why use a BitArray at all? A BitArray could always be used for the nice features, such as \"exploding\" to an Enumerable while C# already has decent byte bit manipulation built-in.\nAssuming a more general case that the data is 8-bit aligned... but of some undetermined length\nIs this actually better (faster, more efficient, etc) than just doing it \"per item\" in the BitArray? I have no idea but suspect not. I would definitely start with the \"simple\" methods -- this is here as just a proof-of-concept and may (or may not be) interesting to compare in a benchmark. Anyway, write for clarity first ... and the below is not it! (There is at least one bug in it -- I blame the extra complexity ;-)\nbyte reverse (byte b) { \n byte o = 0;\n for (var i = 0; i < 8; i++) {\n o <<= 1;\n o |= (byte)(b & 1);\n b >>= 1;\n }\n return o;\n}\n\nbyte[] table;\nBitArray reverse8 (BitArray ar) {\n if (ar.Count % 8 != 0) {\n throw new Exception(\"no!\");\n }\n\n byte[] d = new byte[ar.Count / 8];\n ar.CopyTo(d, 0);\n\n // this only works if the bit array is\n // a multiple of 8. we swap bytes and\n // then reverse bits in each byte\n int mid = d.Length / 2; \n for (int i = 0, j = d.Length - 1; i < mid; i++, j--) {\n byte t = d[i];\n d[i] = table[d[j]];\n d[j] = table[t];\n }\n\n return new BitArray(d);\n}\n\nstring tostr (BitArray x) {\n return string.Join(\"\",\n x.OfType<bool>().Select(i => i ? 
\"1\" : \"0\").ToArray());\n}\n\nvoid Main()\n{\n table = Enumerable.Range(0,256).Select(v => reverse((byte)v)).ToArray();\n {\n byte[] s = new byte[] { 1, 0xff }; \n BitArray ar = new BitArray(s); \n // linqpad :)\n tostr(ar).Dump();\n tostr(reverse8(ar)).Dump();\n }\n \"--\".Dump();\n {\n byte[] s = new byte[] { 3, 42, 19 };\n BitArray ar = new BitArray(s); \n // linqpad :)\n tostr(ar).Dump();\n tostr(reverse8(ar)).Dump();\n } \n}\n\nOutput:\n1000000011111111\n1111111100000001\n--\n110000000101010011001000\n000100110101010000000011\n\nThe expr.Dump() is a LINQPad feature.\n", "For a short but inefficient answer:\nusing System.Linq;\n\nvar reversedBa = new BitArray(myBa.Cast<bool>().Reverse().ToArray())\n\n", "Adapted the answer from @TimLoyd and turned it into an extension for easier use.\npublic static BitArray Reverse(this BitArray array)\n{\n int length = array.Length;\n int mid = (length / 2);\n\n for (int i = 0; i < mid; i++)\n {\n bool bit = array[i];\n array[i] = array[length - i - 1];\n array[length - i - 1] = bit;\n }\n\n return new BitArray(array);\n}\n\nUsage:\nvar bits = new BitArray(some_bytes).Reverse();\n\n" ]
[ 36, 9, 6, 3, 1, 1, 0 ]
[]
[]
[ "bitarray", "c#" ]
stackoverflow_0004791202_bitarray_c#.txt
Q: "Unable to connect to web server" and issues with localhost ports When I initially made my Blazor WebAssembly solution, a few months ago, I didn't tick an authentication option in the Blazor template on Visual Studio, so I thought I could fix that just by making a new solution and copying in all of authentication aspects into the original. (Could a .NET update could be causing my issue?) Somehow, that caused my app to, as soon as it's built and ran, stop and Visual Studio throws the error "Unable to connect to web server 'Solution.Server'." It is also worth mentioning that app.UseIdentityServer throws an exception when the app is started in debug mode, but I think that's a whole other error, don't know. I tried googling it to no avail, but the Microsoft documentation did mention the exact error stating that it was caused by a difference between the ports specified in the localhost paths of the launchSettings.json documents and the port-specifying app.Run, but I can't find any port specified app.Run and am wondering if this is either outdated or inapplicable to WebAssemblies. Thank you for any help, in advance. A: Nevermind, after looking through a bunch of stuff I have determined that the problem was that I had no ConnectionString in appsettings.json.
"Unable to connect to web server" and issues with localhost ports
When I initially made my Blazor WebAssembly solution, a few months ago, I didn't tick an authentication option in the Blazor template on Visual Studio, so I thought I could fix that just by making a new solution and copying in all of authentication aspects into the original. (Could a .NET update could be causing my issue?) Somehow, that caused my app to, as soon as it's built and ran, stop and Visual Studio throws the error "Unable to connect to web server 'Solution.Server'." It is also worth mentioning that app.UseIdentityServer throws an exception when the app is started in debug mode, but I think that's a whole other error, don't know. I tried googling it to no avail, but the Microsoft documentation did mention the exact error stating that it was caused by a difference between the ports specified in the localhost paths of the launchSettings.json documents and the port-specifying app.Run, but I can't find any port specified app.Run and am wondering if this is either outdated or inapplicable to WebAssemblies. Thank you for any help, in advance.
[ "Nevermind, after looking through a bunch of stuff I have determined that the problem was that I had no ConnectionString in appsettings.json.\n" ]
[ 0 ]
[]
[]
[ "blazor", "blazor_webassembly", "launch", "localhost", "port" ]
stackoverflow_0074667799_blazor_blazor_webassembly_launch_localhost_port.txt
Q: how to implement ApolloGraphQl with Jetpack Compose I have an app that has bottom Navigation bar which implemented with compose navigation. This bottom navigation bar navigate between different screen (each screen is a composable function). I want to execute an enquiry using ApolloGraphQl Client inside home screen. However, if I want to execute the enquiry: fun HomeScreen() { Text(text = "Home") val response = apolloClient.query(LaunchListQuery()).execute() Log.d("LaunchList", "Success ${response.data}") } I get this error Suspend function 'execute' should be called only from a coroutine or another suspend function in ApollographQl tutorial, they used Fragments which has its own lifecycle, but I want to use composable functions and searching online didn't give me a clear answer. I tried @oliverbj answer as: @Preview @Composable fun HomeScreen() { val coroutineScope = rememberCoroutineScope() val getLaunchOnClick: () -> Unit = { coroutineScope.launch { val response = async(Dispatchers.IO) { apolloClient.query(LaunchListQuery()).execute() } Log.d("LaunchList", "Success ${response.await().data}") } } Text(text = "Home") Button(onClick = getLaunchOnClick) { Text("Launch") } } and it worked thanks @oliverbj also thanks for @heyheyhey A: To fix the error you are getting, you can use the withContext function to execute your Apollo GraphQL query within a coroutine. Here's how you can do that: @Composable fun HomeScreen() { val coroutineScope = rememberCoroutineScope() val getLaunchOnClick: () -> Unit = { coroutineScope.launch { val response = withContext(Dispatchers.IO) { apolloClient.query(LaunchListQuery()).execute() } Log.d("LaunchList", "Success ${response.data}") } } Text(text = "Home") } The withContext function lets you specify which coroutine dispatcher to use, in this case Dispatchers.IO, which is the recommended dispatcher for performing blocking IO operations such as making network requests. You can also use the async function to simplify the code even further: @Composable fun HomeScreen() { val coroutineScope = rememberCoroutineScope() val getLaunchOnClick: () -> Unit = { coroutineScope.launch { val response = async(Dispatchers.IO) { apolloClient.query(LaunchListQuery()).execute() } Log.d("LaunchList", "Success ${response.await().data}") } } Text(text = "Home") } The async function returns a Deferred object, which you can await to get the result of the suspended computation.
how to implement ApolloGraphQl with Jetpack Compose
I have an app that has bottom Navigation bar which implemented with compose navigation. This bottom navigation bar navigate between different screen (each screen is a composable function). I want to execute an enquiry using ApolloGraphQl Client inside home screen. However, if I want to execute the enquiry: fun HomeScreen() { Text(text = "Home") val response = apolloClient.query(LaunchListQuery()).execute() Log.d("LaunchList", "Success ${response.data}") } I get this error Suspend function 'execute' should be called only from a coroutine or another suspend function in ApollographQl tutorial, they used Fragments which has its own lifecycle, but I want to use composable functions and searching online didn't give me a clear answer. I tried @oliverbj answer as: @Preview @Composable fun HomeScreen() { val coroutineScope = rememberCoroutineScope() val getLaunchOnClick: () -> Unit = { coroutineScope.launch { val response = async(Dispatchers.IO) { apolloClient.query(LaunchListQuery()).execute() } Log.d("LaunchList", "Success ${response.await().data}") } } Text(text = "Home") Button(onClick = getLaunchOnClick) { Text("Launch") } } and it worked thanks @oliverbj also thanks for @heyheyhey
[ "To fix the error you are getting, you can use the withContext function to execute your Apollo GraphQL query within a coroutine. Here's how you can do that:\n@Composable\nfun HomeScreen() {\n val coroutineScope = rememberCoroutineScope()\n val getLaunchOnClick: () -> Unit = {\n coroutineScope.launch {\n val response = withContext(Dispatchers.IO) {\n apolloClient.query(LaunchListQuery()).execute()\n }\n Log.d(\"LaunchList\", \"Success ${response.data}\")\n }\n }\n Text(text = \"Home\")\n}\n\nThe withContext function lets you specify which coroutine dispatcher to use, in this case Dispatchers.IO, which is the recommended dispatcher for performing blocking IO operations such as making network requests.\nYou can also use the async function to simplify the code even further:\n@Composable\nfun HomeScreen() {\n val coroutineScope = rememberCoroutineScope()\n val getLaunchOnClick: () -> Unit = {\n coroutineScope.launch {\n val response = async(Dispatchers.IO) {\n apolloClient.query(LaunchListQuery()).execute()\n }\n Log.d(\"LaunchList\", \"Success ${response.await().data}\")\n }\n }\n Text(text = \"Home\")\n}\n\nThe async function returns a Deferred object, which you can await to get the result of the suspended computation.\n" ]
[ 1 ]
[]
[]
[ "android_jetpack_compose", "graphql", "kotlin" ]
stackoverflow_0074679772_android_jetpack_compose_graphql_kotlin.txt
Q: How to make an image width the same as the screen width I'm trying to make my school project's website mobile-friendly, but I don't want the user to scroll left or right to see images. I want the value of max-width of an image to be the same as the width of the users screen. (When on mobile) CSS html, body { height: 100%; width: 100%; margin: 0; } header{ margin: 0 auto; justify-content: center; width: 519px; height: 200px; } header img{ width: 519px; height: 200px; } HTML <header> <h1 class="hidden">Welcome to caillou.tk!</h1> <a href="/"><img src="img/k.png" alt="Caillou Logo"></a> </header> <nav> <ul> <li><a href="/">Home</a></li> <li><a href="https://www.youtube.com/channel/UC4yQCVlLhTmOqX5kUkAGr0g">Watch</a></li> <li><a href="characters">Characters</a></li> <li><a href="about">About Caillou</a></li> <li><a href="credits">Credits</a></li> </ul> </nav> Yes its a website about Caillou, I have too for a school project A: I recommend using the percentage attribute in a css class, like: img { width: 100%; height: 100%; } if it's only on mobile devices you could make use of the media query: @media screen and (min-width: 992px) { img { width: 100%; height: 100%; } } Edit: based on new code and with a random image to represent. html, body { height: 100%; width: 100%; margin: 0; } header{ margin: 0 auto; justify-content: center; width: 100%; height: 200px; } header img{ display:flex; width: 100%; height: 100%; } <header> <h1 class="hidden">Welcome to caillou.tk!</h1> <a href="/"><img src="https://www.intz.com.br/wp-content/uploads/wallpapers/WallpaperLiquifyUltraWide.png" alt="Caillou Logo"></a> </header> <nav> <ul> <li><a href="/">Home</a></li> <li><a href="https://www.youtube.com/channel/UC4yQCVlLhTmOqX5kUkAGr0g">Watch</a></li> <li><a href="characters">Characters</a></li> <li><a href="about">About Caillou</a></li> <li><a href="credits">Credits</a></li> </ul> </nav> A: If it's a background image, you can use this CSS: #image { min-width: 100vw; min-height: 100vh; overflow: hidden; } The overflow will be cut off so that the user isn't able to scroll. (Assuming image is the id of your image) A: You can use: img { width: 100vw; } A: use width in percentage img { width: 100%; height: 100%; }
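A pattern worth noting alongside the answers above — max-width with an automatic height keeps the image inside any viewport without distorting it (the 519px design width is taken from the question):
header img {
    max-width: 100%;   /* never wider than the container/screen */
    width: 519px;      /* preferred size on large screens */
    height: auto;      /* preserve the aspect ratio when shrinking */
}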
How to make an image width the same as the screen width
I'm trying to make my school project's website mobile-friendly, but I don't want the user to scroll left or right to see images. I want the value of max-width of an image to be the same as the width of the users screen. (When on mobile) CSS html, body { height: 100%; width: 100%; margin: 0; } header{ margin: 0 auto; justify-content: center; width: 519px; height: 200px; } header img{ width: 519px; height: 200px; } HTML <header> <h1 class="hidden">Welcome to caillou.tk!</h1> <a href="/"><img src="img/k.png" alt="Caillou Logo"></a> </header> <nav> <ul> <li><a href="/">Home</a></li> <li><a href="https://www.youtube.com/channel/UC4yQCVlLhTmOqX5kUkAGr0g">Watch</a></li> <li><a href="characters">Characters</a></li> <li><a href="about">About Caillou</a></li> <li><a href="credits">Credits</a></li> </ul> </nav> Yes its a website about Caillou, I have too for a school project
[ "I recommend using the percentage attribute in a css class, like:\nimg {\n width: 100%;\n height: 100%;\n}\n\nif it's only on mobile devices you could make use of the media query:\n@media screen and (min-width: 992px) {\n img {\n width: 100%;\n height: 100%;\n }\n}\n\nEdit: based on new code and with a random image to represent.\n\n\nhtml, body {\n height: 100%;\n width: 100%;\n margin: 0;\n}\n\n\nheader{\n margin: 0 auto;\n justify-content: center;\n width: 100%;\n height: 200px;\n}\n\nheader img{\n display:flex;\n width: 100%;\n height: 100%;\n}\n<header>\n <h1 class=\"hidden\">Welcome to caillou.tk!</h1>\n <a href=\"/\"><img src=\"https://www.intz.com.br/wp-content/uploads/wallpapers/WallpaperLiquifyUltraWide.png\" alt=\"Caillou Logo\"></a>\n\n </header>\n <nav>\n <ul>\n <li><a href=\"/\">Home</a></li>\n <li><a href=\"https://www.youtube.com/channel/UC4yQCVlLhTmOqX5kUkAGr0g\">Watch</a></li>\n <li><a href=\"characters\">Characters</a></li>\n <li><a href=\"about\">About Caillou</a></li>\n <li><a href=\"credits\">Credits</a></li>\n </ul>\n </nav>\n\n\n\n", "If it's a background image, you can use this CSS:\n#image {\n min-width: 100vw;\n min-height: 100vh;\n overflow: hidden;\n}\n\nThe overflow will be cut off so that the user isn't able to scroll.\n(Assuming image is the id of your image)\n", "You can use:\nimg {\n width: 100vw;\n}\n\n", "use width in percentage\nimg {\n width: 100%;\n height: 100%;\n}\n\n" ]
[ 3, 2, 1, 0 ]
[]
[]
[ "css", "html", "javascript" ]
stackoverflow_0074680152_css_html_javascript.txt
Q: how to convert this time 2007-10-01T01:02:03.004 into datetime in snowflake 2007-10-01T01:02:03.004 convert into datetime in snowflake. I used yyyymmddhh24miss: 2022-11-03 09:13:48.000
A: This type of input time format is automatically recognized. The following should work:
select to_timestamp_ntz('2007-10-01T01:02:03.004');

The output will depend on the output data format set as a session parameter, e.g.:
alter session set timestamp_ntz_output_format = 'YYYY-MM-DD HH24:MI:SS.FF3';

Therefore the select statement above will output:
2007-10-01 01:02:03.004

doc reference
To manipulate the actual timestamp value, you can use the DATEADD() function, e.g.:
select dateadd(months, 175, to_timestamp_ntz('2007-10-01T01:02:03.004')) as result;

Output:
2022-05-01 01:02:03.004

Doc reference
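And if the goal is the yyyymmddhh24miss shape mentioned in the question, TO_CHAR applies an output format to the parsed value — a sketch (the expected output assumes the literal above):
select to_char(to_timestamp_ntz('2007-10-01T01:02:03.004'),
               'YYYYMMDDHH24MISS') as result;

-- expected: 20071001010203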
how to convert this time 2007-10-01T01:02:03.004 into datetime in snowflake
2007-10-01T01:02:03.004 convert into datetime in snowflake i used yyyymmddhh24miss 2022-11-03 09:13:48.000
[ "This type of input time format is automatically recognized. The following should work:\nselect to_timestamp_ntz('2007-10-01T01:02:03.004');\n\nthe output will depend on the output data format set as a session parameter, e.g.:\nalter session set timestamp_ntz_output_format = 'YYYY-MM-DD HH24:MI:SS.FF3';\n\nTherefore the select statement above will output:\n\n2007-10-01 01:02:03.004\n\ndoc refference\nTo manipulate the actual timestamp value, you can use the DATEADD() function, e.g.:\nselect dateadd(months, 175, to_timestamp_ntz('2007-10-01T01:02:03.004')) as result;\n\nOutput:\n\n2022-05-01 01:02:03.004\n\nDoc refference\n" ]
[ 0 ]
[]
[]
[ "snowflake_cloud_data_platform", "snowflake_schema" ]
stackoverflow_0074680111_snowflake_cloud_data_platform_snowflake_schema.txt
Q: How to restore postgres dump through pgadmin docker container (dockerized postgres) Dear stackoverflowers, I use docker-compose to run the dockerized postgresql server and dockerized pgadmin4 webserver. When I try to restore a dump via the web interface it shows me an empty folder with the path "/" for the source location of the dump. Now my question: is it in general possible to restore a dump via dockerized pgadmin, and if so, what path from which container (postgres or pgadmin) do I have to mount as a volume to provide the dump to be restored?
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:12.10
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ***
      POSTGRES_DB: postgres
    ports:
      - "5432:5432"
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: ***
    ports:
      - "5050:80"

With kind regards
starguy
A: I'm in the same situation as you. In the web pgadmin, you have the option to upload a .sql file just by clicking the ... button (see the pgadmin4 upload image).
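One common approach, sketched with assumed paths and dump names: mount a host folder into the postgres container and restore from the command line, bypassing the pgadmin file browser entirely.
# docker-compose.yml fragment (under the db service)
    volumes:
      - ./dumps:/dumps

# custom-format dump
$ docker exec -i pg_container pg_restore -U postgres -d postgres /dumps/backup.dump
# plain-SQL dump
$ docker exec -i pg_container psql -U postgres -d postgres -f /dumps/backup.sql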
How to restore postgres dump through pgadmin docker container (dockerized postgres)
Dear stackoverflowers, I use docker-compose to run the dockerized postgresql server and dockerized pgadmin4 webserver. When i try to restore a dump via the web interface it shows me an empty folder with the path "/" for source location of the dump. Now my question, is it in general possible to restore a dump via dockerized pgadmin and if, what path from which container (postgres or pgadmin) do i have to mount as volume to provide the dump to be restored? version: '3.8' services: db: container_name: pg_container image: postgres:12.10 restart: always environment: POSTGRES_USER: postgres POSTGRES_PASSWORD: *** POSTGRES_DB: postgres ports: - "5432:5432" pgadmin: container_name: pgadmin4_container image: dpage/pgadmin4 restart: always environment: PGADMIN_DEFAULT_EMAIL: admin@admin.com PGADMIN_DEFAULT_PASSWORD: *** ports: - "5050:80" With kind regards starguy
[ "I'm in the same situation that you are. in the web pgadmin, you have the option to upload a .sql just by clicking at the ... button. filepgadmin4 upload image.\n" ]
[ 0 ]
[]
[]
[ "docker", "pgadmin", "postgresql" ]
stackoverflow_0071142272_docker_pgadmin_postgresql.txt
Q: How do I store a variable in a function so I can access it from a different file I am trying to make a program that allows you to select a day, and then store a value for the day with a separate file. However, I can't find a way to store the selected day in a variable that I can use.
from tkinter import *
from tkcalendar import *

main = Tk()
main.title('Calendar')
main.geometry('600x400')

cal = Calendar(main, selectmode='day')
cal.pack()

def set_date():
    my_label.config(text=cal.get_date())
    today = cal.get_date()
    print(today)

my_button = Button(main, text='Get Date',command=set_date)
my_button.pack(pady=20)

my_label = Label(main,text="Haha")
my_label.pack(pady=20)

main.mainloop()

If I store the variable inside the function set_date() it stores the date that is selected, but I can't import it from a separate file. And if I store the variable outside of the function set_date() it only stores the current date and not the one selected.
A: I can't tell exactly what you want because of how you worded it, but I'm pretty sure this is what you want
def set_date():
    my_label.config(text=cal.get_date())
    today = cal.get_date()
    return today

If you then import this function in another file and call it like this
selected_date = set_date()

This isn't quite what you want though as it will modify the label each time you want to get the date, so you will want to add this function to get the selected date that you want.
def get_date():
    today = cal.get_date()
    return today
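One way to make the selected date importable from another file is a tiny shared module that both files import — the module and variable names here are just placeholders:
# store.py
selected_date = None

# calendar file
import store

def set_date():
    my_label.config(text=cal.get_date())
    store.selected_date = cal.get_date()

# any other file, after the user has clicked the button
import store
print(store.selected_date)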
How do I store a variable in a function so I can access it from a different file
I am trying to make a program that allows you to select a day, and then store a value for the day with a separate file. However, I can't find a way to store the selected day in a variable that I can use. from tkinter import * from tkcalendar import * main = Tk() main.title('Calendar') main.geometry('600x400') cal = Calendar(main, selectmode='day') cal.pack() def set_date(): my_label.config(text=cal.get_date()) today = cal.get_date() print(today) my_button = Button(main, text='Get Date',command=set_date) my_button.pack(pady=20) my_label = Label(main,text="Haha") my_label.pack(pady=20) main.mainloop() If I store the vairable inside the function set_date() it stores the date that is selected but I can't import it on a separate file. And if I store the variable outside of the function set_date() it only stores the current date and not the one selected.
[ "I can't tell exactly what you want because of how you worded it, but I'm pretty sure this is what you want\ndef set_date():\n my_label.config(text=cal.get_date())\n today = cal.get_date()\n return today\n\nIf you then import this function in another file and call it like this\nselected_date = set_date()\n\nThis isn't quite what you want though as it will modify the label each time you want to get the date, so you will want to add this function to get the selected date that you want.\ndef get_date():\n today = cal.get_date()\n return today\n\n" ]
[ 1 ]
[ "a = 5\n\ndef set_a(val):\n global a\n a = val\n \nprint(a)\nset_a(0)\nprint(a)\n\nWhat you are doing is a very bad practice (you can use the global keyword before your variable to update it, but never do this). Instead either store it in a mutable data object for example a dictionary or as a pickle file(depending on your application).\nprogram_variables = {'a':5}\n\ndef set_a(val):\n program_variables['a'] = val\n \nprint(program_variables['a'])\nset_a(0)\nprint(program_variables['a'])\n\nif you need pickles, you can search for tutorials on using it as well\n" ]
[ -2 ]
[ "python", "tkcalendar", "tkinter" ]
stackoverflow_0074680264_python_tkcalendar_tkinter.txt
Q: Unity script - code is executed not in order. How do I turn off multithreading? What do I mean by code not executing in order? I have the following code:
IEnumerator Game(){
    // do something
    yield return new WaitForSecondsRealtime(10f);
    // do something else
    Debug.Log("game method");
}

void Start(){ 
    // some code ..
    StartCoroutine(Game());
    // other code ..
    Debug.Log("end of the code");
    Debug.Log("something else");
}

For some reason I will get in the console this:
end of the code
something else
game method

I guess it is something related to multithreading. How do I turn it off (How do I avoid it)?
A: It's executing in order. Your code should look like this:
void Start()
{ 
    StartCoroutine(Game());
}

IEnumerator Game(){
    // do something
    yield return new WaitForSecondsRealtime(10f);
    // do something else
    Debug.Log("game method");

    ExecuteLast();
}

void ExecuteLast()
{
    Debug.Log("end of the code");
    Debug.Log("something else");
}
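An equivalent shape that avoids hard-coding the follow-up work inside the coroutine — pass it in as a callback (requires using System; for Action):
IEnumerator Game(Action onDone)
{
    yield return new WaitForSecondsRealtime(10f);
    Debug.Log("game method");
    onDone?.Invoke();   // run whatever the caller wanted to happen afterwards
}

void Start()
{
    StartCoroutine(Game(() =>
    {
        Debug.Log("end of the code");
        Debug.Log("something else");
    }));
}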
Unity script - code is executed not in order. How do I turn off multithreading?
What do I mean by code not executing in order. I have the following code: IEnumerator Game(){ // do something yield return new WaitForSecondsRealtime(10f); // do something else Debug.Log("game method") } void Start(){ // some code .. StartCoroutine(Game()); // other code .. Debug.Log("end of the code"); Debug.Log("something else"); } For some reason I will get in the console this: end of the code something else game method I guess it is something related to multithreading. How do I turn it off (How do I avoid it)?
[ "it's executing in order.\nyour code should look like this :\nvoid Start()\n{ \nStartCoroutine(Game());\n}\n\nIEnumerator Game(){\n// do something\nyield return new WaitForSecondsRealtime(10f);\n// do something else\nDebug.Log(\"game method\")\n\nExecuteLast();\n}\n\nvoid ExecuteLast()\n{\nDebug.Log(\"end of the code\");\nDebug.Log(\"something else\");\n}\n\n" ]
[ 1 ]
[]
[]
[ "c#", "unity3d" ]
stackoverflow_0074680256_c#_unity3d.txt
Q: data lost when read big file in chunks using fread I am a novice of c++. Those are trade data of NYSE, I am using those data to do quant analysis. A single file for one day is about 10G, so I have hundreds of those files. For the first step, I just want make sure that I could read the data by chunk correctly. Then I will maniupulate those chunk. I was told using python would be slow, so I try with c++. I try to read one file in chunks using fread(). I want to count number of lines to check my code, but it's different from the number of lines that I got when I using notepad++. Hope someone could help me with this problem thanks. Data: Q,14340,EUR/NZD,1.65027,1,1.6504,1 T,14340,EUR/NZD,1,1.65034,@,70,X Q,14340,AUD/NZD,1.03427,1,1.03437,1 T,14340,AUD/NZD,1,1.03432,@,70,X Q,14340,CAD/CHF,0.75142,1,0.75146,1 T,14340,CAD/CHF,1,0.75144,@,70,X Q,14340,GBP/NZD,1.90908,1,1.90927,1 T,14340,GBP/NZD,1,1.90918,@,70,X Q,14340,GBP/CHF,1.312,1,1.31208,1 T,14340,GBP/CHF,1,1.31204,@,70,X Q,13724,#6S,0.9928,12,0.9929,29 number of lines using fread: 279 174 248 number of lines using notepad++:279 485 508 Wrong Code: #include <iostream> #include <cstdio> using namespace std; int main() { clock_t start = clock(); char buffer[100000]="\0"; int cursor = sizeof(buffer); FILE* fp; int Judge; int offset = 0; Judge = fopen_s(&fp, "E:\\feedRec\\TFD20190227", "r"); int count = 0; int num = 0; while (1) { //read one chunk num=fread(buffer, sizeof(char), (sizeof(buffer) - 1), fp); // null terminated the buffer buffer[num] = '\0'; char* ptr=buffer; while (*ptr!='\0') { if (*ptr == '\n') { count++; } ptr++; } //Since the lines are not formatted. The buffer will end at //the middle of the line. So I need to move back the pointer //back to the beginning of the sliced line to let it be read in //the next loop. //T,14340,AUD/NZD,1,1.03432,@,70,X //if buffer end here| //pointer need to be move back to the beginning. //T,14340,AUD/NZD,1,1.03432,@,70,X //| //calculte the offset offset = 0; cursor = sizeof(buffer)-1; while (buffer[cursor] != '\n') { cursor--; offset++; } //move back the pointer if (offset > 0 && (num == sizeof(buffer) - 1)) { fseek(fp, -offset, SEEK_CUR); } //clear the buffer memset(buffer, '\0', sizeof(buffer) - 1); // deal with last chunk: if the num less than buffer size, stop if (num < sizeof(buffer)-1) { fclose(fp); break; } } //count the last chunk char* ptr = buffer; while (*ptr != '\0') { if (*ptr == '\n') { count++; } ptr++; } cout << "count: " << count << endl; clock_t end = clock(); cout << "time : " << ((double)end - start) / CLOCKS_PER_SEC << "s\n"; fclose(fp); return 0; } A: There are NUL char in some lines, using "*ptr!='\0'" to determined whether it reached the end of the buffer is wrong: T,14369,GBP/USD,1,1.33012,@,70,X LF T,14370,TIME$,1,9453,@,40, NUL LF Q,10516,#YKH9,901.125,1,911.875,2 LF The loop terminated after "@,40 ". 
clock_t start = clock(); char buffer[300]="\0"; int cursor = sizeof(buffer); FILE* fp; int Judge; int offset = 0; Judge = fopen_s(&fp, "E:\\feedRec\\TFD20190228", "rb"); int count = 0; size_t num = 0; while (1) { num=fread(buffer, sizeof(char), (sizeof(buffer) - 1), fp); buffer[num] = '\0'; char* ptr=buffer; char* endptr = &buffer[num]; int ct = 0; if (num == sizeof(buffer) - 1) { while (ptr != endptr) { ct++; if (*ptr == '\n') { count++; } ptr++; } } offset = 0; cursor = sizeof(buffer)-1; while (buffer[cursor-1] != '\n') { cursor--; offset++; } if (offset > 0 && (num == sizeof(buffer) - 1)) { fseek(fp, -offset, SEEK_CUR); } if (num < sizeof(buffer)-1) { fclose(fp); break; } memset(buffer, '\0', sizeof(buffer) - 1); } char* ptr = buffer; char* endptr = &buffer[num]; int ct = 0; if (num == sizeof(buffer) - 1) { while (ptr != endptr) { ct++; if (*ptr == '\n') { count++; } ptr++; } } cout << "count: " << count << endl; clock_t end = clock(); cout << "time : " << ((double)end - start) / CLOCKS_PER_SEC << "s\n"; fclose(fp); return 0;
data lost when read big file in chunks using fread
I am a novice at C++. Those are trade data of NYSE, and I am using those data to do quant analysis. A single file for one day is about 10G, so I have hundreds of those files. For the first step, I just want to make sure that I could read the data by chunk correctly. Then I will manipulate those chunks. I was told using Python would be slow, so I try with C++. I try to read one file in chunks using fread(). I want to count the number of lines to check my code, but it's different from the number of lines that I got when using Notepad++. Hope someone could help me with this problem. Thanks. Data: Q,14340,EUR/NZD,1.65027,1,1.6504,1 T,14340,EUR/NZD,1,1.65034,@,70,X Q,14340,AUD/NZD,1.03427,1,1.03437,1 T,14340,AUD/NZD,1,1.03432,@,70,X Q,14340,CAD/CHF,0.75142,1,0.75146,1 T,14340,CAD/CHF,1,0.75144,@,70,X Q,14340,GBP/NZD,1.90908,1,1.90927,1 T,14340,GBP/NZD,1,1.90918,@,70,X Q,14340,GBP/CHF,1.312,1,1.31208,1 T,14340,GBP/CHF,1,1.31204,@,70,X Q,13724,#6S,0.9928,12,0.9929,29 number of lines using fread: 279 174 248 number of lines using Notepad++: 279 485 508 Wrong Code: #include <iostream> #include <cstdio> using namespace std; int main() { clock_t start = clock(); char buffer[100000]="\0"; int cursor = sizeof(buffer); FILE* fp; int Judge; int offset = 0; Judge = fopen_s(&fp, "E:\\feedRec\\TFD20190227", "r"); int count = 0; int num = 0; while (1) { //read one chunk num=fread(buffer, sizeof(char), (sizeof(buffer) - 1), fp); // null-terminate the buffer buffer[num] = '\0'; char* ptr=buffer; while (*ptr!='\0') { if (*ptr == '\n') { count++; } ptr++; } //Since the lines are not formatted, the buffer will end at //the middle of a line. So I need to move the pointer //back to the beginning of the sliced line to let it be read in //the next loop. //T,14340,AUD/NZD,1,1.03432,@,70,X //if buffer end here| //pointer needs to be moved back to the beginning. //T,14340,AUD/NZD,1,1.03432,@,70,X //| //calculate the offset offset = 0; cursor = sizeof(buffer)-1; while (buffer[cursor] != '\n') { cursor--; offset++; } //move back the pointer if (offset > 0 && (num == sizeof(buffer) - 1)) { fseek(fp, -offset, SEEK_CUR); } //clear the buffer memset(buffer, '\0', sizeof(buffer) - 1); // deal with the last chunk: if num is less than the buffer size, stop if (num < sizeof(buffer)-1) { fclose(fp); break; } } //count the last chunk char* ptr = buffer; while (*ptr != '\0') { if (*ptr == '\n') { count++; } ptr++; } cout << "count: " << count << endl; clock_t end = clock(); cout << "time : " << ((double)end - start) / CLOCKS_PER_SEC << "s\n"; fclose(fp); return 0; }
[ "There are NUL char in some lines, using \"*ptr!='\\0'\" to determined whether it reached the end of the buffer is wrong:\nT,14369,GBP/USD,1,1.33012,@,70,X LF\nT,14370,TIME$,1,9453,@,40, NUL LF\nQ,10516,#YKH9,901.125,1,911.875,2 LF\nThe loop terminated after \"@,40 \".\nclock_t start = clock();\nchar buffer[300]=\"\\0\";\nint cursor = sizeof(buffer);\nFILE* fp;\nint Judge;\nint offset = 0;\nJudge = fopen_s(&fp, \"E:\\\\feedRec\\\\TFD20190228\", \"rb\");\nint count = 0;\nsize_t num = 0;\nwhile (1)\n{\n num=fread(buffer, sizeof(char), (sizeof(buffer) - 1), fp);\n\n buffer[num] = '\\0';\n\n\n char* ptr=buffer;\n char* endptr = &buffer[num];\n int ct = 0;\n if (num == sizeof(buffer) - 1) \n { \n while (ptr != endptr) {\n ct++;\n if (*ptr == '\\n') {\n count++;\n }\n ptr++;\n }\n }\n offset = 0;\n cursor = sizeof(buffer)-1;\n while (buffer[cursor-1] != '\\n') {\n cursor--;\n offset++;\n }\n\n if (offset > 0 && (num == sizeof(buffer) - 1)) {\n fseek(fp, -offset, SEEK_CUR);\n }\n \n if (num < sizeof(buffer)-1) \n {\n fclose(fp); break; \n }\n memset(buffer, '\\0', sizeof(buffer) - 1);\n}\n\nchar* ptr = buffer;\nchar* endptr = &buffer[num];\nint ct = 0;\nif (num == sizeof(buffer) - 1)\n{\n while (ptr != endptr) {\n ct++;\n if (*ptr == '\\n') {\n count++;\n }\n ptr++;\n }\n}\ncout << \"count: \" << count << endl;\nclock_t end = clock();\ncout << \"time : \" << ((double)end - start) / CLOCKS_PER_SEC << \"s\\n\";\nfclose(fp);\nreturn 0;\n\n" ]
[ 0 ]
[]
[]
[ "c++", "fread" ]
stackoverflow_0074672465_c++_fread.txt
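A minimal standalone sketch (not the asker's full program) of the same line count driven by the byte count that fread returns, so embedded NUL bytes cannot cut the scan short and no fseek back-tracking is needed. The file name is the one from the question; "rb" avoids any newline translation on Windows.

#include <cstdio>
#include <cstring>

int main()
{
    std::FILE* fp = std::fopen("E:\\feedRec\\TFD20190227", "rb");
    if (!fp) return 1;

    char buffer[1 << 16];
    long long count = 0;
    size_t n;
    while ((n = std::fread(buffer, 1, sizeof buffer, fp)) > 0)
    {
        // Scan exactly the n bytes read; '\0' bytes in the data are ignored.
        const char* p = buffer;
        const char* end = buffer + n;
        while ((p = static_cast<const char*>(std::memchr(p, '\n', end - p))) != nullptr)
        {
            ++count;
            ++p;
        }
    }
    std::fclose(fp);
    std::printf("count: %lld\n", count);
    return 0;
}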
Q: How can I contain this image grid within the viewport height without using overflow: hidden? I'm trying to contain this image grid within the viewport height and it works if I use overflow: hidden on its wrapper. However, I want to add label elements to the images that overflow their wrappers, so I need to find a solution that would keep them visible. I also need the images to stay grouped together even if the viewport is resized (always touching) as they are right now. The images need to be shown fully. I've added a label example in the first wrapper. As you can see, most of it is hidden, but I'd like it all to be visible and for it to overflow out of the wrapper (not to be contained in it). Any help is appreciated. https://jsfiddle.net/k54doq89/2/ #_parent { display: flex; position: relative; height: 100vh; width: 50vw; } #_grid { display: flex; height: 100%; width: 100%; place-items: center; justify-content: center; margin: auto; border: 0; padding: 0; } #_row { display: grid; max-width: 100%; height: 100%; align-content: center; margin: 0; border: 0; padding: 0; grid-template-columns: repeat(3, 1fr); } ._img { height: 100%; width: 100%; object-fit: contain; margin: 0; border: 0; padding: 0; } .wrapper { overflow: hidden; display: flex; justify-content: center; align-items: flex-end; margin: 0; border: 0; padding: 0; position: relative; clear: both; } .label-example { position: absolute; display: flex; flex-direction: column; justify-content: center; align-items: center; align-content: center; height: 100%; width: 100%; color:magenta; } body { margin: 0; padding: 0; border: 0; } <div id="_parent"> <div id="_grid"> <div id="_row"> <div class="wrapper"> <div class="label-example">1234567890</div> <img id="" src="//placeimg.com/295/420?text=1" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=2" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=3" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=4" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=5" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=6" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=7" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=8" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=9" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=10" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=11" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=12" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=13" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=14" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=15" class="_img"> </div> </div> </div> </div> A: I've managed to solve it by adding "min-height: min-content;" to the wrapper element. 
A: #_parent { display: flex; position: relative; height: 100vh; width: 50vw; } #_grid { display: flex; height: inherit; width: inherit; } #_row { display: grid; width: 100%; height: 100%; grid-template-columns: repeat(3, 1fr); } ._img { height: 100%; width: 100%; object-fit: cover; object-position: center top; position: absolute; left: 0; top: 0; right: 0; z-index: 1; } .wrapper { position: relative; } .label-example { position: absolute; z-index: 2; display: flex; flex-direction: column; justify-content: center; align-items: center; align-content: center; height: 100%; width: 100%; color:magenta; } body { margin: 0; padding: 0; border: 0; } <div id="_parent"> <div id="_grid"> <div id="_row"> <div class="wrapper"> <div class="label-example">1234567890</div> <img id="" src="//placeimg.com/295/420?text=1" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=2" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=3" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=4" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=5" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=6" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=7" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=8" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=9" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=10" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=11" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=12" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=13" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=14" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=15" class="_img"> </div> </div> </div> </div> My example work on first browser paint #_parent { height: 100vh; width: 50vw; } #_grid { display: flex; height: 100%; width: 100%; } #_row { display: grid; height: 100%; grid-template-columns: repeat(3, 1fr); } ._img { height: 100%; width: 100%; object-fit: contain; display: block; } .wrapper { overflow: hidden; position: relative; } body { margin: 0; padding: 0; border: 0; } <div id="_parent"> <div id="_grid"> <div id="_row"> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=1" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=2" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=3" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=4" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=5" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=6" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=7" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=8" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=9" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=10" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=11" class="_img"> </div> <div 
class="wrapper"> <img id="" src="//placeimg.com/295/420?text=12" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=13" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=14" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=15" class="_img"> </div> </div> </div> </div>
How can I contain this image grid within the viewport height without using overflow: hidden?
I'm trying to contain this image grid within the viewport height and it works if I use overflow: hidden on its wrapper. However, I want to add label elements to the images that overflow their wrappers, so I need to find a solution that would keep them visible. I also need the images to stay grouped together even if the viewport is resized (always touching) as they are right now. The images need to be shown fully. I've added a label example in the first wrapper. As you can see, most of it is hidden, but I'd like it all to be visible and for it to overflow out of the wrapper (not to be contained in it). Any help is appreciated. https://jsfiddle.net/k54doq89/2/ #_parent { display: flex; position: relative; height: 100vh; width: 50vw; } #_grid { display: flex; height: 100%; width: 100%; place-items: center; justify-content: center; margin: auto; border: 0; padding: 0; } #_row { display: grid; max-width: 100%; height: 100%; align-content: center; margin: 0; border: 0; padding: 0; grid-template-columns: repeat(3, 1fr); } ._img { height: 100%; width: 100%; object-fit: contain; margin: 0; border: 0; padding: 0; } .wrapper { overflow: hidden; display: flex; justify-content: center; align-items: flex-end; margin: 0; border: 0; padding: 0; position: relative; clear: both; } .label-example { position: absolute; display: flex; flex-direction: column; justify-content: center; align-items: center; align-content: center; height: 100%; width: 100%; color:magenta; } body { margin: 0; padding: 0; border: 0; } <div id="_parent"> <div id="_grid"> <div id="_row"> <div class="wrapper"> <div class="label-example">1234567890</div> <img id="" src="//placeimg.com/295/420?text=1" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=2" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=3" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=4" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=5" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=6" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=7" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=8" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=9" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=10" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=11" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=12" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=13" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=14" class="_img"> </div> <div class="wrapper"> <img id="" src="//placeimg.com/295/420?text=15" class="_img"> </div> </div> </div> </div>
[ "I've managed to solve it by adding \"min-height: min-content;\" to the wrapper element.\n", "\n\n#_parent {\n display: flex;\n position: relative;\n height: 100vh;\n width: 50vw;\n}\n\n#_grid {\n display: flex;\n height: inherit;\n width: inherit;\n}\n\n#_row {\n display: grid;\n width: 100%;\n height: 100%;\n grid-template-columns: repeat(3, 1fr);\n}\n\n._img {\n height: 100%;\n width: 100%;\n object-fit: cover;\n object-position: center top;\n position: absolute;\n left: 0;\n top: 0;\n right: 0;\n z-index: 1;\n}\n\n.wrapper {\n position: relative;\n}\n\n.label-example {\n position: absolute;\n z-index: 2;\n display: flex;\n flex-direction: column;\n justify-content: center;\n align-items: center;\n align-content: center;\n height: 100%;\n width: 100%;\n color:magenta;\n}\n\nbody {\n margin: 0;\n padding: 0;\n border: 0;\n}\n<div id=\"_parent\">\n <div id=\"_grid\">\n <div id=\"_row\">\n <div class=\"wrapper\">\n <div class=\"label-example\">1234567890</div>\n <img id=\"\" src=\"//placeimg.com/295/420?text=1\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=2\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=3\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=4\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=5\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=6\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=7\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=8\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=9\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=10\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=11\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=12\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=13\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=14\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=15\" class=\"_img\">\n </div>\n </div>\n </div>\n</div>\n\n\n\nMy example work on first browser paint\n\n\n#_parent {\n height: 100vh;\n width: 50vw;\n}\n\n#_grid {\n display: flex;\n height: 100%;\n width: 100%;\n}\n\n#_row {\n display: grid;\n height: 100%;\n grid-template-columns: repeat(3, 1fr);\n}\n\n._img {\n height: 100%;\n width: 100%;\n object-fit: contain;\n display: block;\n}\n\n.wrapper {\n overflow: hidden;\n position: relative;\n}\n\nbody {\n margin: 0;\n padding: 0;\n border: 0;\n}\n<div id=\"_parent\">\n <div id=\"_grid\">\n <div id=\"_row\">\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=1\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=2\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=3\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=4\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=5\" 
class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=6\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=7\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=8\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=9\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=10\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=11\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=12\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=13\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=14\" class=\"_img\">\n </div>\n <div class=\"wrapper\">\n <img id=\"\" src=\"//placeimg.com/295/420?text=15\" class=\"_img\">\n </div>\n </div>\n </div>\n</div>\n\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074632472_css_html.txt
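For reference, a sketch of the asker's accepted fix in isolation — dropping overflow: hidden and instead preventing each grid cell from shrinking below its content, so absolutely positioned labels can spill out while the images stay fully visible (selector and properties as in the question):

.wrapper {
  display: flex;
  justify-content: center;
  align-items: flex-end;
  position: relative;
  /* was: overflow: hidden; */
  min-height: min-content; /* the grid track can no longer crop the image */
}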
Q: np.genfromtxt could not convert string to float I was writing this code to make some data graphs on my jupyter notebook, and when I tried bringing in data from a csv file, I got a "could not convert string to float" error. So here's my code: phot_g = np.genfromtxt('gaia_hyades_search.csv', dtype='str', delimiter=",", skip_header=1, usecols=(6), unpack=True) phot_bp = np.genfromtxt('gaia_hyades_search.csv', dtype='str', delimiter=",", skip_header=1, usecols=(7), unpack=True) phot_rp = np.genfromtxt('gaia_hyades_search.csv', dtype='str', delimiter=",", skip_header=1, usecols=(8), unpack=True) phot_g = phot_g.astype(np.float64) phot_bp = phot_bp.astype(np.float64) phot_rp = phot_rp.astype(np.float64) And here's my error: ValueError Traceback (most recent call last) /tmp/ipykernel_63/3948901710.py in <module> ---> 18 phot_g = phot_g.astype(np.float64) 19 phot_bp = phot_bp.astype(np.float64) 20 phot_rp = phot_rp.astype(np.float64 ValueError: could not convert string to float: '' I've tried searching the error up, but a lot of the solutions I've gotten have been for numpy.loadtxt, and moreover, they don't seem to help me at all. Any help would be greatly appreciated. By the way, the error shows up for all three lines of code (phot_g, phot_bp, and phot_rp) A: Is that the full error message? I get more information when I try to recreate the error: works: In [104]: np.array(['1','2']).astype(float) Out[104]: array([1., 2.]) doesn't: In [105]: np.array(['1','2','two']).astype(float) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [105], in <cell line: 1>() ----> 1 np.array(['1','2','two']).astype(float) ValueError: could not convert string to float: 'two' See the 'two'! That tells me exactly what string is causing the problem. If a line (or more) has two delimiters next to each other, the string array could end up with ''. which can't be converted to a float: In [109]: np.array('1,2,,'.split(',')).astype(float) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [109], in <cell line: 1>() ----> 1 np.array('1,2,,'.split(',')).astype(float) ValueError: could not convert string to float: '' genfromtxt has some ability to fill missing data. pandas csv reader is even better for that. genfromtxt with 'dtype=float' (the default case), will put np.nan in the array when it can't make a float of the input.
np.genfromtxt could not convert string to float
I was writing this code to make some data graphs on my jupyter notebook, and when I tried bringing in data from a csv file, I got a "could not convert string to float" error. So here's my code: phot_g = np.genfromtxt('gaia_hyades_search.csv', dtype='str', delimiter=",", skip_header=1, usecols=(6), unpack=True) phot_bp = np.genfromtxt('gaia_hyades_search.csv', dtype='str', delimiter=",", skip_header=1, usecols=(7), unpack=True) phot_rp = np.genfromtxt('gaia_hyades_search.csv', dtype='str', delimiter=",", skip_header=1, usecols=(8), unpack=True) phot_g = phot_g.astype(np.float64) phot_bp = phot_bp.astype(np.float64) phot_rp = phot_rp.astype(np.float64) And here's my error: ValueError Traceback (most recent call last) /tmp/ipykernel_63/3948901710.py in <module> ---> 18 phot_g = phot_g.astype(np.float64) 19 phot_bp = phot_bp.astype(np.float64) 20 phot_rp = phot_rp.astype(np.float64 ValueError: could not convert string to float: '' I've tried searching the error up, but a lot of the solutions I've gotten have been for numpy.loadtxt, and moreover, they don't seem to help me at all. Any help would be greatly appreciated. By the way, the error shows up for all three lines of code (phot_g, phot_bp, and phot_rp)
[ "Is that the full error message? I get more information when I try to recreate the error:\nworks:\nIn [104]: np.array(['1','2']).astype(float)\nOut[104]: array([1., 2.])\n\ndoesn't:\nIn [105]: np.array(['1','2','two']).astype(float)\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\nInput In [105], in <cell line: 1>()\n----> 1 np.array(['1','2','two']).astype(float)\n\nValueError: could not convert string to float: 'two'\n\nSee the 'two'! That tells me exactly what string is causing the problem.\nIf a line (or more) has two delimiters next to each other, the string array could end up with ''. which can't be converted to a float:\nIn [109]: np.array('1,2,,'.split(',')).astype(float)\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\nInput In [109], in <cell line: 1>()\n----> 1 np.array('1,2,,'.split(',')).astype(float)\n\nValueError: could not convert string to float: ''\n\ngenfromtxt has some ability to fill missing data. pandas csv reader is even better for that.\ngenfromtxt with 'dtype=float' (the default case), will put np.nan in the array when it can't make a float of the input.\n" ]
[ 0 ]
[]
[]
[ "genfromtxt", "jupyter_notebook", "numpy" ]
stackoverflow_0074679768_genfromtxt_jupyter_notebook_numpy.txt
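A minimal sketch of the direction the answer suggests: let genfromtxt parse floats directly, so empty fields become nan instead of '' strings that later break astype. The file name and column indices are taken from the question.

import numpy as np

phot_g, phot_bp, phot_rp = np.genfromtxt(
    "gaia_hyades_search.csv",
    dtype=np.float64,       # empty/unparseable fields become np.nan
    delimiter=",",
    skip_header=1,
    usecols=(6, 7, 8),
    unpack=True,
)

# Optionally drop rows with any missing magnitude before plotting:
ok = ~(np.isnan(phot_g) | np.isnan(phot_bp) | np.isnan(phot_rp))
phot_g, phot_bp, phot_rp = phot_g[ok], phot_bp[ok], phot_rp[ok]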
Q: Creating a 3D Container in flutter How can I create a container like this: I also want to rotate it on the x and y axes. I need to make all the sides visible while rendering it. A: If you have a 3-D model object, you can use this package in your Flutter app: https://pub.dev/packages/flutter_3d_obj
Creating a 3D Container in flutter
How can I create a container like this: I also want to rotate it on the x and y axes. I need to make all the sides visible while rendering it.
[ "If you have a 3-D model object, you can use this package in your Flutter app:\nhttps://pub.dev/packages/flutter_3d_obj\n" ]
[ 0 ]
[ "I will need it in the next days. If there is a good alternative method, we are waiting for your answers. thanks..\n" ]
[ -2 ]
[ "flutter", "flutter_animation" ]
stackoverflow_0074679889_flutter_flutter_animation.txt
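If a tilted box (rather than a fully modelled cube with visible side faces) is enough, a minimal package-free sketch using Flutter's own Transform with a perspective entry — an assumption on my part, since real side faces still need a 3D renderer such as the package above:

import 'package:flutter/material.dart';

class TiltedBox extends StatelessWidget {
  const TiltedBox({super.key});

  @override
  Widget build(BuildContext context) {
    return Transform(
      alignment: Alignment.center,
      transform: Matrix4.identity()
        ..setEntry(3, 2, 0.001) // perspective, so the rotation looks 3D
        ..rotateX(0.4)          // radians around the x-axis
        ..rotateY(0.6),         // radians around the y-axis
      child: Container(width: 120, height: 120, color: Colors.blueGrey),
    );
  }
}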
Q: How to run 2 commands from 1 line I have 2 lines of code. I need the 1st line to run and, while it is still running, start the 2nd. The first line runs a powershell script that keeps the pc active. The second goes to the main script. No matter how I adjust it, it runs the 1st line and doesn't go to the 2nd. &, &&, | have not worked when combining to 1 line. Running separately, they work, but I need it to go back to Master when the 1st script starts. I forgot to mention, the powershell script is a repeating script that sends characters and never ends till you stop it. Powershell.exe -executionpolicy remotesigned -File \\XXX-XXXXX\share\public\it\Scripts_For_Workstations\scripts\keep_alive\mouse1.ps1 call \\XXX-XXXXX\share\public\it\Scripts_For_Workstations\XXXXXX_Master_Script.bat :SUB_Start A: You can run 2 files at once, which contain one line of commands each; here is an example: EXAMPLE 1: Main file: @echo off cls start file1 start file2 cls <Perform some actions> File1: <Perform some action> File2: <Perform some action> EXAMPLE 2: Main file: @echo off cls start file1 cls <Perform some actions> File1: <Perform some action>
How to run 2 commands from 1 line
I have 2 lines of code. I need the 1st line to run and, while it is still running, start the 2nd. The first line runs a powershell script that keeps the pc active. The second goes to the main script. No matter how I adjust it, it runs the 1st line and doesn't go to the 2nd. &, &&, | have not worked when combining to 1 line. Running separately, they work, but I need it to go back to Master when the 1st script starts. I forgot to mention, the powershell script is a repeating script that sends characters and never ends till you stop it. Powershell.exe -executionpolicy remotesigned -File \\XXX-XXXXX\share\public\it\Scripts_For_Workstations\scripts\keep_alive\mouse1.ps1 call \\XXX-XXXXX\share\public\it\Scripts_For_Workstations\XXXXXX_Master_Script.bat :SUB_Start
[ "You can run 2 files at once, whitch contains one line of commands, here is an example:\nEXAMPLE 1:\nMain file:\n@echo off cls start file1 start file2 cls <Perform some actions>\nFile1:\n<Perform some action>\nFile2:\n<Perform some action>\nEXAMPLE 2:\nMain file:\n@echo off cls start file1 cls <Perform some actions>\nFile1:\n<Perform some action>\n" ]
[ 0 ]
[]
[]
[ "batch_file", "call", "command" ]
stackoverflow_0074632703_batch_file_call_command.txt
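A sketch of the accepted idea applied to the actual commands from the question: START launches the never-ending PowerShell script in its own window, so the batch file immediately continues to the CALL. Paths are copied from the question as-is (the XXX parts are the asker's own placeholders).

@echo off
rem START needs an (empty) window title argument when the next argument is quoted.
start "" Powershell.exe -executionpolicy remotesigned -File "\\XXX-XXXXX\share\public\it\Scripts_For_Workstations\scripts\keep_alive\mouse1.ps1"
call "\\XXX-XXXXX\share\public\it\Scripts_For_Workstations\XXXXXX_Master_Script.bat"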
Q: C++ using shared_ptr/unique_ptr to automatically close win32 HANDLE I am unaware of a good way to check this myself, so I am asking here. I am pretty new to smart pointers, so my question is: In this code example, will all handles be automatically closed? Is there something else that should be done? #include <stdio.h> #include <windows.h> #include <fileapi.h> #include <stdlib.h> #include <string.h> #include <string> #include <memory> struct Deleter { void operator()(HANDLE* h) { CloseHandle(*h); } }; void Close(HANDLE* handle) { CloseHandle(*handle); } void helper() { std::wstring str = L"Project2.exe 30 30"; STARTUPINFO si = { sizeof(si) }; PROCESS_INFORMATION pi; CreateProcess(NULL, str.data(), nullptr, nullptr, false, NORMAL_PRIORITY_CLASS, nullptr, nullptr, &si, &pi); STARTUPINFO li = { sizeof(si) }; PROCESS_INFORMATION ki; CreateProcess(NULL, str.data(), nullptr, nullptr, false, NORMAL_PRIORITY_CLASS, nullptr, nullptr, &li, &ki); std::unique_ptr<HANDLE, Deleter> xx = std::unique_ptr<HANDLE, Deleter>(&pi.hProcess); std::unique_ptr<HANDLE, Deleter> yy(&pi.hThread); std::shared_ptr<HANDLE> zz(&ki.hProcess, Close); std::shared_ptr<HANDLE> ww(&ki.hThread, Close); } int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, PWSTR pCmdLine, int nCmdShow) { helper(); } Unfortunately, using CHandle is not possible for me. A: Yes, all of the handles will be automatically closed in this code. The std::unique_ptr and std::shared_ptr objects use the provided deleter functions to automatically close the handles when the pointers go out of scope. This ensures that the handles are properly closed and resources are freed even if an exception is thrown. One thing to note is that the code is passing the address of the HANDLE objects directly to the smart pointers, instead of the value of the HANDLE objects themselves. This is not a good practice because it can lead to undefined behavior, since the HANDLE objects are local variables and will be destroyed when the helper() function returns. Instead, the smart pointers should be constructed with the value of the HANDLE objects, like this: std::unique_ptr<HANDLE, Deleter> xx(pi.hProcess); std::unique_ptr<HANDLE, Deleter> yy(pi.hThread); std::shared_ptr<HANDLE> zz(ki.hProcess, Close); std::shared_ptr<HANDLE> ww(ki.hThread, Close); This will ensure that the smart pointers manage the lifetime of the HANDLE objects correctly, and the handles will be properly closed when the pointers go out of scope.
C++ using shared_ptr/unique_ptr to automatically close win32 HANDLE
I am unaware of a good way to check this myself, so I am asking here. I am pretty new to smart pointers, so my question is: In this code example, will all handles be automatically closed? Is there something else that should be done? #include <stdio.h> #include <windows.h> #include <fileapi.h> #include <stdlib.h> #include <string.h> #include <string> #include <memory> struct Deleter { void operator()(HANDLE* h) { CloseHandle(*h); } }; void Close(HANDLE* handle) { CloseHandle(*handle); } void helper() { std::wstring str = L"Project2.exe 30 30"; STARTUPINFO si = { sizeof(si) }; PROCESS_INFORMATION pi; CreateProcess(NULL, str.data(), nullptr, nullptr, false, NORMAL_PRIORITY_CLASS, nullptr, nullptr, &si, &pi); STARTUPINFO li = { sizeof(si) }; PROCESS_INFORMATION ki; CreateProcess(NULL, str.data(), nullptr, nullptr, false, NORMAL_PRIORITY_CLASS, nullptr, nullptr, &li, &ki); std::unique_ptr<HANDLE, Deleter> xx = std::unique_ptr<HANDLE, Deleter>(&pi.hProcess); std::unique_ptr<HANDLE, Deleter> yy(&pi.hThread); std::shared_ptr<HANDLE> zz(&ki.hProcess, Close); std::shared_ptr<HANDLE> ww(&ki.hThread, Close); } int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, PWSTR pCmdLine, int nCmdShow) { helper(); } Unfortunately, using CHandle is not possible for me.
[ "Yes, all of the handles will be automatically closed in this code. The std::unique_ptr and std::shared_ptr objects use the provided deleter functions to automatically close the handles when the pointers go out of scope. This ensures that the handles are properly closed and resources are freed even if an exception is thrown.\nOne thing to note is that the code is passing the address of the HANDLE objects directly to the smart pointers, instead of the value of the HANDLE objects themselves. This is not a good practice because it can lead to undefined behavior, since the HANDLE objects are local variables and will be destroyed when the helper() function returns. Instead, the smart pointers should be constructed with the value of the HANDLE objects, like this:\nstd::unique_ptr<HANDLE, Deleter> xx(pi.hProcess);\nstd::unique_ptr<HANDLE, Deleter> yy(pi.hThread);\nstd::shared_ptr<HANDLE> zz(ki.hProcess, Close);\nstd::shared_ptr<HANDLE> ww(ki.hThread, Close);\n\nThis will ensure that the smart pointers manage the lifetime of the HANDLE objects correctly, and the handles will be properly closed when the pointers go out of scope.\n" ]
[ 0 ]
[]
[]
[ "c++", "smart_pointers", "winapi" ]
stackoverflow_0074680125_c++_smart_pointers_winapi.txt
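A common alternative sketch for this pattern (my addition, not from the answer above): give the deleter a pointer alias of HANDLE so std::unique_ptr stores the handle value itself, which avoids both the address-of-local-member issue and the HANDLE* indirection entirely.

#include <windows.h>
#include <memory>

struct HandleDeleter
{
    // With this alias, unique_ptr stores a HANDLE directly instead of a HANDLE*.
    using pointer = HANDLE;
    void operator()(HANDLE h) const noexcept
    {
        if (h) CloseHandle(h);
    }
};

using UniqueHandle = std::unique_ptr<HANDLE, HandleDeleter>;

// Usage after CreateProcess succeeds:
//   UniqueHandle process(pi.hProcess);
//   UniqueHandle thread(pi.hThread);
// Both handles are closed automatically when the objects go out of scope.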
Q: Getting a doubled, mirrored image on the output of IDFT I would appreciate some help understanding the output of a small educational program I put together. I am new to OpenCV and don't have much C++ experience. The goal of the script is to perform the following: Load an image Perform DFT on the image Apply a circular binary mask to the spectrum, where the radius of the circle can be increased by hitting a key on the keyboard (essentially applying a very crude filter to the image) Display the result of the inverse transform of the spectrum after the mask was applied I have the basic functionality working: I can load the image, perform DFT, view the output image and increase the radius of the circle (advancing through a for-loop with the circle radius following i), and see the result of the inverse transform of the modified spectrum. However I do not understand why the output is showing a vertically flipped copy of the input superimposed on the image (see example below with Lena.png). This is not my intended result. When I imshow() the inverse DFT result without applying the mask, I get a normal, non-doubled image. But after applying the mask, the IDFT output looks like this: I am not looking for a quick solution: I would really appreciate if someone more experienced could ask leading questions to help me understand this result so that I can try to fix it myself. My code: #include <opencv2/core/core.hpp> #include <opencv2/highgui.hpp> #include <opencv2/imgcodecs.hpp> #include <opencv2/imgproc.hpp> #include <iostream> using namespace cv; void expand_img_to_optimal(Mat &padded, Mat &img); void fourier_transform(Mat &img); int main(int argc, char **argv) { Mat input_img; input_img = imread("Lena.png" , IMREAD_GRAYSCALE); if (input_img.empty()) { fprintf(stderr, "Could not Open image\n\n"); return -1; } fourier_transform(input_img); return 0; } void fourier_transform(Mat &img) { Mat padded; expand_img_to_optimal(padded, img); Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)}; Mat complexI; merge(planes, 2, complexI); dft(complexI, complexI, DFT_COMPLEX_OUTPUT); // For-loop to increase mask circle radius incrementally for (int i=0; i<400; i+=10) { Mat mask = Mat::ones(complexI.size(), complexI.type()); mask.convertTo(mask, CV_8U); Mat dest = Mat::ones(complexI.size(), complexI.type()); circle(mask, Point(mask.cols/2, mask.rows/2), i, 0, -1, 8, 0); complexI.copyTo(dest, mask); Mat inverseTransform; idft(dest, inverseTransform, DFT_INVERSE|DFT_REAL_OUTPUT); normalize(inverseTransform, inverseTransform, 0, 1, NORM_MINMAX); imshow("Reconstructed", inverseTransform); waitKey(0); } } void expand_img_to_optimal(Mat &padded, Mat &img) { int row = getOptimalDFTSize(img.rows); int col = getOptimalDFTSize(img.cols); copyMakeBorder(img, padded, 0, row - img.rows, 0, col - img.cols, BORDER_CONSTANT, Scalar::all(0)); } A: This happens because you are inverse-transforming a frequency spectrum that is not conjugate-symmetric around the origin. The origin of the frequency domain image is the top-left pixel. Your disk mask must be centered there. The frequency domain is periodic, so that the part of the mask that extends to the left of the image wraps around and comes in to the right edge, same with top and bottom edges. The easiest way to generate a proper mask is to create it with the origin at (mask.cols/2, mask.rows/2), like you already do, and then apply the ifftshift operation. OpenCV doesn’t have a ifftshift function, this answer has code that implements the ifftshift correctly. 
A: Firstly I'd like to thank @Cris Luengo for his helpful input on implementing the ifftshift in OpenCV. In the end, the problem with my code was in this line: Mat mask = Mat::ones(complexI.size(), complexI.type()); Instead of using the type of complexI, it looks like I should have used the type of img: Mat mask = Mat::ones(complexI.size(), img.type()); Why? I'm not sure yet. Still trying to understand. Here is my complete code that is working how I intended: #include <opencv2/core/core.hpp> #include <opencv2/highgui.hpp> #include <opencv2/imgcodecs.hpp> #include <opencv2/imgproc.hpp> #include <iostream> using namespace cv; void expand_img_to_optimal(Mat &padded, Mat &img); void fourier_transform(Mat &img); void ifft_shift(Mat &mask); int main(int argc, char **argv) { Mat input_img; input_img = imread("Lena.png" , IMREAD_GRAYSCALE); if (input_img.empty()) { fprintf(stderr, "Could not Open image\n\n"); return -1; } fourier_transform(input_img); return 0; } void fourier_transform(Mat &img) { Mat padded; expand_img_to_optimal(padded, img); Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)}; Mat complexI; merge(planes, 2, complexI); dft(complexI, complexI, DFT_COMPLEX_OUTPUT); for (float i=0; i<4000; i+=2) { // Create disk mask matrix Mat mask = Mat::ones(complexI.size(), CV_8U); circle(mask, Point(mask.cols/2, mask.rows/2), i, 0, -1, 8, 0); // Perform ifft shift ifft_shift(mask); // Destination matrix for masked spectrum Mat dest; complexI.copyTo(dest, mask); // Perform inverse DFT Mat inverseTransform; idft(dest, inverseTransform, DFT_INVERSE|DFT_REAL_OUTPUT); normalize(inverseTransform, inverseTransform, 0, 1, NORM_MINMAX); imshow("Reconstructed", inverseTransform); waitKey(0); } } void expand_img_to_optimal(Mat &padded, Mat &img) { int row = getOptimalDFTSize(img.rows); int col = getOptimalDFTSize(img.cols); copyMakeBorder(img, padded, 0, row - img.rows, 0, col - img.cols, BORDER_CONSTANT, Scalar::all(0)); } void ifft_shift(Mat &mask) { // input sizes int sx = mask.cols; int sy = mask.rows; // input origin int cx = sx / 2; int cy = sy / 2; // split the quadrants Mat top_left(mask, Rect(0, 0, cx, cy)); Mat top_right(mask, Rect(cx, 0, sx - cx, cy)); Mat bottom_left(mask, Rect(0, cy, cx, sy - cy)); Mat bottom_right(mask, Rect(cx, cy, sx - cx, sy - cy)); // merge the quadrants in right order Mat tmp1, tmp2; hconcat(bottom_right, bottom_left, tmp1); hconcat(top_right, top_left, tmp2); vconcat(tmp1, tmp2, mask); }
Getting a doubled, mirrored image on the output of IDFT
I would appreciate some help understanding the output of a small educational program I put together. I am new to OpenCV and don't have much C++ experience. The goal of the script is to perform the following: Load an image Perform DFT on the image Apply a circular binary mask to the spectrum, where the radius of the circle can be increased by hitting a key on the keyboard (essentially applying a very crude filter to the image) Display the result of the inverse transform of the spectrum after the mask was applied I have the basic functionality working: I can load the image, perform DFT, view the output image and increase the radius of the circle (advancing through a for-loop with the circle radius following i), and see the result of the inverse transform of the modified spectrum. However I do not understand why the output is showing a vertically flipped copy of the input superimposed on the image (see example below with Lena.png). This is not my intended result. When I imshow() the inverse DFT result without applying the mask, I get a normal, non-doubled image. But after applying the mask, the IDFT output looks like this: I am not looking for a quick solution: I would really appreciate if someone more experienced could ask leading questions to help me understand this result so that I can try to fix it myself. My code: #include <opencv2/core/core.hpp> #include <opencv2/highgui.hpp> #include <opencv2/imgcodecs.hpp> #include <opencv2/imgproc.hpp> #include <iostream> using namespace cv; void expand_img_to_optimal(Mat &padded, Mat &img); void fourier_transform(Mat &img); int main(int argc, char **argv) { Mat input_img; input_img = imread("Lena.png" , IMREAD_GRAYSCALE); if (input_img.empty()) { fprintf(stderr, "Could not Open image\n\n"); return -1; } fourier_transform(input_img); return 0; } void fourier_transform(Mat &img) { Mat padded; expand_img_to_optimal(padded, img); Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)}; Mat complexI; merge(planes, 2, complexI); dft(complexI, complexI, DFT_COMPLEX_OUTPUT); // For-loop to increase mask circle radius incrementally for (int i=0; i<400; i+=10) { Mat mask = Mat::ones(complexI.size(), complexI.type()); mask.convertTo(mask, CV_8U); Mat dest = Mat::ones(complexI.size(), complexI.type()); circle(mask, Point(mask.cols/2, mask.rows/2), i, 0, -1, 8, 0); complexI.copyTo(dest, mask); Mat inverseTransform; idft(dest, inverseTransform, DFT_INVERSE|DFT_REAL_OUTPUT); normalize(inverseTransform, inverseTransform, 0, 1, NORM_MINMAX); imshow("Reconstructed", inverseTransform); waitKey(0); } } void expand_img_to_optimal(Mat &padded, Mat &img) { int row = getOptimalDFTSize(img.rows); int col = getOptimalDFTSize(img.cols); copyMakeBorder(img, padded, 0, row - img.rows, 0, col - img.cols, BORDER_CONSTANT, Scalar::all(0)); }
[ "This happens because you are inverse-transforming a frequency spectrum that is not conjugate-symmetric around the origin.\nThe origin of the frequency domain image is the top-left pixel. Your disk mask must be centered there. The frequency domain is periodic, so that the part of the mask that extends to the left of the image wraps around and comes in to the right edge, same with top and bottom edges.\nThe easiest way to generate a proper mask is to\n\ncreate it with the origin at (mask.cols/2, mask.rows/2), like you already do, and then\napply the ifftshift operation.\n\nOpenCV doesn’t have a ifftshift function, this answer has code that implements the ifftshift correctly.\n", "Firstly I'd like to thank @Cris Luengo for his helpful input on implementing the ifftshift in OpenCV.\nIn the end, the problem with my code was in this line:\nMat mask = Mat::ones(complexI.size(), complexI.type());\n\nInstead of using the type of complexI, it looks like I should have used the type of img:\nMat mask = Mat::ones(complexI.size(), img.type());\n\nWhy? I'm not sure yet. Still trying to understand. Here is my complete code that is working how I intended:\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui.hpp>\n#include <opencv2/imgcodecs.hpp>\n#include <opencv2/imgproc.hpp>\n#include <iostream>\n\nusing namespace cv;\n\nvoid expand_img_to_optimal(Mat &padded, Mat &img);\nvoid fourier_transform(Mat &img);\nvoid ifft_shift(Mat &mask); \n\nint main(int argc, char **argv)\n{\n Mat input_img;\n input_img = imread(\"Lena.png\" , IMREAD_GRAYSCALE);\n\n if (input_img.empty())\n {\n fprintf(stderr, \"Could not Open image\\n\\n\");\n return -1;\n }\n\n fourier_transform(input_img);\n return 0;\n}\n\nvoid fourier_transform(Mat &img)\n{\n Mat padded;\n expand_img_to_optimal(padded, img);\n\n Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};\n Mat complexI;\n merge(planes, 2, complexI);\n\n dft(complexI, complexI, DFT_COMPLEX_OUTPUT);\n\n for (float i=0; i<4000; i+=2) {\n // Create disk mask matrix\n Mat mask = Mat::ones(complexI.size(), CV_8U);\n circle(mask, Point(mask.cols/2, mask.rows/2), i, 0, -1, 8, 0);\n\n // Perform ifft shift\n ifft_shift(mask);\n\n // Destination matrix for masked spectrum\n Mat dest;\n complexI.copyTo(dest, mask);\n\n // Perform inverse DFT\n Mat inverseTransform;\n idft(dest, inverseTransform, DFT_INVERSE|DFT_REAL_OUTPUT);\n normalize(inverseTransform, inverseTransform, 0, 1, NORM_MINMAX);\n imshow(\"Reconstructed\", inverseTransform);\n waitKey(0);\n }\n}\n\nvoid expand_img_to_optimal(Mat &padded, Mat &img) {\n int row = getOptimalDFTSize(img.rows);\n int col = getOptimalDFTSize(img.cols);\n copyMakeBorder(img, padded, 0, row - img.rows, 0, col - img.cols, BORDER_CONSTANT, Scalar::all(0));\n}\n\nvoid ifft_shift(Mat &mask) {\n // input sizes\n int sx = mask.cols;\n int sy = mask.rows;\n\n // input origin\n int cx = sx / 2;\n int cy = sy / 2;\n\n // split the quadrants\n Mat top_left(mask, Rect(0, 0, cx, cy));\n Mat top_right(mask, Rect(cx, 0, sx - cx, cy));\n Mat bottom_left(mask, Rect(0, cy, cx, sy - cy));\n Mat bottom_right(mask, Rect(cx, cy, sx - cx, sy - cy));\n\n // merge the quadrants in right order\n Mat tmp1, tmp2;\n hconcat(bottom_right, bottom_left, tmp1);\n hconcat(top_right, top_left, tmp2);\n vconcat(tmp1, tmp2, mask);\n}\n\n" ]
[ 3, 1 ]
[]
[]
[ "c++", "dft", "fft", "opencv" ]
stackoverflow_0074674289_c++_dft_fft_opencv.txt
Q: How do I use if-then-else statement with no else condition in Haskell? I have a list of relations and I wish to print the names of all fathers. Since there's no else condition, the following code doesn't work: relations = [("father", "arushi", "anandan"), ("mother", "arushi", "abigale"), ("father", "anandan", "ayuta"), ("mother", "anandan", "akanksha")] father ((r, c, f):xs) = if r == "father" then print(f) main = do father (relations) I do not wish to put any statement after else. A: Too bad, all ifs come with elses. But that's okay, there's a distinguished do-nothing IO action. father ((r, c, f):xs) = if r == "father" then print f else return () There are many other ways to skin this cat. One is pattern matching. father (("father", c, f):xs) = print f father ((r, c, f):xs) = return () Another that is specific to monadic actions is to use when. father ((r, c, f):xs) = when (r == "father") (print f) Of course, that's just hiding the else, which is again return (): when p act = if p then act else pure () -- okay, it's actually pure, not return A: The idiomatic Haskell way to solve such issues is to avoid mixing computation and I/O, when possible. In this case, instead of "printing the names of all fathers", you can first "compute the names of all fathers" (no I/O here) and then "print the computed names" (I/O here) relations = [ ("father", "arushi", "anandan") , ("mother", "arushi", "abigale") , ("father", "anandan", "ayuta") , ("mother", "anandan", "akanksha") ] -- compute only the fathers fathers = [ f | ("father", _, f) <- relations ] -- print them main :: IO () main = mapM_ putStrLn fathers No if needed, since mapM_ iterates over the list for us, and all the list entries have to be printed. A: Every if must have an else. father ((r, c, f):xs) = if r == "father" then print f else _what If you try to compile that, you'll be informed that there's a hole _what :: IO () So you need to manufacture something of that type. Fortunately, that's easy: father ((r, c, f):xs) = if r == "father" then print f else pure () pure x does nothing and returns x. Since what you're trying to do is quite common, there are two functions specifically designed for the task: when :: Applicative f => Bool -> f () -> f () when b m = if b then m else pure () unless :: Applicative f => Bool -> f () -> f () unless = when . not You can find both of these functions in Control.Monad. father ((r, c, f):xs) = when (r == "father") $ print f A: You can write a function that always writes the name, but then ensure it only gets called on values containing father. relations :: [(String,String,String)] relations = [("father", "arushi", "anandan") ,("mother", "arushi", "abigale") ,("father", "anandan", "ayuta") ,("mother", "anandan", "akanksha") ] printName :: (String,String,String) -> IO () printName (_, _, name) = print name printFathers :: [(String,String,String)] -> [IO ()] printFathers = fmap printName . filter (\(f, _, _) -> f == "father") main = sequence (printFathers relations) The definition of filter hides the logic of skipping certain elements of the list. The argument to filter always returns either True or False, but the result of filter only contains those elements for which you want to call print. (sequence, here, just turns the list of IO values into the single IO value that main must be by "swapping" IO and []. You could incorporate this into printName by defining it as sequence . fmap printName . ..., and replace sequence . fmap foo with traverse foo.) 
Note that if foo then bar else baz is syntactic sugar for a complete case expression case foo of True -> bar False -> baz However, a case expression doesn't have to handle every possible value of the foo argument. You could write father ((r, c, f):xs) = (case r of "father" -> print f) : father xs It would be instructive, though, to see what happens when r doesn't match "father". A: I feel the need to explain why an if must have an else in Haskell. Haskell is an implementation of typed lambda calculus, and in lambda calculus we have expressions and values, nothing else. In it we evaluate/reduce expressions to values or into expressions that can't be reduced any further. Now in typed lambda calculus we add types and abstractions, but we still need to evaluate down to values and expressions, one of these expressions being if predicate then value else value. This if expression must reduce to a value, therefore both branches of the if expression must reduce to values of the same type. If we had an "if predicate then value" it means we would have a branch that doesn't reduce to a value. You can use run, reduce and evaluate interchangeably in the context of this answer. When we run Haskell code we are reducing lambda terms into values or expressions that can't be reduced any further. The compiler exists to help us write valid lambda terms. Going by lambda calculus we see that the if statement must, when evaluated, reduce to a value (or be capable of doing so), and because Haskell is an implementation of typed lambda calculus, an if expression in Haskell without an else wouldn't have the possibility of evaluating down to a value all the time. TL;DR The "if ... then ... else" statement should, when evaluated, reduce to a value. As long as both branches of the if statement evaluate to the same type it evaluates correctly. If any branch doesn't evaluate to a value, or the branches evaluate to values of different types, that is not a valid lambda term and the code will not typecheck. A: Was struggling with this style as well, & with Haskell formatting requirements. But I found that instead of placing emphasis on a required [else], one can use [else do] to include all following lines of code without an additional indentation on concurrent lines, such as.. main = do --if conditions exit main, if t < 1 || t > 100000 then return () else do --code here, even with the else, -- is only run when if condition above -> is false Proof code can be in simpler form if True then return () else do -- add code below this line to prove return works
How do I use if-then-else statement with no else condition in Haskell?
I have a list of relations and I wish to print the names of all fathers. Since there's no else condition, the following code doesn't work: relations = [("father", "arushi", "anandan"), ("mother", "arushi", "abigale"), ("father", "anandan", "ayuta"), ("mother", "anandan", "akanksha")] father ((r, c, f):xs) = if r == "father" then print(f) main = do father (relations) I do not wish to put any statement after else.
[ "Too bad, all ifs come with elses. But that's okay, there's a distinguished do-nothing IO action.\nfather ((r, c, f):xs) = if r == \"father\" then print f else return ()\n\nThere are many other ways to skin this cat. One is pattern matching.\nfather ((\"father\", c, f):xs) = print f\nfather ((r, c, f):xs) = return ()\n\nAnother that is specific to monadic actions is to use when.\nfather ((r, c, f):xs) = when (r == \"father\") (print f)\n\nOf course, that's just hiding the else, which is again return ():\nwhen p act = if p then act else pure () -- okay, it's actually pure, not return\n\n", "The idiomatic Haskell way to solve such issues is to avoid mixing computation and I/O, when possible.\nIn this case, instead of \"printing the names of all fathers\", you can first \"compute the names of all fathers\" (no I/O here) and then \"print the computed names\" (I/O here)\nrelations = \n [ (\"father\", \"arushi\", \"anandan\")\n , (\"mother\", \"arushi\", \"abigale\")\n , (\"father\", \"anandan\", \"ayuta\")\n , (\"mother\", \"anandan\", \"akanksha\")\n ]\n\n-- compute only the fathers\nfathers = [ f | (\"father\", _, f) <- relations ]\n\n-- print them\nmain :: IO ()\nmain = mapM_ putStrLn fathers\n\nNo if needed, since mapM_ iterates over the list for us, and all the list entries have to be printed.\n", "Every if must have an else.\nfather ((r, c, f):xs) =\n if r == \"father\"\n then print f\n else _what\n\nIf you try to compile that, you'll be informed that there's a hole\n_what :: IO ()\n\nSo you need to manufacture something of that type. Fortunately, that's easy:\nfather ((r, c, f):xs) =\n if r == \"father\"\n then print f\n else pure ()\n\npure x does nothing and returns x.\nSince what you're trying to do is quite common, there are two functions specifically designed for the task:\nwhen :: Applicative f => Bool -> f () -> f ()\nwhen b m = if b then m else pure ()\n\nunless :: Applicative f => Bool -> f () -> f ()\nunless = when . not\n\nYou can find both of these functions in Control.Monad.\nfather ((r, c, f):xs) =\n when (r == \"father\") $ print f\n\n", "You can write a function that always writes the name, but then ensure it only gets called on values containing father.\nrelations :: [(String,String,String)]\nrelations = [(\"father\", \"arushi\", \"anandan\")\n ,(\"mother\", \"arushi\", \"abigale\")\n ,(\"father\", \"anandan\", \"ayuta\")\n ,(\"mother\", \"anandan\", \"akanksha\")\n ]\n\nprintName :: (String,String,String) -> IO ()\nprintName (_, _, name) = print name\n\nprintFathers :: [(String,String,String)] -> [IO ()]\nprintFathers = fmap printName . filter (\\(f, _, _) -> f == \"father\")\n\nmain = sequence (printFathers relations)\n\nThe definition of filter hides the logic of skipping certain elements of the list. The argument to filter always returns either True or False, but the result of filter only contains those elements for which you want to call print.\n(sequence, here, just turns the list of IO values into the single IO value that main must be by \"swapping\" IO and []. You could incorporate this into printName by defining it as sequence . fmap printName . ..., and replace sequence . fmap foo with traverse foo.)\n\nNote that if foo then bar else baz is syntactic sugar for a complete case expression\ncase foo of\n True -> foo\n False -> baz\n\nHowever, a case expression doesn't have to handle every possible value of the foo argument. 
You could write\nfather ((r, c, f):xs) = (case r of \"father\" -> print f) : father xs\n\nIt would be instructive, though, to see what happens when r doesn't match \"father\".\n", "I feel the need to explain why an if must have an else in Haskell.\nHaskell is an implementation of typed lambda calculus and in lambda calculus we have expressions and values nothing else.\nIn it we evaluate/reduce expressions to values or into expressions that can't be reduced any further.\nNow in typed lambda calculus we add types and abstractions but we still to evaluate down to values and expressions one of these expressions being if predicate then value else value.\nThis if expression must reduce to a value therefore both branches of the if expression must reduce to values of the same type.\nIf we had an \"if predicate then value\" it means we would have a branch that doesn't reduce to a value.\n\nyou can use run, reduce and evaluate interchangeably in the context of this answer.\n\nWhen we run Haskell code we are reducing lambda terms into values or expressions that can't be reduced any further.\nThe compiler exists to help us write valid lambda terms.\nGoing by lambda calculus we see that the if statement must, when evaluated, reduce to a value (or be capable of doing so) and because Haskell is implemented typed lambda calculus an if expression in Haskell without an else wouldn't have the possibility of evaluating down to a value all the time.\nTL;DR\nThe \"if ... then ... else\" statement should when evaluated reduce to a value.\nAs long as both branches of the if statement evaluates to the same type it evaluates correctly.\nIf any branch doesn't evaluate to a value or are going to evaluate to values of different types that is not a valid lambda term and the code will not typecheck.\n", "Was struggling with this style as well, & with Haskell formatting requirements. But I found that instead of placing emphasis on a required [else], one can use [else do] to include all following lines of code without an additional indention on concurrent lines, such as..\nmain = do\n --if conditions exit main, \n if t < 1 || t > 100000 \n then return ()\n else\n do\n --code here, even with the else, \n -- is only run when if condition above -> is false\n\nProof code can be in simpler form\nif True\n return ()\nelse\n do\n-- add code below this line to prove return works\n\n" ]
[ 22, 11, 8, 6, 2, 0 ]
[]
[]
[ "haskell" ]
stackoverflow_0050473205_haskell.txt
Q: How can I use Amazon SES with Ghost? I saw at https://ghost.org/docs/config/#mail that SES is allowed. But I edited my config.production.json and ran ghost restart, but Ghost still says: Set up Mailgun to start sending newsletters! The config I used was: "mail": { "from": "My Name <example@example.com>", "transport": "SMTP", "options": { "host": "email-smtp.us-east-1.amazonaws.com", "port": 465, "service": "SES", "auth": { "user": "asdfadsffdsf", "pass": "asdfasdfadfs" } } }, I got the SMTP-ACCESS-KEY-ID and SES-SMTP-SECRET-ACCESS-KEY from https://us-east-1.console.aws.amazon.com/ses/home?region=us-east-1#/smtp What did I do wrong? A: There are a few issues with the configuration you provided: The "from" field should specify the email address that you have verified in Amazon SES, not the name of the sender. The format of the "from" field should be "example@example.com" instead of "My Name example@example.com". The "port" field should be set to "587" instead of "465". The port number "587" is the standard port number for SMTP with TLS encryption, which is required for Amazon SES. The "service" field should be removed, as it is not a valid option for the "options" object in the Ghost mail configuration. The "user" and "pass" fields in the "auth" object should be set to the IAM user's access key ID and secret access key, respectively. These values should be obtained from the AWS IAM page, not the Amazon SES page. Here is an example of how the mail configuration should be updated to fix these issues: "mail": { "from": "example@example.com", "transport": "SMTP", "options": { "host": "email-smtp.us-east-1.amazonaws.com", "port": 587, "auth": { "user": "IAM_USER_ACCESS_KEY_ID", "pass": "IAM_USER_SECRET_ACCESS_KEY" } } }, After making these changes, you should be able to successfully use Amazon SES with Ghost. Note that you may need to enable other settings or update your Amazon SES account to allow sending from Ghost, depending on your AWS account and Amazon SES configuration.Voila!
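A couple of hedged additions, since the exact Ghost version isn't stated. First, Ghost's "Set up Mailgun to start sending newsletters!" banner refers specifically to bulk newsletter delivery, which recent Ghost versions only support through the Mailgun API configured in the admin settings; the mail block in config.production.json covers transactional email (invites, password resets) only, so even a working SES transport will not remove that banner. Second, port 465 is not inherently wrong: it uses implicit TLS, while 587 uses STARTTLS. A quick way to check whether the SES SMTP credentials and port combination work at all, independent of Ghost, is a standalone nodemailer sketch (Ghost uses nodemailer internally; the credential values here are placeholders):

const nodemailer = require("nodemailer");

const transporter = nodemailer.createTransport({
  host: "email-smtp.us-east-1.amazonaws.com",
  port: 465,
  secure: true, // implicit TLS on 465; use port 587 with secure: false for STARTTLS
  auth: {
    user: "SMTP-ACCESS-KEY-ID",        // the SES SMTP credentials, not IAM console keys
    pass: "SES-SMTP-SECRET-ACCESS-KEY",
  },
});

// verify() opens a connection and authenticates without sending any mail
transporter.verify((err) => {
  if (err) console.error("SMTP check failed:", err.message);
  else console.log("SMTP credentials and TLS settings are valid");
});

If verify() succeeds on both 465 with secure: true and 587 with secure: false, the remaining problem is on the Ghost configuration side rather than with SES.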
How can I use Amazon SES with Ghost?
I saw at https://ghost.org/docs/config/#mail that SES is allowed. But I edited my config.production.json and ran ghost restart, but Ghost still says: Set up Mailgun to start sending newsletters! The config I used was: "mail": { "from": "My Name <example@example.com>", "transport": "SMTP", "options": { "host": "email-smtp.us-east-1.amazonaws.com", "port": 465, "service": "SES", "auth": { "user": "asdfadsffdsf", "pass": "asdfasdfadfs" } } }, I got the SMTP-ACCESS-KEY-ID and SES-SMTP-SECRET-ACCESS-KEY from https://us-east-1.console.aws.amazon.com/ses/home?region=us-east-1#/smtp What did I do wrong?
[ "There are a few issues with the configuration you provided:\nThe \"from\" field should specify the email address that you have verified in Amazon SES, not the name of the sender. The format of the \"from\" field should be \"example@example.com\" instead of \"My Name example@example.com\".\nThe \"port\" field should be set to \"587\" instead of \"465\". The port number \"587\" is the standard port number for SMTP with TLS encryption, which is required for Amazon SES.\nThe \"service\" field should be removed, as it is not a valid option for the \"options\" object in the Ghost mail configuration.\nThe \"user\" and \"pass\" fields in the \"auth\" object should be set to the IAM user's access key ID and secret access key, respectively. These values should be obtained from the AWS IAM page, not the Amazon SES page.\nHere is an example of how the mail configuration should be updated to fix these issues:\n\"mail\": {\n \"from\": \"example@example.com\",\n \"transport\": \"SMTP\",\n \"options\": {\n \"host\": \"email-smtp.us-east-1.amazonaws.com\",\n \"port\": 587,\n \"auth\": {\n \"user\": \"IAM_USER_ACCESS_KEY_ID\",\n \"pass\": \"IAM_USER_SECRET_ACCESS_KEY\"\n }\n }\n },\n\nAfter making these changes, you should be able to successfully use Amazon SES with Ghost. Note that you may need to enable other settings or update your Amazon SES account to allow sending from Ghost, depending on your AWS account and Amazon SES configuration.Voila!\n" ]
[ 0 ]
[]
[]
[ "amazon_ses", "amazon_web_services", "ghost_blog" ]
stackoverflow_0074680319_amazon_ses_amazon_web_services_ghost_blog.txt
Q: Smart contract factory using an array of Struct as a parameter I am using a CampaignFactory contract to create multiple instances of the Campaign contract and keep track of them. Every campaign is initialized with an array of Struct called rewards. When I try to create a new Campaign with createCampaign in remix, I have the following error: [vm]from: 0x5B3...eddC4to: CampaignFactory.createCampaign((uint256,uint256,string)[]) 0xd91...39138value: 0 weidata: 0x6d9...00000logs: 0hash: 0xa4d...ad5fd transact to CampaignFactory.createCampaign errored: VM error: revert. Here is my code: // SPDX-License-Identifier: UNLICENSED pragma solidity ^0.8.15; struct Reward { uint256 contribution; uint256 maxNumber; string ImageLink; } contract CampaignFactory { Campaign[] public deployedCampaigns; function createCampaign(Reward[] memory _rewards) public { Campaign newCampaign = new Campaign(msg.sender); for (uint256 i = 0; i < _rewards.length; i++) { newCampaign.createReward( _rewards[i].contribution, _rewards[i].maxNumber, _rewards[i].ImageLink ); } deployedCampaigns.push(newCampaign); } function getDeployedCampaigns() public view returns (Campaign[] memory) { return deployedCampaigns; } } contract Campaign { Reward[] public rewards; address public manager; modifier restricted() { require(msg.sender == manager); _; } constructor(address creator) { manager = creator; } function createReward( uint256 _contribution, uint256 _maxNumber, string memory _imageLink ) public restricted { Reward memory newReward = Reward({ contribution: _contribution, maxNumber: _maxNumber, ImageLink: _imageLink }); rewards.push(newReward); } } A: When you deploy the Campaign contract, the user's address is assigned to manager. Then you're invoking the createReward() function. Its restricted modifier allows the function to be invoked only by the manager. But the function is invoked by your contract - not by the user directly. Which fails the validation and reverts the transaction. You could get the address of the original transaction sender in tx.origin, but using it for authorization is a security risk - https://swcregistry.io/docs/SWC-115 So instead, you can pass the user address in the function argument, and verify it there. 
function createCampaign(Reward[] memory _rewards) public { // pass the factory address to later validate `msg.sender` Campaign newCampaign = new Campaign(address(this), msg.sender); for (uint256 i = 0; i < _rewards.length; i++) { // pass the user address to validate if they're the `manager` newCampaign.createReward( _rewards[i].contribution, _rewards[i].maxNumber, _rewards[i].ImageLink, msg.sender ); } deployedCampaigns.push(newCampaign); } address factory; address public manager; modifier restrictedThroughFactory(address user) { require(msg.sender == factory && user == manager); _; } modifier restricted() { require(msg.sender == manager); _; } constructor(address _factory, address creator) { factory = _factory; manager = creator; } // original function that can be still executed directly by the user function createReward( uint256 _contribution, uint256 _maxNumber, string memory _imageLink ) public restricted { _createReward(_contribution, _maxNumber, _imageLink); } // overloaded function that can be only executed by the Factory contract // reverts if the `user` (can be passed only by the Factory contract) is not the `manager` function createReward( uint256 _contribution, uint256 _maxNumber, string memory _imageLink, address user ) public restrictedThroughFactory(user) { _createReward(_contribution, _maxNumber, _imageLink); } function _createReward( uint256 _contribution, uint256 _maxNumber, string memory _imageLink ) internal { Reward memory newReward = Reward({ contribution: _contribution, maxNumber: _maxNumber, ImageLink: _imageLink }); rewards.push(newReward); }
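To make the failure mode described in the answer concrete, here is a minimal, self-contained sketch (hypothetical contract names) showing how msg.sender shifts when a call goes through an intermediate contract, which is exactly why the restricted modifier reverts when Campaign.createReward is invoked by the factory:

// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.15;

contract Callee {
    event Senders(address msgSender, address txOrigin);

    function whoCalled() external {
        // When invoked via Caller.relay(), msg.sender is the Caller contract,
        // while tx.origin is still the user's EOA (unsafe for auth, see SWC-115).
        emit Senders(msg.sender, tx.origin);
    }
}

contract Caller {
    function relay(Callee target) external {
        target.whoCalled(); // inside this call, msg.sender == address(this)
    }
}

Deploy both in Remix and call relay() from your account: the emitted event shows the intermediate contract's address as msgSender, the same situation that makes require(msg.sender == manager) fail in the original code.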
Smart contract factory using an array of Struct as a parameter
I am using a CampaignFactory contract to create multiple instances of the Campaign contract and keep track of them. Every campaign is initialized with an array of Struct called rewards. When I try to create a new Campaign with createCampaign in remix, I have the following error: [vm]from: 0x5B3...eddC4to: CampaignFactory.createCampaign((uint256,uint256,string)[]) 0xd91...39138value: 0 weidata: 0x6d9...00000logs: 0hash: 0xa4d...ad5fd transact to CampaignFactory.createCampaign errored: VM error: revert. Here is my code: // SPDX-License-Identifier: UNLICENSED pragma solidity ^0.8.15; struct Reward { uint256 contribution; uint256 maxNumber; string ImageLink; } contract CampaignFactory { Campaign[] public deployedCampaigns; function createCampaign(Reward[] memory _rewards) public { Campaign newCampaign = new Campaign(msg.sender); for (uint256 i = 0; i < _rewards.length; i++) { newCampaign.createReward( _rewards[i].contribution, _rewards[i].maxNumber, _rewards[i].ImageLink ); } deployedCampaigns.push(newCampaign); } function getDeployedCampaigns() public view returns (Campaign[] memory) { return deployedCampaigns; } } contract Campaign { Reward[] public rewards; address public manager; modifier restricted() { require(msg.sender == manager); _; } constructor(address creator) { manager = creator; } function createReward( uint256 _contribution, uint256 _maxNumber, string memory _imageLink ) public restricted { Reward memory newReward = Reward({ contribution: _contribution, maxNumber: _maxNumber, ImageLink: _imageLink }); rewards.push(newReward); } }
[ "When you deploy the Campaign contract, the user's address is assigned to manager.\nThen you're invoking the createReward() function. Its restricted modifier allows the function to be invoked only by the manager. But the function is invoked by your contract - not by the user directly. Which fails the validation and reverts the transaction.\n\nYou could get the address of the original transaction sender in tx.origin, but using it for authorization is a security risk - https://swcregistry.io/docs/SWC-115\nSo instead, you can pass the user address in the function argument, and verify it there.\nfunction createCampaign(Reward[] memory _rewards)\n public\n{\n // pass the factory address to later validate `msg.sender`\n Campaign newCampaign = new Campaign(address(this), msg.sender);\n\n for (uint256 i = 0; i < _rewards.length; i++) {\n // pass the user address to validate if they're the `manager`\n newCampaign.createReward(\n _rewards[i].contribution,\n _rewards[i].maxNumber,\n _rewards[i].ImageLink,\n msg.sender\n );\n }\n deployedCampaigns.push(newCampaign);\n}\n\naddress factory;\naddress public manager;\n\nmodifier restrictedThroughFactory(address user) {\n require(msg.sender == factory && user == manager);\n _;\n}\n\nmodifier restricted() {\n require(msg.sender == manager);\n _;\n}\n\nconstructor(address _factory, address creator) {\n factory = _factory;\n manager = creator;\n}\n\n// original function that can be still executed directly by the user\nfunction createReward(\n uint256 _contribution,\n uint256 _maxNumber,\n string memory _imageLink\n) public restricted {\n _createReward(_contribution, _maxNumber, _imageLink);\n}\n\n// overloaded function that can be only executed by the Factory contract\n// reverts if the `user` (can be passed only by the Factory contract) is not the `manager`\nfunction createReward(\n uint256 _contribution,\n uint256 _maxNumber,\n string memory _imageLink,\n address user\n) public restrictedThroughFactory(user) {\n _createReward(_contribution, _maxNumber, _imageLink);\n}\n\nfunction _createReward(\n uint256 _contribution,\n uint256 _maxNumber,\n string memory _imageLink\n) internal {\n Reward memory newReward = Reward({\n contribution: _contribution,\n maxNumber: _maxNumber,\n ImageLink: _imageLink\n });\n\n rewards.push(newReward);\n}\n\n" ]
[ 0 ]
[]
[]
[ "ethereum", "evm", "solidity" ]
stackoverflow_0074679754_ethereum_evm_solidity.txt
Q: How to handle data model with long text column + associated embeded metadata in an Android Room database I'm new to Android, and rather new to SQL in general. I have a data model where I have a Text that consists of TextMetadata as well as a long string, which is the text content itself. So Text { metadata: { author: string, title: string // more associated metadata }, textContent: long string, or potentially array of lines or paragraphs } I'd like to load a list of the metadata for all texts on the App's landing page, without incurring the cost of reading all the long strings (or having operations be slowed down because the table has a column with a long string?). What is the proper pattern here? Should I use two tables, and related them? Or can I use one table/one @Entity, with embedded metadata, and do some fancy stuff in the DAO to just list/sort/operate on the embedded metadata? Most of my background is with NoSQL databases, so I could be thinking about this entirely wrong. Advice on the general best practices here would be helpful, but I guess I have two core questions: Does having a long/very long string/TEXT column cause performance considerations when operating on that specific table/row? Is there a clean way using Kotlin annotations to express embedded metadata that would make it easy to fetch in the DAO, without having use a long SELECT for each individual column? A: To answer your first question, having a long string column in your database table can cause performance considerations when operating on that specific table/row. This is because the database engine needs to load the entire string into memory in order to perform any operations on it. This can be particularly problematic if you have a large number of rows in your table with long strings, as it can cause the database to run out of memory and crash. To avoid this problem, you can use two tables in your database and relate them using a foreign key. The first table would contain the metadata for each text, while the second table would contain the actual text content. This way, you can query the metadata table to get a list of all the texts without having to load the actual text content into memory. To answer your second question, you can use the @Embedded and @Relation annotations in Kotlin to express embedded metadata and define relationships between your database tables. The @Embedded annotation can be used to define a nested object within your entity, while the @Relation annotation can be used to define a relationship between two entities. Here is an example of how you could use these annotations to define your data model: @Entity data class Text( @PrimaryKey val id: Long, val metadata: TextMetadata, val textContent: String ) @Embedded data class TextMetadata( val author: String, val title: String ) You can then use the @Relation annotation in your DAO to define a relationship between the Text and TextMetadata entities. This would allow you to query the TextMetadata table to get a list of all the texts without having to load the actual text content into memory. A: It is generally not recommended to store long strings in a SQL database, as this can lead to performance issues. Instead, you can use a technique called "lazy loading" to load the long strings only when they are needed. This involves storing the long strings in a separate table, and using a foreign key to relate the two tables. To use this technique with Room, you can define two entities: one for the metadata, and one for the long strings. 
The metadata entity can have a foreign key column that refers to the primary key of the long strings table. In your DAO, you can use a LEFT JOIN query to load the metadata and the associated long strings, and then use the @Relation annotation to tell Room how to map the result of the query to your entity objects. Here is an example of how this might look in code: @Entity data class TextMetadata( val author: String, val title: String, // more associated metadata val textContentId: Long ) @Entity data class TextContent( @PrimaryKey val id: Long, val textContent: String ) data class TextWithContent( @Embedded val metadata: TextMetadata, @Relation(parentColumn = "textContentId", entityColumn = "id") val textContent: TextContent ) @Dao interface TextDao { @Query("SELECT * FROM text_metadata LEFT JOIN text_content ON text_metadata.textContentId = text_content.id") fun getAllTexts(): List<TextWithContent> } Using this approach, you can load only the metadata for all texts on the landing page, and then lazy-load the long strings when they are needed. This should improve the performance of your database operations. As for expressing embedded metadata with Kotlin annotations, you can use the @Embedded annotation to tell Room that a property of your entity represents an embedded object. For example, in the code above, the metadata property of the TextWithContent entity is marked with the @Embedded annotation to indicate that it represents an embedded TextMetadata object. I hope this helps! Let me know if you have any other questions. A: This is a good question that is also relevant to other environments. The Core Issue: How to store large data without effecting your database? As a rule of thumb you should avoid storing information in your database that is not queryable. Large strings, images, or event metadata which you will never query - does not belong in your db. I was surprised when I realized how many design patterns there are regarding to mongo db (which are relevant to other noSQL databases as well) So, we know that this data should NOT be stored in the DB. But, because the alternative (file system) is WAY worse than that (unless you would like to implement your own secured file-system-based store) - we should at least try to minimize its footprint. Our Strategy: save large data chunks in a different table without defining it as an entity (there is no need to wrap it as entity anyway) How Are We Going To Do That? Well, thankfully, android room has a direct access to sqLite and it can be used directly (read the docs). This is the place to remind us that android room is built on-top of sqLite - which is (in my own opinion) a fascinating database. I enjoy working with it very much and it's just getting better as the time goes by (personal opinion). Advantages? we are still using android APIs while storing large data in a performant, unified and secure way. yay Steps we are going to perform: Initiate a class which will manage a new database - for storing large data only Define a command that will create our table - constructed of 2 columns key (primary key) - the id of the item value - the item itself In original db for the Text entity - define a column that will hold the id (key) of the large text stored Whenever you save an item to your large items table - get the id and store it in your entity You can of course use only 1 table for this.. but.. I know that sqLite requires a certain amount of understanding and it is NOT as easy as android room so.. 
it's your choice whenever to use 1 or 2 tables in your solution Below is a code that demonstrates the main principal of my proposal object LargeDataContract { // all tables for handling large data will be defined here object TextEntry : BaseColumns { const val TABLE_NAME = "text_entry" const val COLUMN_NAME_KEY = "key" const val COLUMN_NAME_VALUE = "value" } } // in the future - append this "create statement" whenever you add more tables to your database private const val SQL_CREATE_ENTRIES = "CREATE TABLE ${TextEntry.TABLE_NAME} (" + "${TextEntry.COLUMN_NAME_KEY} INTEGER PRIMARY KEY," + "${TextEntry.COLUMN_NAME_VALUE} TEXT)" // create a helper that will assist you to initiate your database properly class LargeDataDbHelper(context: Context) : SQLiteOpenHelper(context, DATABASE_NAME, null, DATABASE_VERSION) { override fun onCreate(db: SQLiteDatabase) { db.execSQL(SQL_CREATE_ENTRIES) } companion object { // If you change the database schema, you must increment the database version. Also - please read `sqLite` documentation to better understand versioning ,upgrade and downgrade operations const val DATABASE_VERSION = 1 const val DATABASE_NAME = "LargeData.db" } } // create an instance and connect to your database val dbHelper = LargeDataDbHelper(context) // write an item to your database val db = dbHelper.writableDatabase val values = ContentValues().apply { put(TextEntry.COLUMN_NAME_VALUE, "some long value goes here") } val key = db?.insert(TextEntry.TABLE_NAME, null, values) // now take the key variable and store it in you entity. this is the only reference you should need Bottom Line: This approach will assist you to gain as much performance as possible while using android APIs. Sure thing, not the most "intuitive" solution, but - this is how we gain performance and making great apps as well as educating ourselves and upgrading our knowledge and skillset. Cheers
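Two hedged notes on the answers above. First, a Room detail: @Embedded annotates the property inside the entity, not the embedded class itself, so the first answer's declaration needs a small correction. Second, Room can map a query result onto any plain data class, which means you can keep a single table and still avoid ever reading the long column on the landing page by selecting only the metadata columns. A sketch (schema and names are assumptions):

@Entity
data class Text(
    @PrimaryKey val id: Long,
    @Embedded val metadata: TextMetadata, // @Embedded goes on the property
    val textContent: String
)

data class TextMetadata(
    val author: String,
    val title: String
)

// A projection class: Room fills it from whatever columns the query returns.
data class TextListing(
    val id: Long,
    val author: String,
    val title: String
)

@Dao
interface TextDao {
    // The long textContent column is never touched here.
    @Query("SELECT id, author, title FROM Text")
    fun getAllListings(): List<TextListing>

    // Load the heavy column only when a single text is opened.
    @Query("SELECT textContent FROM Text WHERE id = :id")
    fun getContent(id: Long): String
}

This sidesteps the two-table bookkeeping entirely, at the cost of the long string living in the same row; whether that matters in practice depends on how SQLite pages your data, so measure before committing to either design.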
How to handle data model with long text column + associated embedded metadata in an Android Room database
I'm new to Android, and rather new to SQL in general. I have a data model where I have a Text that consists of TextMetadata as well as a long string, which is the text content itself. So Text { metadata: { author: string, title: string // more associated metadata }, textContent: long string, or potentially array of lines or paragraphs } I'd like to load a list of the metadata for all texts on the App's landing page, without incurring the cost of reading all the long strings (or having operations be slowed down because the table has a column with a long string?). What is the proper pattern here? Should I use two tables, and related them? Or can I use one table/one @Entity, with embedded metadata, and do some fancy stuff in the DAO to just list/sort/operate on the embedded metadata? Most of my background is with NoSQL databases, so I could be thinking about this entirely wrong. Advice on the general best practices here would be helpful, but I guess I have two core questions: Does having a long/very long string/TEXT column cause performance considerations when operating on that specific table/row? Is there a clean way using Kotlin annotations to express embedded metadata that would make it easy to fetch in the DAO, without having use a long SELECT for each individual column?
[ "To answer your first question, having a long string column in your database table can cause performance considerations when operating on that specific table/row. This is because the database engine needs to load the entire string into memory in order to perform any operations on it. This can be particularly problematic if you have a large number of rows in your table with long strings, as it can cause the database to run out of memory and crash.\nTo avoid this problem, you can use two tables in your database and relate them using a foreign key. The first table would contain the metadata for each text, while the second table would contain the actual text content. This way, you can query the metadata table to get a list of all the texts without having to load the actual text content into memory.\nTo answer your second question, you can use the @Embedded and @Relation annotations in Kotlin to express embedded metadata and define relationships between your database tables. The @Embedded annotation can be used to define a nested object within your entity, while the @Relation annotation can be used to define a relationship between two entities.\nHere is an example of how you could use these annotations to define your data model:\n@Entity\ndata class Text(\n @PrimaryKey val id: Long,\n val metadata: TextMetadata,\n val textContent: String\n)\n\n@Embedded\ndata class TextMetadata(\n val author: String,\n val title: String\n)\n\n\nYou can then use the @Relation annotation in your DAO to define a relationship between the Text and TextMetadata entities. This would allow you to query the TextMetadata table to get a list of all the texts without having to load the actual text content into memory.\n", "It is generally not recommended to store long strings in a SQL database, as this can lead to performance issues. Instead, you can use a technique called \"lazy loading\" to load the long strings only when they are needed. This involves storing the long strings in a separate table, and using a foreign key to relate the two tables.\nTo use this technique with Room, you can define two entities: one for the metadata, and one for the long strings. The metadata entity can have a foreign key column that refers to the primary key of the long strings table. In your DAO, you can use a LEFT JOIN query to load the metadata and the associated long strings, and then use the @Relation annotation to tell Room how to map the result of the query to your entity objects.\nHere is an example of how this might look in code:\n@Entity\ndata class TextMetadata(\n val author: String,\n val title: String,\n // more associated metadata\n val textContentId: Long\n)\n\n@Entity\ndata class TextContent(\n @PrimaryKey val id: Long,\n val textContent: String\n)\n\ndata class TextWithContent(\n @Embedded val metadata: TextMetadata,\n @Relation(parentColumn = \"textContentId\", entityColumn = \"id\")\n val textContent: TextContent\n)\n\n@Dao\ninterface TextDao {\n @Query(\"SELECT * FROM text_metadata LEFT JOIN text_content ON text_metadata.textContentId = text_content.id\")\n fun getAllTexts(): List<TextWithContent>\n}\n\nUsing this approach, you can load only the metadata for all texts on the landing page, and then lazy-load the long strings when they are needed. This should improve the performance of your database operations.\nAs for expressing embedded metadata with Kotlin annotations, you can use the @Embedded annotation to tell Room that a property of your entity represents an embedded object. 
For example, in the code above, the metadata property of the TextWithContent entity is marked with the @Embedded annotation to indicate that it represents an embedded TextMetadata object.\nI hope this helps! Let me know if you have any other questions.\n", "This is a good question that is also relevant to other environments.\nThe Core Issue: How to store large data without effecting your database?\nAs a rule of thumb you should avoid storing information in your database that is not queryable. Large strings, images, or event metadata which you will never query - does not belong in your db. I was surprised when I realized how many design patterns there are regarding to mongo db (which are relevant to other noSQL databases as well)\nSo, we know that this data should NOT be stored in the DB. But, because the alternative (file system) is WAY worse than that (unless you would like to implement your own secured file-system-based store) - we should at least try to minimize its footprint.\nOur Strategy: save large data chunks in a different table without defining it as an entity (there is no need to wrap it as entity anyway)\nHow Are We Going To Do That?\nWell, thankfully, android room has a direct access to sqLite and it can be used directly (read the docs). This is the place to remind us that android room is built on-top of sqLite - which is (in my own opinion) a fascinating database. I enjoy working with it very much and it's just getting better as the time goes by (personal opinion). Advantages? we are still using android APIs while storing large data in a performant, unified and secure way. yay\nSteps we are going to perform:\n\nInitiate a class which will manage a new database - for storing large data only\nDefine a command that will create our table - constructed of 2 columns\n\nkey (primary key) - the id of the item\nvalue - the item itself\n\n\nIn original db for the Text entity - define a column that will hold the id (key) of the large text stored\nWhenever you save an item to your large items table - get the id and store it in your entity\n\nYou can of course use only 1 table for this.. but.. I know that sqLite requires a certain amount of understanding and it is NOT as easy as android room so.. it's your choice whenever to use 1 or 2 tables in your solution\nBelow is a code that demonstrates the main principal of my proposal\nobject LargeDataContract {\n // all tables for handling large data will be defined here\n\n object TextEntry : BaseColumns {\n const val TABLE_NAME = \"text_entry\"\n const val COLUMN_NAME_KEY = \"key\"\n const val COLUMN_NAME_VALUE = \"value\"\n }\n}\n\n// in the future - append this \"create statement\" whenever you add more tables to your database\nprivate const val SQL_CREATE_ENTRIES =\n \"CREATE TABLE ${TextEntry.TABLE_NAME} (\" +\n \"${TextEntry.COLUMN_NAME_KEY} INTEGER PRIMARY KEY,\" +\n \"${TextEntry.COLUMN_NAME_VALUE} TEXT)\"\n\n// create a helper that will assist you to initiate your database properly\nclass LargeDataDbHelper(context: Context) : SQLiteOpenHelper(context, DATABASE_NAME, null, DATABASE_VERSION) {\n override fun onCreate(db: SQLiteDatabase) {\n db.execSQL(SQL_CREATE_ENTRIES)\n }\n \n companion object {\n // If you change the database schema, you must increment the database version. 
Also - please read `sqLite` documentation to better understand versioning ,upgrade and downgrade operations\n const val DATABASE_VERSION = 1\n const val DATABASE_NAME = \"LargeData.db\"\n }\n}\n\n// create an instance and connect to your database\nval dbHelper = LargeDataDbHelper(context)\n\n// write an item to your database\nval db = dbHelper.writableDatabase\n\nval values = ContentValues().apply {\n put(TextEntry.COLUMN_NAME_VALUE, \"some long value goes here\")\n}\n\nval key = db?.insert(TextEntry.TABLE_NAME, null, values)\n\n// now take the key variable and store it in you entity. this is the only reference you should need\n\nBottom Line: This approach will assist you to gain as much performance as possible while using android APIs. Sure thing, not the most \"intuitive\" solution, but - this is how we gain performance and making great apps as well as educating ourselves and upgrading our knowledge and skillset. Cheers\n" ]
[ 1, 0, 0 ]
[]
[]
[ "android_room", "android_room_embedded", "sqlite" ]
stackoverflow_0074588259_android_room_android_room_embedded_sqlite.txt
Q: Cannot assign to value: 'self' is immutable when assigning the struct within itself If I have this struct in Swift: class MyStruct { public var v1 : UInt64 = 0 public var v2 : Bool = false public var v3 : UInt16 = 0 func setDefaults() { var this = MyStruct() self = this } } Why can't I do: self = this It results in: Cannot assign to value: 'self' is immutable There must be a way to assign all the values in one assignment. What am I missing there? A: You may be looking for: struct MyStruct { public var v1 : UInt64 = 0 public var v2 : Bool = false public var v3 : UInt16 = 0 mutating func setDefaults() { self = MyStruct() } } Note that mutating is only allowed on struct and enum methods; a class method can never reassign self, so the type must be declared as a struct.
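For completeness, a small usage sketch of the struct version; note the instance itself must be declared with var, because calling a mutating method on a let constant is also a compile-time error:

struct MyStruct {
    public var v1 : UInt64 = 0
    public var v2 : Bool = false
    public var v3 : UInt16 = 0

    mutating func setDefaults() {
        self = MyStruct() // replaces every stored property in one assignment
    }
}

var s = MyStruct(v1: 42, v2: true, v3: 7) // memberwise init; must be `var`, not `let`
s.setDefaults()
print(s.v1, s.v2, s.v3) // 0 false 0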
Cannot assign to value: 'self' is immutable when assigning the struct within itself
If I have this struct in Swift: class MyStruct { public var v1 : UInt64 = 0 public var v2 : Bool = false public var v3 : UInt16 = 0 func setDefaults() { var this = MyStruct() self = this } } Why can't I do: self = this It results in: Cannot assign to value: 'self' is immutable There must be a way to assign all the values in one assignment. What am I missing there?
[ "You may be looking for:\nclass MyStruct {\n public var v1 : UInt64 = 0\n public var v2 : Bool = false\n public var v3 : UInt16 = 0\n\n mutating func setDefaults() {\n self = MyStruct()\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "self", "struct", "swift" ]
stackoverflow_0074680335_self_struct_swift.txt
Q: ReactJS change beteween 15.x and 16.x cause error My site works with react 15.6.2 when I update to 16.14.0 added also react-dom .... it errors out Per docs 15.x change to 16.x should work. src/js/config.js //This script configures require js and bootstraps the application //require.js will be concatenated with this file //using the Makefile so that the resulting boot.js //is the only JS file required to bootstrap the app. require.config({ paths: { "plugins" : 'empty:', //plugins are provided via Flask (see app.py) "env_settings" : 'empty:', //these settings get provided via Flask (see app.py) "text" : "../assets/js/text", "requirejs" : "../node_modules/requirejs/requirejs", "jquery" : "../node_modules/jquery/dist/jquery", "bootstrap" : "../node_modules/bootstrap/dist/js/bootstrap", "react-bootstrap" : "../node_modules/react-bootstrap", "moment" : "../node_modules/momentjs/moment", "director" : "../node_modules/director/build/director", "react": "../node_modules/react/react", "react-dom": "../node_modules/react-dom/react-dom", "codemirror" : "../node_modules/codemirror", "sprintf" :"../node_modules/sprintf/src/sprintf", "marked":"../node_modules/marked/lib/marked", "prism":"../node_modules/prism/prism", "prism-react":"../node_modules/prism-react/prism-react" //"object-assign":"../node_modules/object-assign/object-assign" }, shim : { "director" : { exports : 'Router' }, "bootstrap" : { deps : ['jquery'], exports : "Bootstrap", }, "prism" : { exports : 'Prism' }, "d3" : { exports : "d3" }, "threejs" : { exports : "THREE" }, "marked" : { exports : 'marked' } }, baseUrl : "/static/js", urlArgs: "bust=BUILD_TIMESTAMP" }) This is the error: Uncaught Error: Module name "react" has not been loaded yet for context: _. Use require([]) http://requirejs.org/docs/errors.html#notloaded This is produced boot.js (only file needed for the app with JS code) https://www.topcodersonline.com/boot.js What should I add, what could be missing? What changed between 15.x and 16.x? Here is the project code (ReactJS code): https://github.com/marcinguy/betterscan-ce/tree/master/quantifiedcode/frontend A: It looks like the error is occurring because you are using the require function to try to import the react and react-dom modules, but those modules haven't been loaded yet. In order to fix this error, you can try importing the react and react-dom modules using the import keyword instead. For example, you could try changing this code: require.config({ paths: { ... "react": "../node_modules/react/react", "react-dom": "../node_modules/react-dom/react-dom", ... }, ... }) to this: import React from '../node_modules/react/react'; import ReactDOM from '../node_modules/react-dom/react-dom'; ... You might also need to update your code to use the new React 16.x syntax, since there are some changes between React 15.x and 16.x. For example, the React.createClass method was removed in React 16.x, so if you were using that method to create your React components, you will need to update your code to use the new class syntax instead. I hope this helps!
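One likely culprit worth spelling out, stated as a hypothesis since it depends on the package contents on disk: react 15 shipped a loadable react.js file at the package root, which is what the "../node_modules/react/react" paths entry resolves to; in react 16 that root file is gone (the package exposes index.js plus cjs/ and umd/ builds instead), so RequireJS never loads the module and later fails with the "has not been loaded yet" error. With React 16 under RequireJS, pointing the paths at the UMD builds (which include an AMD wrapper) is the usual fix:

require.config({
  paths: {
    // UMD builds are the RequireJS-compatible files shipped with React 16
    "react": "../node_modules/react/umd/react.production.min",
    "react-dom": "../node_modules/react-dom/umd/react-dom.production.min"
    // ...keep the rest of the existing paths unchanged
  }
});

// consumers then load them asynchronously:
require(["react", "react-dom"], function (React, ReactDOM) {
  ReactDOM.render(React.createElement("div", null, "it loads"),
                  document.getElementById("root"));
});

Use the .development variants while debugging; they produce readable error messages.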
ReactJS change between 15.x and 16.x causes error
My site works with react 15.6.2 when I update to 16.14.0 added also react-dom .... it errors out Per docs 15.x change to 16.x should work. src/js/config.js //This script configures require js and bootstraps the application //require.js will be concatenated with this file //using the Makefile so that the resulting boot.js //is the only JS file required to bootstrap the app. require.config({ paths: { "plugins" : 'empty:', //plugins are provided via Flask (see app.py) "env_settings" : 'empty:', //these settings get provided via Flask (see app.py) "text" : "../assets/js/text", "requirejs" : "../node_modules/requirejs/requirejs", "jquery" : "../node_modules/jquery/dist/jquery", "bootstrap" : "../node_modules/bootstrap/dist/js/bootstrap", "react-bootstrap" : "../node_modules/react-bootstrap", "moment" : "../node_modules/momentjs/moment", "director" : "../node_modules/director/build/director", "react": "../node_modules/react/react", "react-dom": "../node_modules/react-dom/react-dom", "codemirror" : "../node_modules/codemirror", "sprintf" :"../node_modules/sprintf/src/sprintf", "marked":"../node_modules/marked/lib/marked", "prism":"../node_modules/prism/prism", "prism-react":"../node_modules/prism-react/prism-react" //"object-assign":"../node_modules/object-assign/object-assign" }, shim : { "director" : { exports : 'Router' }, "bootstrap" : { deps : ['jquery'], exports : "Bootstrap", }, "prism" : { exports : 'Prism' }, "d3" : { exports : "d3" }, "threejs" : { exports : "THREE" }, "marked" : { exports : 'marked' } }, baseUrl : "/static/js", urlArgs: "bust=BUILD_TIMESTAMP" }) This is the error: Uncaught Error: Module name "react" has not been loaded yet for context: _. Use require([]) http://requirejs.org/docs/errors.html#notloaded This is produced boot.js (only file needed for the app with JS code) https://www.topcodersonline.com/boot.js What should I add, what could be missing? What changed between 15.x and 16.x? Here is the project code (ReactJS code): https://github.com/marcinguy/betterscan-ce/tree/master/quantifiedcode/frontend
[ "It looks like the error is occurring because you are using the require function to try to import the react and react-dom modules, but those modules haven't been loaded yet. In order to fix this error, you can try importing the react and react-dom modules using the import keyword instead.\nFor example, you could try changing this code:\nrequire.config({\n paths: {\n ...\n \"react\": \"../node_modules/react/react\",\n \"react-dom\": \"../node_modules/react-dom/react-dom\",\n ...\n },\n ...\n})\n\n\nto this:\nimport React from '../node_modules/react/react';\nimport ReactDOM from '../node_modules/react-dom/react-dom';\n\n...\n\n\nYou might also need to update your code to use the new React 16.x syntax, since there are some changes between React 15.x and 16.x. For example, the React.createClass method was removed in React 16.x, so if you were using that method to create your React components, you will need to update your code to use the new class syntax instead.\nI hope this helps!\n" ]
[ 1 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074680341_reactjs.txt
Q: How can I split excel column based on a very specific pattern that varies? I have a large excel spreadsheet with many hundreds of rows and I want to split the main info column into multiple columns and it's not the most consistent data. I am trying to split it using date stamps found in the text, here are some examples. I understand if this is an impossible task and yes, there really are date and time stamps in the middle of some entries. excel sheet I tried using formulas like =LEFT, =RIGHT, =MID but it only kinda worked sometimes and anytime there was also a time stamp next to the date it would mess it up, so I did not get far with that. I also didn't know how to make that work fully as the entries in the data field vary from 1 entry to 4+ entries. I also tried using power query but either it was not working or I don't understand how to use it. So after being confused and overwhelmed I am now here. A: Regular expressions are the most useful tool for this task. Because Excel doesn't have built-in support for regex, you should either (i) write your own VBA script to use as a custom macro for extracting regex pattern groups, or (ii) install a third-party Excel add-in for this purpose. In the interest of expediency, I'll use the third party option via the Excel "Ablebits" add-in1. Click the "Insert" ribbon at the top and click "Get Add-ins" in the Add-ins category. In the "Store" tab, search for "Ablebits Text Toolkit." Select a column whose messy text you'd like to extract date and time groups from. Click the "Regex Tools" button in the Ablebits Text Toolkit side pane. In the "Regex" box, put the pattern which matches dates and optionally times: (\d{4}\/\d{2}\/\d{2}) (\d{2}:\d{2}:\d{2})? Select the "Extract" radio button Press the "Extract" button at the bottom of the side pane. This should put date and date-time matches in a column next to the selected column. Let me know if you run into issues with this solution! 1 Source from a tutorial on Ablebits.com (https://www.ablebits.com/office-addins-blog/regex-extract-strings-excel/)
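If installing an add-in isn't an option, the first route the answer mentions, a small VBA user-defined function, needs only the VBScript regex engine that ships with Windows. A sketch (paste into a standard module via Alt+F11; the pattern mirrors the one above and is an assumption about your yyyy/mm/dd date format):

' Returns the Nth "yyyy/mm/dd[ hh:mm:ss]" match found in a cell, or "" if none.
Public Function ExtractStamp(cellText As String, Optional n As Long = 1) As String
    Dim re As Object, matches As Object
    Set re = CreateObject("VBScript.RegExp")
    re.Pattern = "\d{4}/\d{2}/\d{2}( \d{2}:\d{2}:\d{2})?"
    re.Global = True
    Set matches = re.Execute(cellText)
    If matches.Count >= n Then
        ExtractStamp = matches(n - 1).Value
    Else
        ExtractStamp = ""
    End If
End Function

Usage in a worksheet: =ExtractStamp(A2) returns the first stamp in A2 and =ExtractStamp(A2, 2) the second if present, so entries with one to four stamps can be split across as many helper columns.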
How can I split excel column based on a very specific pattern that varies?
I have a large excel spreadsheet with many hundreds of rows and I want to split the main info column into multiple columns and it's not the most consistent data. I am trying to split it using date stamps found in the text, here are some examples. I understand if this is an impossible task and yes, there really are date and time stamps in the middle of some entries. excel sheet I tried using formulas like =LEFT, =RIGHT, =MID but it only kinda worked sometimes and anytime there was also a time stamp next to the date it would mess it up, so I did not get far with that. I also didn't know how to make that work fully as the entries in the data field vary from 1 entry to 4+ entries. I also tried using power query but either it was not working or I don't understand how to use it. So after being confused and overwhelmed I am now here.
[ "Regular expressions are the most useful tool for this task. Because Excel doesn't have built-in support for regex, you should either (i) write your own VBA script to use as a custom macro for extracting regex pattern groups, or (ii) install a third-party Excel add-in for this purpose.\nIn the interest of expediency, I'll use the third party option via the Excel \"Ablebits\" add-in1.\n\nClick the \"Insert\" ribbon at the top and click \"Get Add-ins\" in the Add-ins category.\nIn the \"Store\" tab, search for \"Ablebits Text Toolkit.\"\nSelect a column whose messy text you'd like to extract date and time groups from.\nClick the \"Regex Tools\" button in the Ablebits Text Toolkit side pane.\nIn the \"Regex\" box, put the pattern which matches dates and optionally times:\n\n(\\d{4}\\/\\d{2}\\/\\d{2}) (\\d{2}:\\d{2}:\\d{2})?\n\n\nSelect the \"Extract\" radio button\nPress the \"Extract\" button at the bottom of the side pane.\n\nThis should put date and date-time matches in a column next to the selected column. Let me know if you run into issues with this solution!\n\n1 Source from a tutorial on Ablebits.com (https://www.ablebits.com/office-addins-blog/regex-extract-strings-excel/)\n" ]
[ 0 ]
[]
[]
[ "excel", "excel_formula" ]
stackoverflow_0074680020_excel_excel_formula.txt
Q: Pythonic way of checking if a condition holds for any element of a list I have a list in Python, and I want to check if any elements are negative. Is there a simple function or syntax I can use to apply the "is negative" check to all the elements, and see if any of them is negative? I looked through the documentation and couldn't find anything similar. The best I could come up with was: if (True in [t < 0 for t in x]): # do something I find this rather inelegant. Is there a better way to do this in Python? Existing answers here use the built-in function any to do the iteration. See How do Python's any and all functions work? for an explanation of any and its counterpart, all. If the condition you want to check is "is found in another container", see How to check if one of the following items is in a list? and its counterpart, How to check if all of the following items are in a list?. Using any and all will work, but more efficient solutions are possible. A: any(): if any(t < 0 for t in x): # do something Also, if you're going to use "True in ...", make it a generator expression so it doesn't take O(n) memory: if True in (t < 0 for t in x): A: Use any(). if any(t < 0 for t in x): # do something A: Python has a built in any() function for exactly this purpose.
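A small addendum on behavior: any short-circuits, so with a generator expression it stops at the first negative element instead of scanning the whole list, and the symmetric check uses all:

x = [3, 1, -4, 1, 5, 9]

if any(t < 0 for t in x):      # stops as soon as -4 is seen
    print("has a negative")

if all(t >= 0 for t in x):     # the complementary check
    print("all non-negative")

# By De Morgan's laws the two are equivalent:
# any(t < 0 for t in x) == (not all(t >= 0 for t in x))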
Pythonic way of checking if a condition holds for any element of a list
I have a list in Python, and I want to check if any elements are negative. Is there a simple function or syntax I can use to apply the "is negative" check to all the elements, and see if any of them is negative? I looked through the documentation and couldn't find anything similar. The best I could come up with was: if (True in [t < 0 for t in x]): # do something I find this rather inelegant. Is there a better way to do this in Python? Existing answers here use the built-in function any to do the iteration. See How do Python's any and all functions work? for an explanation of any and its counterpart, all. If the condition you want to check is "is found in another container", see How to check if one of the following items is in a list? and its counterpart, How to check if all of the following items are in a list?. Using any and all will work, but more efficient solutions are possible.
[ "any():\nif any(t < 0 for t in x):\n # do something\n\nAlso, if you're going to use \"True in ...\", make it a generator expression so it doesn't take O(n) memory:\nif True in (t < 0 for t in x):\n\n", "Use any().\nif any(t < 0 for t in x):\n # do something\n\n", "Python has a built in any() function for exactly this purpose.\n" ]
[ 246, 37, 11 ]
[ "a=x.copy()\na.sort()\nif a[0]<0:\n # do something\n\n" ]
[ -1 ]
[ "list", "python" ]
stackoverflow_0001342601_list_python.txt
Q: can media query changes based on main div width instead of window width? my Friends I have a problem.i have a project that has 3 buttons in it's header .these buttons change width of main div to my specific width has set in my Java script.i want something like media queries for example change my div color or hide or show something when div width changes. i know I can use Java script to define background color for every width but imagine that i have navbar that has sandwich menu button on mobile.this button can easily hide in an specific width that i have defined on my media queries;but what about hiding this button when my main div width decreases to mobile view width instead of changing browser width? please excuse me for this long text. <body> <header> <span role="button" class="header-collapse" ><i class="fas fa-angle-up"></i ></span> <ul class="screens"> <li> <a href="#"><i class="fas fa-mobile-alt"></i></a> </li> <li> <a href="#"><i class="fas fa-tablet-alt"></i></a> </li> <li> <a href="#"><i class="fas fa-desktop"></i></a> </li> </ul> </header> <main class="content"> </main> <footer> <span>Coded by Mahdi Baghaei</span> </footer> <script src="./js/app.js"></script> </body> *, html { padding: 0; margin: 0; box-sizing: border-box; } body { min-height: 100vh; display: flex; flex-direction: column; font-family: "poppins", sans-serif; } header { min-height: 5vh; position: relative; background-color: #e4e1fc; width: 15%; margin: 0 auto; transition: all 0.2s ease; } .header-collapse { position: absolute; display: block; left: 50%; transform: translateX(-50%); top: 100%; background-color: #e4e1fc; width: 2rem; height: 1rem; display: flex; justify-content: center; align-items: center; color: #2512cc; cursor: pointer; transition: background-color 0.3s ease, color 0.2s ease; } .header-collapse:hover { background-color: #a49af6; color: white; } .header-set { transform: translateY(-100%); } .header-collapse-set { transform: translateX(-50%) rotateZ(-180deg); } .screens { width: 100%; margin: 0 auto; display: flex; list-style: none; justify-content: space-around; height: 100%; align-items: center; padding: 1rem; } .screens li a { width: 2rem; height: 2rem; display: flex; flex-direction: column; text-align: center; justify-content: center; transition: background-color 0.3s ease, color 0.2s ease; color: #2512cc; align-items: center; } .screens li a:hover { background-color: #a49af6; color: white; } .screens li a i { font-size: 1rem; } .content { width: 100%; height: 75vh; transition: all 0.4s ease; margin: auto; border: 1px solid white; } .content-mobile { width: 768px; } .content-tablet { width: 1200px; } footer { margin-top: auto; background-color: #e4e1fc; min-height: 5vh; padding: 1rem; text-align: center; } footer span { font-size: 0.9rem; } @media screen and (max-width: 768px) { .content { background-color: red; } } @media screen and (min-width: 768px) { .content { background-color: green; } } @media screen and (min-width: 1200px) { .content { background-color: blue; } } const header = document.querySelector("header"); const headerCollapse = document.querySelector(".header-collapse"); const content = document.querySelector(".content"); const screenSwitch = document.querySelectorAll(".screens li"); headerCollapse.addEventListener("click", headerCollapseF); function headerCollapseF() { header.classList.toggle("header-set"); headerCollapse.classList.toggle("header-collapse-set"); } screenSwitch[0].addEventListener("click", mobileView); screenSwitch[1].addEventListener("click", tabletView); 
screenSwitch[2].addEventListener("click", standardView); function mobileView() { content.style.width = "768px"; content.style.border = "1px solid black"; } function tabletView() { content.style.width = "1200px"; content.style.border = "1px solid black"; } function standardView() { content.style.width = "100%"; content.style.border = "1px solid white"; } A: I have solved my problem by using iframe html tag thanks you all.
Can media queries change based on main div width instead of window width?
my Friends I have a problem.i have a project that has 3 buttons in it's header .these buttons change width of main div to my specific width has set in my Java script.i want something like media queries for example change my div color or hide or show something when div width changes. i know I can use Java script to define background color for every width but imagine that i have navbar that has sandwich menu button on mobile.this button can easily hide in an specific width that i have defined on my media queries;but what about hiding this button when my main div width decreases to mobile view width instead of changing browser width? please excuse me for this long text. <body> <header> <span role="button" class="header-collapse" ><i class="fas fa-angle-up"></i ></span> <ul class="screens"> <li> <a href="#"><i class="fas fa-mobile-alt"></i></a> </li> <li> <a href="#"><i class="fas fa-tablet-alt"></i></a> </li> <li> <a href="#"><i class="fas fa-desktop"></i></a> </li> </ul> </header> <main class="content"> </main> <footer> <span>Coded by Mahdi Baghaei</span> </footer> <script src="./js/app.js"></script> </body> *, html { padding: 0; margin: 0; box-sizing: border-box; } body { min-height: 100vh; display: flex; flex-direction: column; font-family: "poppins", sans-serif; } header { min-height: 5vh; position: relative; background-color: #e4e1fc; width: 15%; margin: 0 auto; transition: all 0.2s ease; } .header-collapse { position: absolute; display: block; left: 50%; transform: translateX(-50%); top: 100%; background-color: #e4e1fc; width: 2rem; height: 1rem; display: flex; justify-content: center; align-items: center; color: #2512cc; cursor: pointer; transition: background-color 0.3s ease, color 0.2s ease; } .header-collapse:hover { background-color: #a49af6; color: white; } .header-set { transform: translateY(-100%); } .header-collapse-set { transform: translateX(-50%) rotateZ(-180deg); } .screens { width: 100%; margin: 0 auto; display: flex; list-style: none; justify-content: space-around; height: 100%; align-items: center; padding: 1rem; } .screens li a { width: 2rem; height: 2rem; display: flex; flex-direction: column; text-align: center; justify-content: center; transition: background-color 0.3s ease, color 0.2s ease; color: #2512cc; align-items: center; } .screens li a:hover { background-color: #a49af6; color: white; } .screens li a i { font-size: 1rem; } .content { width: 100%; height: 75vh; transition: all 0.4s ease; margin: auto; border: 1px solid white; } .content-mobile { width: 768px; } .content-tablet { width: 1200px; } footer { margin-top: auto; background-color: #e4e1fc; min-height: 5vh; padding: 1rem; text-align: center; } footer span { font-size: 0.9rem; } @media screen and (max-width: 768px) { .content { background-color: red; } } @media screen and (min-width: 768px) { .content { background-color: green; } } @media screen and (min-width: 1200px) { .content { background-color: blue; } } const header = document.querySelector("header"); const headerCollapse = document.querySelector(".header-collapse"); const content = document.querySelector(".content"); const screenSwitch = document.querySelectorAll(".screens li"); headerCollapse.addEventListener("click", headerCollapseF); function headerCollapseF() { header.classList.toggle("header-set"); headerCollapse.classList.toggle("header-collapse-set"); } screenSwitch[0].addEventListener("click", mobileView); screenSwitch[1].addEventListener("click", tabletView); screenSwitch[2].addEventListener("click", standardView); function mobileView() { 
content.style.width = "768px"; content.style.border = "1px solid black"; } function tabletView() { content.style.width = "1200px"; content.style.border = "1px solid black"; } function standardView() { content.style.width = "100%"; content.style.border = "1px solid white"; }
[ "I have solved my problem by using iframe html tag\nthanks you all.\n" ]
[ 1 ]
[]
[]
[ "css", "html", "javascript" ]
stackoverflow_0074533308_css_html_javascript.txt
Q: Getaddrinfo and Socket programming currently going through Beej guide to client server on c++. https://beej.us/guide/bgnet/html/#client-server-background Before reading, its not really a important question, since its working now, is just im curious why this PROFESSIONAL guy is doing it a certain way. So I'm just doing simple client and server initialisation. Cutting to the chase.... I need help with some explanation on the AI_PASSIVE, hostname parameter, and the purpose of reusing this addrinfo everywhere like its a better approach. BEEJ format of creating a socket is, socklen_t addr_size; struct addrinfo hints, *res; int sockfd, new_fd; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; // use IPv4 or IPv6, whichever hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // fill in my IP for me getaddrinfo(NULL, MYPORT, &hints, &res); sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol); bind(sockfd, res->ai_addr, res->ai_addrlen); listen(sockfd, BACKLOG); using getaddrinfo to get an addrinfo linked list, addrinfo is a structure with all information pertaining to an address. So as you can see he uses the first of the linked list, to create, bind a socket by just repeating through the properties of the addrinfo. I have not tried using his code for SOCK_STREAM, because it just shows more of the same format, where i just throw the addrinfo properties into functions. But when doing SOCK_DGRAM, where i use recvFrom and sendTo, i have to explicitly add the addresses to send to and receive from. Becomes a mess when i try to use the the address from the addrinfo. (eg. res->ai_family, res->ai_socktype) he just goes through all that to fill up the params required by functions memset(&hints, 0, sizeof hints); hints.ai_family = AF_INET; // AF_INET or AF_INET6 to force version hints.ai_socktype = SOCK_DGRAM; // TCP stream sockets , for UDP is SOCK_DGRAM hints.ai_flags = AI_PASSIVE; if ((status = getaddrinfo(NULL, MYPORT, &hints, &res)) < 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status)); return 2; } From understanding ai_flags set to AI_PASSIVE without stating a hostname in the params results in using the local address, but the value i get keeps changing based on what i have printed out. Also I cant get it to work without explicitly doing it manually which i found from other examples. What i ended up doing copying from some one from stackoverflow struct sockaddr_in serAddr; // binding the port to ip and port serAddr.sin_family = AF_INET; serAddr.sin_port = htons(8080); serAddr.sin_addr.s_addr = INADDR_ANY; // bind it to the port we passed in to getaddrinfo(): int bindres; if ((bindres = bind(sockfd, (struct sockaddr *)&serAddr, sizeof(serAddr))) < 0) { return 2; } So this works perfectly, everything is straightforward theres no guess work on what is being returned, cause im testing locally right now. I just want to know why BEEJ is doing it that certain way, and why is it not working as he has explained. A: The AI_PASSIVE flag in the hints struct tells getaddrinfo to return information suitable for binding a socket that will accept connections. When this flag is set and the hostname parameter is NULL, getaddrinfo will return information about the wildcard address (i.e. the address that will listen on all network interfaces). In other words, the AI_PASSIVE flag is used to specify that the addrinfo structure returned by getaddrinfo should be used to bind to a socket and listen for incoming connections. 
This is why the addrinfo structure returned by getaddrinfo is used to create and bind the socket in Beej's code. Beej's code is using the first element of the linked list returned by getaddrinfo because it is assumed that the first element of the list will be the most suitable for the purpose at hand (i.e. creating a socket to accept incoming connections). It is common to loop through the linked list returned by getaddrinfo and try each element until a successful connection is made or the end of the list is reached. However, as you have observed, it can be simpler and more straightforward to just fill in the fields of a sockaddr_in or sockaddr_in6 structure manually when using SOCK_DGRAM sockets. This is because SOCK_DGRAM sockets require the remote address to be specified when sending or receiving data, so it is necessary to fill in the fields of the sockaddr_in or sockaddr_in6 structure manually in this case.
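To make the "loop through the linked list" point concrete, this is the pattern Beej's fuller listings use: try each addrinfo entry until one binds, instead of trusting the first element (sketch, error handling trimmed):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int make_listener(const char *port) {
    struct addrinfo hints, *res, *p;
    int sockfd = -1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags    = AI_PASSIVE;   /* wildcard local address for bind() */

    if (getaddrinfo(NULL, port, &hints, &res) != 0)
        return -1;

    /* some entries (e.g. IPv6 on an IPv4-only host) may fail: keep walking */
    for (p = res; p != NULL; p = p->ai_next) {
        sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (sockfd == -1)
            continue;
        if (bind(sockfd, p->ai_addr, p->ai_addrlen) == 0)
            break;                    /* success */
        close(sockfd);
        sockfd = -1;
    }
    freeaddrinfo(res);                /* the list is no longer needed */
    return sockfd;                    /* -1 if nothing bound */
}

This also explains why the AI_PASSIVE result seemed to keep changing: getaddrinfo may return several wildcard entries (IPv4 0.0.0.0, IPv6 ::) in an order the resolver chooses, which is harmless once you iterate rather than index.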
Getaddrinfo and Socket programming
currently going through Beej's guide to client-server on C++. https://beej.us/guide/bgnet/html/#client-server-background Before reading: it's not really an important question, since it's working now; I'm just curious why this professional guy is doing it a certain way. So I'm just doing simple client and server initialisation. Cutting to the chase.... I need help with some explanation of AI_PASSIVE, the hostname parameter, and the purpose of reusing this addrinfo everywhere like it's a better approach. BEEJ's format for creating a socket is: socklen_t addr_size; struct addrinfo hints, *res; int sockfd, new_fd; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; // use IPv4 or IPv6, whichever hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // fill in my IP for me getaddrinfo(NULL, MYPORT, &hints, &res); sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol); bind(sockfd, res->ai_addr, res->ai_addrlen); listen(sockfd, BACKLOG); He uses getaddrinfo to get an addrinfo linked list; addrinfo is a structure with all information pertaining to an address. So as you can see, he uses the first entry of the linked list to create and bind a socket by just reading off the properties of the addrinfo. I have not tried using his code for SOCK_STREAM, because it just shows more of the same format, where I just throw the addrinfo properties into functions. But when doing SOCK_DGRAM, where I use recvfrom and sendto, I have to explicitly add the addresses to send to and receive from. It becomes a mess when I try to use the address from the addrinfo (e.g. res->ai_family, res->ai_socktype). He just goes through all that to fill up the params required by functions: memset(&hints, 0, sizeof hints); hints.ai_family = AF_INET; // AF_INET or AF_INET6 to force version hints.ai_socktype = SOCK_DGRAM; // TCP stream sockets , for UDP is SOCK_DGRAM hints.ai_flags = AI_PASSIVE; if ((status = getaddrinfo(NULL, MYPORT, &hints, &res)) < 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status)); return 2; } From my understanding, ai_flags set to AI_PASSIVE without stating a hostname in the params results in using the local address, but the value I get keeps changing based on what I have printed out. Also I can't get it to work without explicitly doing it manually, which I found from other examples. What I ended up doing, copied from someone on Stack Overflow: struct sockaddr_in serAddr; // binding the port to ip and port serAddr.sin_family = AF_INET; serAddr.sin_port = htons(8080); serAddr.sin_addr.s_addr = INADDR_ANY; // bind it to the port we passed in to getaddrinfo(): int bindres; if ((bindres = bind(sockfd, (struct sockaddr *)&serAddr, sizeof(serAddr))) < 0) { return 2; } So this works perfectly; everything is straightforward, there's no guesswork about what is being returned, since I'm testing locally right now. I just want to know why BEEJ is doing it that certain way, and why is it not working as he has explained.
[ "The AI_PASSIVE flag in the hints struct tells getaddrinfo to return information suitable for binding a socket that will accept connections. When this flag is set and the hostname parameter is NULL, getaddrinfo will return information about the wildcard address (i.e. the address that will listen on all network interfaces).\nIn other words, the AI_PASSIVE flag is used to specify that the addrinfo structure returned by getaddrinfo should be used to bind to a socket and listen for incoming connections. This is why the addrinfo structure returned by getaddrinfo is used to create and bind the socket in Beej's code.\nBeej's code is using the first element of the linked list returned by getaddrinfo because it is assumed that the first element of the list will be the most suitable for the purpose at hand (i.e. creating a socket to accept incoming connections). It is common to loop through the linked list returned by getaddrinfo and try each element until a successful connection is made or the end of the list is reached.\nHowever, as you have observed, it can be simpler and more straightforward to just fill in the fields of a sockaddr_in or sockaddr_in6 structure manually when using SOCK_DGRAM sockets. This is because SOCK_DGRAM sockets require the remote address to be specified when sending or receiving data, so it is necessary to fill in the fields of the sockaddr_in or sockaddr_in6 structure manually in this case.\n" ]
[ 0 ]
[]
[]
[ "c++", "sockets" ]
stackoverflow_0074680119_c++_sockets.txt
Q: Instapy problem, why doesn't the code work? from instapy import InstaPy from instapy import smart_run import time my_username = '_georgekazaras' my_password = 'mypassword' def job(): session = InstaPy(username=my_username, password=my_password) with smart_run(session): session.set_relationship_bounds(enabled=True, delimit_by_numbers=True, max_followers=90000000000000, min_followers=1, min_following=30) session.set_do_follow(True, precentage=100) session.set_dont_like(['tag1', 'tag2', 'tag3', 'tag4', 'tag5', 'tag6', 'tag7']) session.like_by_tags(['cars', 'chess', 'sports']) job() I tried getting the code out of the function to see if it helps, but it didn't. Is the library not working anymore, is it maybe because of my Instagram settings, or is it something about the code? A: You have a typo in set_do_follow(percentage not precentage)
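In other words, only the keyword spelling needs to change. Assuming the method signature matches the accepted answer's spelling and the rest of the script stays as posted, the corrected line is:

session.set_do_follow(True, percentage=100)  # keyword is 'percentage', not 'precentage'

Python raises a TypeError for an unexpected keyword argument, so the original call would fail before InstaPy ever starts following anyone.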
Instapy problem, why doesn't the code work?
from instapy import InstaPy from instapy import smart_run import time my_username = '_georgekazaras' my_password = 'mypassword' def job(): session = InstaPy(username=my_username, password=my_password) with smart_run(session): session.set_relationship_bounds(enabled=True, delimit_by_numbers=True, max_followers=90000000000000, min_followers=1, min_following=30) session.set_do_follow(True, precentage=100) session.set_dont_like(['tag1', 'tag2', 'tag3', 'tag4', 'tag5', 'tag6', 'tag7']) session.like_by_tags(['cars', 'chess', 'sports']) job() I tried getting the code out of the function to see if it helps, but it didn't. Is the library not working anymore, is it maybe because of my Instagram settings, or is it something about the code?
[ "You have a typo in set_do_follow(percentage not precentage)\n" ]
[ 0 ]
[]
[]
[ "function", "instagram", "instapy", "module", "python" ]
stackoverflow_0074680374_function_instagram_instapy_module_python.txt
Q: How to scale QImage to a small size with good quality I have eight different videos. And I am trying to show these videos in a split window. My video quality is 720p. But I need to fit in small frame. When I resize the video with p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio) as a 256x450 I couldn't get good quality of video. How can I resize as a good quality. What is your suggestion to me? @pyqtSlot(list) def update_image(self, cv_img = []): """Updates the image_label with a new opencv image""" qt_img = [] for i in range(0,8): # qt_img.append(0) qt_img.append(self.convert_cv_qt(cv_img[i])) self.ui.video1.setPixmap(qt_img[0]) self.ui.video2.setPixmap(qt_img[1]) self.ui.video3.setPixmap(qt_img[2]) self.ui.video4.setPixmap(qt_img[3]) self.ui.video5.setPixmap(qt_img[4]) self.ui.video6.setPixmap(qt_img[5]) self.ui.video7.setPixmap(qt_img[6]) self.ui.video8.setPixmap(qt_img[7]) def convert_cv_qt(self, cv_img): """Convert from an opencv image to QPixmap""" rgb_image = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB) h, w, ch = rgb_image.shape bytes_per_line = ch * w convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888) p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio) return QPixmap.fromImage(p) A: Note that QImage::scaled() has an optional parameter transformMode that defaults to Qt::FastTransformation. If you pass Qt::SmoothTransformation the results should be better, because bilinear filtering is used.
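For reference, a drop-in change to the asker's convert_cv_qt that applies the smooth mode, using the PyQt5 names already imported in the question:

p = convert_to_Qt_format.scaled(
    256, 450,
    Qt.KeepAspectRatio,
    Qt.SmoothTransformation,  # bilinear filtering; the default is Qt.FastTransformation
)

When downscaling 720p frames this far, SmoothTransformation noticeably reduces the aliasing that the nearest-neighbour FastTransformation produces, at a small CPU cost per frame.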
How to scale QImage to a small size with good quality
I have eight different videos. And I am trying to show these videos in a split window. My video quality is 720p. But I need to fit in small frame. When I resize the video with p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio) as a 256x450 I couldn't get good quality of video. How can I resize as a good quality. What is your suggestion to me? @pyqtSlot(list) def update_image(self, cv_img = []): """Updates the image_label with a new opencv image""" qt_img = [] for i in range(0,8): # qt_img.append(0) qt_img.append(self.convert_cv_qt(cv_img[i])) self.ui.video1.setPixmap(qt_img[0]) self.ui.video2.setPixmap(qt_img[1]) self.ui.video3.setPixmap(qt_img[2]) self.ui.video4.setPixmap(qt_img[3]) self.ui.video5.setPixmap(qt_img[4]) self.ui.video6.setPixmap(qt_img[5]) self.ui.video7.setPixmap(qt_img[6]) self.ui.video8.setPixmap(qt_img[7]) def convert_cv_qt(self, cv_img): """Convert from an opencv image to QPixmap""" rgb_image = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB) h, w, ch = rgb_image.shape bytes_per_line = ch * w convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888) p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio) return QPixmap.fromImage(p)
[ "Note that QImage::scaled() has an optional parameter transformMode that defaults to Qt::FastTransformation. If you pass Qt::SmoothTransformation the results should be better, because bilinear filtering is used.\n" ]
[ 0 ]
[]
[]
[ "pyqt5", "python", "qimage", "qt", "video_processing" ]
stackoverflow_0074679910_pyqt5_python_qimage_qt_video_processing.txt
Q: GoogleSheet best 5 results with a maximum in another column I am looking for a formula or something similar to output 5 best results. The table looks like this. Name Column B Column C a 4 0 b 29 28 c 30 32 d 26 26 e 16 14 f 40 42 g 10 16 h 2 0 much more data Column B may not exceed a total of 100 for 5 results. The maximum with 5 results from column C need to be determined. The 5 best names should be displayed. I hope you understand my problem. Thanks! I tried with LARGE function but failed. A: Try with: =SORTN(FILTER(A:C,B:B<100),5,,3,0)
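Reading the suggested formula piece by piece, under the assumption that the data sits in columns A:C: FILTER(A:C, B:B<100) keeps only rows whose own column-B value is below 100, and SORTN(..., 5, , 3, 0) then returns the top 5 of those rows sorted by the 3rd column of the range (column C) in descending order:

=SORTN(FILTER(A:C, B:B<100), 5, , 3, 0)

Note this treats "may not exceed 100" as a per-row limit; if the five selected rows' column-B values must instead sum to at most 100, a plain FILTER/SORTN combination is not enough and an optimisation-style approach would be needed.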
GoogleSheet best 5 results with a maximum in another column
I am looking for a formula or something similar to output 5 best results. The table looks like this. Name Column B Column C a 4 0 b 29 28 c 30 32 d 26 26 e 16 14 f 40 42 g 10 16 h 2 0 much more data Column B may not exceed a total of 100 for 5 results. The maximum with 5 results from column C need to be determined. The 5 best names should be displayed. I hope you understand my problem. Thanks! I tried with LARGE function but failed.
[ "Try with:\n=SORTN(FILTER(A:C,B:B<100),5,,3,0)\n\n" ]
[ 0 ]
[]
[]
[ "google_sheets" ]
stackoverflow_0074679842_google_sheets.txt
Q: How to "nail" a dynamic body? So, what I want to achieve is to have the body's center of masses in a static position, but freely rotating. Like it's nailed to a wall or something Tried to look at joints, but that doesn't feel right... A: To achieve the effect of a body with a fixed center of mass that can rotate freely, you can use a combination of a weld joint and a revolute joint in Box2D. A weld joint can be used to keep two bodies together at a fixed distance and angle, effectively simulating the effect of "nailing" the body to a wall. A revolute joint, on the other hand, allows two bodies to rotate around a common point, which can be used to allow the body to rotate freely around the weld joint. Here is one way to achieve that: // Define the body b2BodyDef bodyDef; bodyDef.type = b2_dynamicBody; bodyDef.position.Set(0.0f, 0.0f); b2Body* body = world.CreateBody(&bodyDef); // Define the shape b2CircleShape shape; shape.m_radius = 0.5f; // Define the fixture b2FixtureDef fixtureDef; fixtureDef.shape = &shape; fixtureDef.density = 1.0f; // Attach the fixture to the body body->CreateFixture(&fixtureDef); // Define the weld joint b2WeldJointDef weldJointDef; weldJointDef.bodyA = body; weldJointDef.bodyB = world.CreateBody(new b2BodyDef); weldJointDef.localAnchorA.Set(0.0f, 0.0f); weldJointDef.localAnchorB.Set(0.0f, 0.0f); weldJointDef.referenceAngle = 0.0f; // Create the weld joint world.CreateJoint(&weldJointDef); // Define the revolute joint b2RevoluteJointDef revoluteJointDef; revoluteJointDef.bodyA = body; revoluteJointDef.bodyB = world.CreateBody(new b2BodyDef); revoluteJointDef.localAnchorA.Set(0.0f, 0.0f); revoluteJointDef.localAnchorB.Set(0.0f, 0.0f); // Create the revolute joint world.CreateJoint(&revoluteJointDef); In this example, a body and fixture are created and attached to each other, as usual. Then, a weld joint is created between the body and a static body, with the weld joint anchored at the center of mass of the body. This fixes the position of the body's center of mass. Next, a revolute joint is created between the body and another static body, also anchored at the center of mass of the body. This allows the body to rotate freely around the weld joint. I hope this helps! Let me know if you have any other questions.
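A caveat on the answer above: a weld joint by definition also locks relative rotation, so welding the body to a static anchor would stop it from spinning at all. For the effect described in the question (fixed pivot, free rotation), a single revolute joint between the dynamic body and a static anchor body is usually sufficient. A minimal sketch in the same C++ API style, with the anchor position chosen arbitrarily:

// Static anchor body at the "nail" position (b2BodyDef defaults to b2_staticBody)
b2BodyDef anchorDef;
anchorDef.position.Set(0.0f, 0.0f);
b2Body* anchor = world.CreateBody(&anchorDef);

// Revolute joint pinned at the body's center of mass, leaving rotation free
b2RevoluteJointDef jd;
jd.Initialize(anchor, body, body->GetWorldCenter());
world.CreateJoint(&jd);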
How to "nail" a dynamic body?
So, what I want to achieve is to have the body's center of mass in a static position, but freely rotating. Like it's nailed to a wall or something. I tried looking at joints, but that doesn't feel right...
[ "To achieve the effect of a body with a fixed center of mass that can rotate freely, you can use a combination of a weld joint and a revolute joint in Box2D.\nA weld joint can be used to keep two bodies together at a fixed distance and angle, effectively simulating the effect of \"nailing\" the body to a wall.\nA revolute joint, on the other hand, allows two bodies to rotate around a common point, which can be used to allow the body to rotate freely around the weld joint.\nHere is one way to achieve that:\n// Define the body\nb2BodyDef bodyDef;\nbodyDef.type = b2_dynamicBody;\nbodyDef.position.Set(0.0f, 0.0f);\nb2Body* body = world.CreateBody(&bodyDef);\n\n// Define the shape\nb2CircleShape shape;\nshape.m_radius = 0.5f;\n\n// Define the fixture\nb2FixtureDef fixtureDef;\nfixtureDef.shape = &shape;\nfixtureDef.density = 1.0f;\n\n// Attach the fixture to the body\nbody->CreateFixture(&fixtureDef);\n\n// Define the weld joint\nb2WeldJointDef weldJointDef;\nweldJointDef.bodyA = body;\nweldJointDef.bodyB = world.CreateBody(new b2BodyDef);\nweldJointDef.localAnchorA.Set(0.0f, 0.0f);\nweldJointDef.localAnchorB.Set(0.0f, 0.0f);\nweldJointDef.referenceAngle = 0.0f;\n\n// Create the weld joint\nworld.CreateJoint(&weldJointDef);\n\n// Define the revolute joint\nb2RevoluteJointDef revoluteJointDef;\nrevoluteJointDef.bodyA = body;\nrevoluteJointDef.bodyB = world.CreateBody(new b2BodyDef);\nrevoluteJointDef.localAnchorA.Set(0.0f, 0.0f);\nrevoluteJointDef.localAnchorB.Set(0.0f, 0.0f);\n\n// Create the revolute joint\nworld.CreateJoint(&revoluteJointDef);\n\nIn this example, a body and fixture are created and attached to each other, as usual. Then, a weld joint is created between the body and a static body, with the weld joint anchored at the center of mass of the body. This fixes the position of the body's center of mass.\nNext, a revolute joint is created between the body and another static body, also anchored at the center of mass of the body. This allows the body to rotate freely around the weld joint.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "box2d" ]
stackoverflow_0074680285_box2d.txt
Q: DevTools Lighthouse: Best Practices displays "..." as a Deprecation/Warning I'm testing the performance of my website with Lighthouse DevTool but I can't score 100% in Best Practices because it keeps showing this as a problem: Does anybody know what this means? How could I solve it? A: Also getting this, so you're not alone. Wondering if it was a recent Chrome bug I tried here - https://www.webpagetest.org/ and got the same useless error. After a bit more digging, I found this closed issue on the lighthouse github repo. https://github.com/GoogleChrome/lighthouse/issues/14233 So, it's been fixed, but only in version 10. Chrome 105 seems to be using lighthouse 9.6.2 and npm at the present time seems to install 9.6.7 (which has the same bug). You can run the latest version by pulling the repo from github, building and running on the command line. See instructions here https://github.com/GoogleChrome/lighthouse#develop Note, the build doesn't work on an M1 mac, due to i386 specific build tools. I had to dig out my old macbook to get this working. A: You can find the specific errors in the "Issues" tab where the "Console" output is.
DevTools Lighthouse: Best Practices displays "..." as a Deprecation/Warning
I'm testing the performance of my website with Lighthouse DevTool but I can't score 100% in Best Practices because it keeps showing this as a problem: Does anybody know what this means? How could I solve it?
[ "Also getting this, so you're not alone. Wondering if it was a recent Chrome bug I tried here - https://www.webpagetest.org/ and got the same useless error.\nAfter a bit more digging, I found this closed issue on the lighthouse github repo.\nhttps://github.com/GoogleChrome/lighthouse/issues/14233\nSo, it's been fixed, but only in version 10. Chrome 105 seems to be using lighthouse 9.6.2 and npm at the present time seems to install 9.6.7 (which has the same bug).\nYou can run the latest version by pulling the repo from github, building and running on the command line. See instructions here\nhttps://github.com/GoogleChrome/lighthouse#develop\nNote, the build doesn't work on an M1 mac, due to i386 specific build tools. I had to dig out my old macbook to get this working.\n", "You can find the specific errors in the \"Issues\" tab where the \"Console\" output is.\n\n" ]
[ 1, 0 ]
[]
[]
[ "devtools", "google_chrome_devtools" ]
stackoverflow_0073666680_devtools_google_chrome_devtools.txt
Q: Concatenate or Textjoin for table rows based on criteria I have a GPS track file that I am importing into Excel (multiple cars in same file) and I want to manipulate and export the data so that it conforms to a gpx file type for a single chosen car. Some of the columns are not needed from the original file and some text needs to be added between the existing columns. I have built a macro that will do half of what I want but it copies the entire row for that car instead of getting the data in the form I need. In excel I can use the textjoin formula to achieve the goal I have but I want it to be a macro and that's where I am having the problem. Below is some sample data and my macro. I would enter the car number I am looking for into C21 on sheet1 and only rows that are for that car# (column b) would be moved to sheet2. The format I need is "trkpt lat="insert lat" lon="insert lon" time/insert time/" and this is where I would concat or textjoin specific portions of the original row onto sheet2 but in the above mentioned format. Here is an example of the data and my macro that is only working to copy the entire row Date/Time Car# Junk Lat Lon Junk2 Converted Date/Time 20221125050122ES 6 0 27.19483 -82.43863 x 2022-11-25T05:01:22-05:00 20221125050158ES 6 0 27.20587 -82.44154 x 2022-11-25T05:01:58-05:00 20221125052215ES 1 0 27.35147 -82.47196 x 2022-11-25T05:22:15-05:00 20221125052355ES 2 0 27.14018 -82.41795 x 2022-11-25T05:23:55-05:00 20221125052449ES 2 0 27.15536 -82.42394 x 2022-11-25T05:24:49-05:00 20221125052519ES 1 0 27.35149 -82.47195 x 2022-11-25T05:25:19-05:00 20221125052539ES 2 0 27.16463 -82.431 x 2022-11-25T05:25:39-05:00 20221125054932ES 3 0 27.2988 -82.44879 x 2022-11-25T05:49:32-05:00 20221125055059ES 3 0 27.27847 -82.44901 x 2022-11-25T05:50:59-05:00 20221125055519ES 4 0 27.31564 -82.26689 x 2022-11-25T05:55:19-05:00 20221125060022ES 4 0 27.31564 -82.26692 x 2022-11-25T06:00:22-05:00 20221125060106ES 6 0 27.18927 -82.43754 x 2022-11-25T06:01:06-05:00 20221125062409ES 2 0 27.14827 -82.41893 x 2022-11-25T06:24:09-05:00 20221125064901ES 3 0 27.29893 -82.4458 x 2022-11-25T06:49:01-05:00 20221125065650ES 4 0 27.31566 -82.26689 x 2022-11-25T06:56:50-05:00 20221125065821ES 4 0 27.31564 -82.26691 x 2022-11-25T06:58:21-05:00 20221125072115ES 1 0 27.35146 -82.47197 x 2022-11-25T07:21:15-05:00 Sub Getdata() Dim DriverRange As Range Worksheets(1).Select Set DriverRange = Worksheets(1).Range("B1", Range("B" & Rows.Count).End(xlUp)) For Each cell In DriverRange If cell.Value = Worksheets(1).Range("C21") Then lr = Worksheets(2).Range("A" & Rows.Count).End(xlUp).Row cell.EntireRow.Copy Destination:=Worksheets(2).Range("A" & lr + 1) End If Next cell End Sub output desired when searching for car 6 trkpt lat="27.19483" lon="-82.43863" time/2022-11-25T05:01:22-05:00/ trkpt lat="27.20587" lon="-82.44154" time/2022-11-25T05:01:58-05:00/ trkpt lat="27.18927" lon="-82.43754" time/2022-11-25T06:01:06-05:00/ I have tried several versions of the textjoin worksheet function that would replace the cell.entirerow.copy line of code but it does not grab the correct rows that match up with the car I want. I feel I am headed in the right direction but am missing something. A: Please, try the next code. It should be very fast, using arrays and dropping the processing result at once. 
I cannot see the column headers, but the code assumes that the data to be processed starts from "A:A" column and ends to "G:G" one, second row: Sub Getdata() Dim wsSource As Worksheet, wsDest As Worksheet, lastR As Long Dim arrS, arrD, i As Long, k As Long Const carNo As Long = 6 'place here the car number Set wsSource = Worksheets(1) Set wsDest = Worksheets(2) lastR = wsSource.Range("A" & wsSource.rows.count).End(xlUp).row arrS = wsSource.Range("A2:G" & lastR).Value 'place the range in an array for faster iteration/processing ReDim arrD(1 To UBound(arrS), 1 To 3) 'redim the destination array as its maximum possible number of rows For i = 1 To UBound(arrS) If arrS(i, 2) = carNo Then k = k + 1 arrD(k, 1) = "trkpt lat=""" & arrS(i, 4) & """" arrD(k, 2) = "lon=""" & arrS(i, 5) & """" arrD(k, 3) = "time/" & arrS(i, 7) & "/" End If Next i If k > 0 Then wsDest.Range("A2").Resize(k, 3).Value = arrD End If MsgBox "Ready...": wsDest.Activate End Sub Please, send some feedback after testing it.
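One small adaptation, since the question reads the car number from cell C21 rather than hard-coding it: replace the Const line in the answer's code with a variable read from the sheet, for example

Dim carNo As Long
carNo = wsSource.Range("C21").Value ' car number entered on sheet1, as in the original macro

The read has to happen after Set wsSource = Worksheets(1), because a Const cannot take a runtime value in VBA.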
Concatenate or Textjoin for table rows based on criteria
I have a GPS track file that I am importing into Excel (multiple cars in same file) and I want to manipulate and export the data so that it conforms to a gpx file type for a single chosen car. Some of the columns are not needed from the original file and some text needs to be added between the existing columns. I have built a macro that will do half of what I want but it copies the entire row for that car instead of getting the data in the form I need. In excel I can use the textjoin formula to achieve the goal I have but I want it to be a macro and that's where I am having the problem. Below is some sample data and my macro. I would enter the car number I am looking for into C21 on sheet1 and only rows that are for that car# (column b) would be moved to sheet2. The format I need is "trkpt lat="insert lat" lon="insert lon" time/insert time/" and this is where I would concat or textjoin specific portions of the original row onto sheet2 but in the above mentioned format. Here is an example of the data and my macro that is only working to copy the entire row Date/Time Car# Junk Lat Lon Junk2 Converted Date/Time 20221125050122ES 6 0 27.19483 -82.43863 x 2022-11-25T05:01:22-05:00 20221125050158ES 6 0 27.20587 -82.44154 x 2022-11-25T05:01:58-05:00 20221125052215ES 1 0 27.35147 -82.47196 x 2022-11-25T05:22:15-05:00 20221125052355ES 2 0 27.14018 -82.41795 x 2022-11-25T05:23:55-05:00 20221125052449ES 2 0 27.15536 -82.42394 x 2022-11-25T05:24:49-05:00 20221125052519ES 1 0 27.35149 -82.47195 x 2022-11-25T05:25:19-05:00 20221125052539ES 2 0 27.16463 -82.431 x 2022-11-25T05:25:39-05:00 20221125054932ES 3 0 27.2988 -82.44879 x 2022-11-25T05:49:32-05:00 20221125055059ES 3 0 27.27847 -82.44901 x 2022-11-25T05:50:59-05:00 20221125055519ES 4 0 27.31564 -82.26689 x 2022-11-25T05:55:19-05:00 20221125060022ES 4 0 27.31564 -82.26692 x 2022-11-25T06:00:22-05:00 20221125060106ES 6 0 27.18927 -82.43754 x 2022-11-25T06:01:06-05:00 20221125062409ES 2 0 27.14827 -82.41893 x 2022-11-25T06:24:09-05:00 20221125064901ES 3 0 27.29893 -82.4458 x 2022-11-25T06:49:01-05:00 20221125065650ES 4 0 27.31566 -82.26689 x 2022-11-25T06:56:50-05:00 20221125065821ES 4 0 27.31564 -82.26691 x 2022-11-25T06:58:21-05:00 20221125072115ES 1 0 27.35146 -82.47197 x 2022-11-25T07:21:15-05:00 Sub Getdata() Dim DriverRange As Range Worksheets(1).Select Set DriverRange = Worksheets(1).Range("B1", Range("B" & Rows.Count).End(xlUp)) For Each cell In DriverRange If cell.Value = Worksheets(1).Range("C21") Then lr = Worksheets(2).Range("A" & Rows.Count).End(xlUp).Row cell.EntireRow.Copy Destination:=Worksheets(2).Range("A" & lr + 1) End If Next cell End Sub output desired when searching for car 6 trkpt lat="27.19483" lon="-82.43863" time/2022-11-25T05:01:22-05:00/ trkpt lat="27.20587" lon="-82.44154" time/2022-11-25T05:01:58-05:00/ trkpt lat="27.18927" lon="-82.43754" time/2022-11-25T06:01:06-05:00/ I have tried several versions of the textjoin worksheet function that would replace the cell.entirerow.copy line of code but it does not grab the correct rows that match up with the car I want. I feel I am headed in the right direction but am missing something.
[ "Please, try the next code. It should be very fast, using arrays and dropping the processing result at once. I cannot see the column headers, but the code assumes that the data to be processed starts from \"A:A\" column and ends to \"G:G\" one, second row:\nSub Getdata()\n Dim wsSource As Worksheet, wsDest As Worksheet, lastR As Long\n Dim arrS, arrD, i As Long, k As Long\n Const carNo As Long = 6 'place here the car number\n \n Set wsSource = Worksheets(1)\n Set wsDest = Worksheets(2)\n \n lastR = wsSource.Range(\"A\" & wsSource.rows.count).End(xlUp).row\n arrS = wsSource.Range(\"A2:G\" & lastR).Value 'place the range in an array for faster iteration/processing\n \n ReDim arrD(1 To UBound(arrS), 1 To 3) 'redim the destination array as its maximum possible number of rows\n For i = 1 To UBound(arrS)\n If arrS(i, 2) = carNo Then\n k = k + 1\n arrD(k, 1) = \"trkpt lat=\"\"\" & arrS(i, 4) & \"\"\"\"\n arrD(k, 2) = \"lon=\"\"\" & arrS(i, 5) & \"\"\"\"\n arrD(k, 3) = \"time/\" & arrS(i, 7) & \"/\"\n End If\n Next i\n If k > 0 Then\n wsDest.Range(\"A2\").Resize(k, 3).Value = arrD\n End If\n \n MsgBox \"Ready...\": wsDest.Activate\nEnd Sub\n\nPlease, send some feedback after testing it.\n" ]
[ 0 ]
[]
[]
[ "concatenation", "excel", "excel_365", "textjoin", "vba" ]
stackoverflow_0074678609_concatenation_excel_excel_365_textjoin_vba.txt
Q: How to match group of lines between two matches? So, I have a result of a tool that goes like this: >Cluster 1 0 1967nt, >001126F:363892-365859... * 1 1676nt, >Aag2_family_100_all/000015F:2300484-2302160... at -/100.00% 2 1544nt, >Aag2_family_100_all/000453F:1675071-1676615... at +/100.00% 3 1208nt, >Aag2_family_100_all/000453F:1675260-1676468... at +/100.00% 4 1676nt, >Aag2_family_100_all/001252F:481349-483025... at -/100.00% 5 1676nt, >Aag2_family_100_all/001305F:490050-491726... at -/100.00% 6 1676nt, >Aag2_family_100_all/001828F:112497-114173... at -/100.00% 7 206nt, >Aag2_family_100_all/002989F:21276-21482... at +/100.00% >Cluster 2 0 1902nt, >000723F:1251286-1253188... * 1 1863nt, >Aag2_family_100_all/000723F:1251295-1253158... at +/100.00% 2 800nt, >Aag2_family_100_all/000723F:1252107-1252907... at +/100.00% And I'm trying to match all the lines in groups, like: 0 1967nt, >001126F:363892-365859... * 1 1676nt, >Aag2_family_100_all/000015F:2300484-2302160... at -/100.00% 2 1544nt, >Aag2_family_100_all/000453F:1675071-1676615... at +/100.00% 3 1208nt, >Aag2_family_100_all/000453F:1675260-1676468... at +/100.00% 4 1676nt, >Aag2_family_100_all/001252F:481349-483025... at -/100.00% 5 1676nt, >Aag2_family_100_all/001305F:490050-491726... at -/100.00% 6 1676nt, >Aag2_family_100_all/001828F:112497-114173... at -/100.00% 7 206nt, >Aag2_family_100_all/002989F:21276-21482... at +/100.00% And deposit in an variable, and relate the line starting with 0 with the others line like: 000723F:1251286-1253188 = [">Aag2_family_100_all/000723F:1251295-1253158, >Aag2_family_100_all/000723F:1252107-1252907"] I'm trying on python using the re library. I tried working on with line by line, but I think my logic is wrong, really rookie on regex import re result = [] with open("cluster_whit_eefinder.clstr", "r") as cluster: for line in cluster: if re.search(r'>Cluster.*', line): print(line) A: I think you missed .read() for cluster.
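The missing .read() only matters if the whole file is matched in one go; iterating line by line as in the question also works, it just needs a little state to remember which cluster is currently open. A sketch of that approach, keyed on the fact that the representative line ends with "*" and building the dictionary layout the question asks for:

import re

clusters = {}  # representative sequence -> list of member sequences
members = []

with open("cluster_whit_eefinder.clstr") as fh:
    for line in fh:
        if line.startswith(">Cluster"):
            members = []  # a new cluster starts; open a fresh member list
            continue
        m = re.search(r">(\S+?)\.\.\.", line)
        if not m:
            continue
        if line.rstrip().endswith("*"):
            clusters[m.group(1)] = members  # the representative entry
        else:
            members.append(m.group(1))

print(clusters)

Because clusters[rep] and members reference the same list object, members that appear after the representative line are still attached to the right key.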
How to match group of lines between two matches?
So, I have a result of a tool that goes like this: >Cluster 1 0 1967nt, >001126F:363892-365859... * 1 1676nt, >Aag2_family_100_all/000015F:2300484-2302160... at -/100.00% 2 1544nt, >Aag2_family_100_all/000453F:1675071-1676615... at +/100.00% 3 1208nt, >Aag2_family_100_all/000453F:1675260-1676468... at +/100.00% 4 1676nt, >Aag2_family_100_all/001252F:481349-483025... at -/100.00% 5 1676nt, >Aag2_family_100_all/001305F:490050-491726... at -/100.00% 6 1676nt, >Aag2_family_100_all/001828F:112497-114173... at -/100.00% 7 206nt, >Aag2_family_100_all/002989F:21276-21482... at +/100.00% >Cluster 2 0 1902nt, >000723F:1251286-1253188... * 1 1863nt, >Aag2_family_100_all/000723F:1251295-1253158... at +/100.00% 2 800nt, >Aag2_family_100_all/000723F:1252107-1252907... at +/100.00% And I'm trying to match all the lines in groups, like: 0 1967nt, >001126F:363892-365859... * 1 1676nt, >Aag2_family_100_all/000015F:2300484-2302160... at -/100.00% 2 1544nt, >Aag2_family_100_all/000453F:1675071-1676615... at +/100.00% 3 1208nt, >Aag2_family_100_all/000453F:1675260-1676468... at +/100.00% 4 1676nt, >Aag2_family_100_all/001252F:481349-483025... at -/100.00% 5 1676nt, >Aag2_family_100_all/001305F:490050-491726... at -/100.00% 6 1676nt, >Aag2_family_100_all/001828F:112497-114173... at -/100.00% 7 206nt, >Aag2_family_100_all/002989F:21276-21482... at +/100.00% And deposit in an variable, and relate the line starting with 0 with the others line like: 000723F:1251286-1253188 = [">Aag2_family_100_all/000723F:1251295-1253158, >Aag2_family_100_all/000723F:1252107-1252907"] I'm trying on python using the re library. I tried working on with line by line, but I think my logic is wrong, really rookie on regex import re result = [] with open("cluster_whit_eefinder.clstr", "r") as cluster: for line in cluster: if re.search(r'>Cluster.*', line): print(line)
[ "I think you missed .read() for cluster.\n" ]
[ 0 ]
[]
[]
[ "python", "python_re" ]
stackoverflow_0074680389_python_python_re.txt
Q: "Code" command not found in VS Code on Mac I have just begun to code so everything is new to me. I have started using VS Code with Python 3.11 and macOS Monterey 12.6. However, very soon I discovered that when I enter the "code" command at the terminal command line, it comes back "command not found." I have to go to the command palette, uninstall the code command and then reinstall it for it to work. The problem with this is that it resets every time I close VS Code so I have to go back and repeat the whole process of uninstalling and reinstalling the code command. I have read some other posts on stackoverflow that seem to address this issue but my knowledge of programming is extremely rudimentary and I was unable to understand the answers provided. Any help would be much appreciated. Thank you. A: Hey I just had the same issue because also new. But I have found this video that helped. My issue was that I didnt have VS in my applications folder. https://www.youtube.com/watch?v=kRhdm4K9mLY
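For completeness, the usual cause on macOS is running VS Code straight from the Downloads folder: the palette's "Shell Command: Install 'code' command in PATH" creates a symlink to the app's current location, which breaks once the app moves. Moving Visual Studio Code into /Applications and re-running that palette command once normally makes it stick. Alternatively, the bin folder can be added to the shell profile by hand (zsh is the macOS default):

export PATH="$PATH:/Applications/Visual Studio Code.app/Contents/Resources/app/bin"

Putting that line in ~/.zshrc and opening a new terminal makes the code command available permanently.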
"Code" command not found in VS Code on Mac
I have just begun to code so everything is new to me. I have started using VS Code with Python 3.11 and macOS Monterey 12.6. However, very soon I discovered that when I enter the "code" command at the terminal command line, it comes back "command not found." I have to go to the command palette, uninstall the code command and then reinstall it for it to work. The problem with this is that it resets every time I close VS Code so I have to go back and repeat the whole process of uninstalling and reinstalling the code command. I have read some other posts on stackoverflow that seem to address this issue but my knowledge of programming is extremely rudimentary and I was unable to understand the answers provided. Any help would be much appreciated. Thank you.
[ "Hey I just had the same issue because also new. But I have found this video that helped. My issue was that I didnt have VS in my applications folder.\nhttps://www.youtube.com/watch?v=kRhdm4K9mLY\n" ]
[ 0 ]
[]
[]
[ "visual_studio_code" ]
stackoverflow_0074634438_visual_studio_code.txt
Q: C++, Undefined reference to function templated with lambda function I have 4 files: websocket.hpp #include <memory> #include <functional> namespace boost { namespace asio { class io_context; } // ns asio } // ns boost namespace izycoinscppapi { namespace feedhandler { struct websockets { websockets(const websockets &) = delete; websockets& operator= (const websockets &) = delete; websockets(websockets &&) noexcept = default; websockets& operator= (websockets &&) noexcept = default; using on_message_received_cb = std::function<void(const char *channel, const char *ptr, std::size_t size)>; websockets( boost::asio::io_context &ioctx ,std::string host ,std::string port ,on_message_received_cb cb = {} ); ~websockets(); using handle = void *; void unsubscribe(const handle &h); void async_unsubscribe(const handle &h); void unsubscribe_all(); void async_unsubscribe_all(); template<typename F> handle w_start_subscription(std::string target, std::string payload, F cb); private: struct impl; std::unique_ptr<impl> pimpl; }; } // ns feedhandler } // ns izycoinscppapi #endif websocket.cpp websockets::websockets( boost::asio::io_context &ioctx ,std::string host ,std::string port ,on_message_received_cb cb ) :pimpl{std::make_unique<impl>(ioctx, std::move(host), std::move(port), std::move(cb))} {} websockets::~websockets() {} template<typename F> websockets::handle websockets::w_start_subscription(std::string target, std::string payload, F cb) { return pimpl->start_subscription(target, payload, std::move(cb)); } wsapi.hpp #ifndef __izycoinscppapi__exchanges__binance__wsapi_hpp #define __izycoinscppapi__exchanges__binance__wsapi_hpp #include <memory> #include <functional> #include <vector> #include <izycoinscppapi/feedhandler/websocket.hpp> namespace boost { namespace asio { class io_context; } // ns asio } // ns boost namespace izycoinscppapi { namespace feedhandler { namespace ws { struct fh_price_t; } // ns ws struct websockets; } // ns feedhandler namespace exchanges { namespace binance { namespace errors { struct error_t; } namespace ws { struct book_ticker_t; struct wsapi { wsapi( boost::asio::io_context &ioctx ,std::string host ,std::string port ,feedhandler::websockets::on_message_received_cb cb = {} ); ~wsapi(); // https://github.com/binance/binance-spot-api-docs/blob/master/web-socket-streams.md#individual-symbol-book-ticker-streams using on_book_received_cb = std::function<bool(const char *fl, int ec, std::string errmsg, book_ticker_t receivedmsg, feedhandler::ws::fh_price_t returnmsg, binance::errors::error_t err)>; feedhandler::websockets::handle books(std::vector<std::string> pairs, on_book_received_cb cb); void async_unsubscribe(const feedhandler::websockets::handle &h); void async_unsubscribe_all(); private: std::unique_ptr<feedhandler::websockets> pwebsockets; }; } // ns ws } // ns binance } // ns exchanges } // ns izycoinscppapi #endif // wsapi.cpp feedhandler::websockets::handle wsapi::books(std::vector<std::string> pairs, on_book_received_cb cb) { std::string target = build_target(pairs, "bookTicker"); std::string payload{""}; return pwebsockets->w_start_subscription<on_book_received_cb>(target, payload, std::move(cb)); } As you can see, the w_start_subcription function exists in websocket.hpp and websocket.cpp, it is called in wsapi.cpp via the books function call Here is my main.cpp file auto books_handler = wsapi.books ( selected_pairs, [&timer1, &output_file] (const char *fl, int ec, std::string errmsg, auto received_msg, auto msg, auto err) {} ); When I compile with CMake, I get 
this error undefined reference to « void* izycoinscppapi::feedhandler::websockets::w_start_subscription<std::function<bool (char const*, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, izycoinscppapi::exchanges::binance::ws::book_ticker_t, izycoinscppapi::feedhandler::ws::fh_price_t, izycoinscppapi::exchanges::binance::errors::error_t)> >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::function<bool (char const*, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, izycoinscppapi::exchanges::binance::ws::book_ticker_t, izycoinscppapi::feedhandler::ws::fh_price_t, izycoinscppapi::exchanges::binance::errors::error_t)>) ». But the function exists. I have no idea why I get this error message. A: You need to add a forward declaration of the websockets class before the declaration of the wsapi class so that the compiler knows that the websockets class exists when you try to use it as a type in the declaration of the pwebsockets member variable. You can do this by adding a line like the following before the declaration of the wsapi class: namespace izycoinscppapi { namespace feedhandler { class websockets; } // ns feedhandler namespace exchanges { namespace binance { namespace ws { This tells the compiler that there is a class called websockets within the izycoinscppapi::feedhandler namespace, without giving the details of what that class looks like. This is enough for the compiler to be able to understand the declaration of the pwebsockets member variable.
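A caution on the suggested fix: a forward declaration only affects compilation, while "undefined reference" is a linker error, so adding class websockets; will not resolve it. The actual cause here is that w_start_subscription is a member function template defined only in websocket.cpp; when wsapi.cpp instantiates it with on_book_received_cb, no definition is visible in that translation unit and no instantiation is ever emitted. Because the class uses the pimpl idiom (impl is incomplete in the header), the template body cannot simply move into websocket.hpp; the usual alternative is an explicit instantiation at the bottom of websocket.cpp for each callback type actually used. A sketch for the one type from wsapi.hpp (the include path is an assumption about the project layout):

// at the end of websocket.cpp, after the template definition
#include <izycoinscppapi/exchanges/binance/wsapi.hpp>

template izycoinscppapi::feedhandler::websockets::handle
izycoinscppapi::feedhandler::websockets::w_start_subscription<
    izycoinscppapi::exchanges::binance::ws::wsapi::on_book_received_cb>(
        std::string, std::string,
        izycoinscppapi::exchanges::binance::ws::wsapi::on_book_received_cb);

Another option that avoids coupling the low-level module to wsapi entirely is to make w_start_subscription a non-template taking a std::function parameter, since every caller already passes one.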
C++, Undefined reference to function templated with lambda function
I have 4 files: websocket.hpp #include <memory> #include <functional> namespace boost { namespace asio { class io_context; } // ns asio } // ns boost namespace izycoinscppapi { namespace feedhandler { struct websockets { websockets(const websockets &) = delete; websockets& operator= (const websockets &) = delete; websockets(websockets &&) noexcept = default; websockets& operator= (websockets &&) noexcept = default; using on_message_received_cb = std::function<void(const char *channel, const char *ptr, std::size_t size)>; websockets( boost::asio::io_context &ioctx ,std::string host ,std::string port ,on_message_received_cb cb = {} ); ~websockets(); using handle = void *; void unsubscribe(const handle &h); void async_unsubscribe(const handle &h); void unsubscribe_all(); void async_unsubscribe_all(); template<typename F> handle w_start_subscription(std::string target, std::string payload, F cb); private: struct impl; std::unique_ptr<impl> pimpl; }; } // ns feedhandler } // ns izycoinscppapi #endif websocket.cpp websockets::websockets( boost::asio::io_context &ioctx ,std::string host ,std::string port ,on_message_received_cb cb ) :pimpl{std::make_unique<impl>(ioctx, std::move(host), std::move(port), std::move(cb))} {} websockets::~websockets() {} template<typename F> websockets::handle websockets::w_start_subscription(std::string target, std::string payload, F cb) { return pimpl->start_subscription(target, payload, std::move(cb)); } wsapi.hpp #ifndef __izycoinscppapi__exchanges__binance__wsapi_hpp #define __izycoinscppapi__exchanges__binance__wsapi_hpp #include <memory> #include <functional> #include <vector> #include <izycoinscppapi/feedhandler/websocket.hpp> namespace boost { namespace asio { class io_context; } // ns asio } // ns boost namespace izycoinscppapi { namespace feedhandler { namespace ws { struct fh_price_t; } // ns ws struct websockets; } // ns feedhandler namespace exchanges { namespace binance { namespace errors { struct error_t; } namespace ws { struct book_ticker_t; struct wsapi { wsapi( boost::asio::io_context &ioctx ,std::string host ,std::string port ,feedhandler::websockets::on_message_received_cb cb = {} ); ~wsapi(); // https://github.com/binance/binance-spot-api-docs/blob/master/web-socket-streams.md#individual-symbol-book-ticker-streams using on_book_received_cb = std::function<bool(const char *fl, int ec, std::string errmsg, book_ticker_t receivedmsg, feedhandler::ws::fh_price_t returnmsg, binance::errors::error_t err)>; feedhandler::websockets::handle books(std::vector<std::string> pairs, on_book_received_cb cb); void async_unsubscribe(const feedhandler::websockets::handle &h); void async_unsubscribe_all(); private: std::unique_ptr<feedhandler::websockets> pwebsockets; }; } // ns ws } // ns binance } // ns exchanges } // ns izycoinscppapi #endif // wsapi.cpp feedhandler::websockets::handle wsapi::books(std::vector<std::string> pairs, on_book_received_cb cb) { std::string target = build_target(pairs, "bookTicker"); std::string payload{""}; return pwebsockets->w_start_subscription<on_book_received_cb>(target, payload, std::move(cb)); } As you can see, the w_start_subcription function exists in websocket.hpp and websocket.cpp, it is called in wsapi.cpp via the books function call Here is my main.cpp file auto books_handler = wsapi.books ( selected_pairs, [&timer1, &output_file] (const char *fl, int ec, std::string errmsg, auto received_msg, auto msg, auto err) {} ); When I compile with CMake, I get this error undefined reference to « void* 
izycoinscppapi::feedhandler::websockets::w_start_subscription<std::function<bool (char const*, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, izycoinscppapi::exchanges::binance::ws::book_ticker_t, izycoinscppapi::feedhandler::ws::fh_price_t, izycoinscppapi::exchanges::binance::errors::error_t)> >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::function<bool (char const*, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, izycoinscppapi::exchanges::binance::ws::book_ticker_t, izycoinscppapi::feedhandler::ws::fh_price_t, izycoinscppapi::exchanges::binance::errors::error_t)>) ». But the function exists. I have no idea why I get this error message.
[ "You need to add a forward declaration of the websockets class before the declaration of the wsapi class so that the compiler knows that the websockets class exists when you try to use it as a type in the declaration of the pwebsockets member variable.\nYou can do this by adding a line like the following before the declaration of the wsapi class:\nnamespace izycoinscppapi {\nnamespace feedhandler {\nclass websockets;\n} // ns feedhandler\nnamespace exchanges {\nnamespace binance {\nnamespace ws {\n\nThis tells the compiler that there is a class called websockets within the izycoinscppapi::feedhandler namespace, without giving the details of what that class looks like. This is enough for the compiler to be able to understand the declaration of the pwebsockets member variable.\n" ]
[ 0 ]
[]
[]
[ "c++", "cmake", "g++", "lambda", "templates" ]
stackoverflow_0074680117_c++_cmake_g++_lambda_templates.txt
Q: Splitting a text file by line number using loops in Linux I want to divide a text file according to its number of lines. I did a little searching and saw some options like split -l and sed -n, but I want to know if it is possible to do it using loops. I tried split and sed. A: Using awk: $ ls $ awk -F: '{print $0 > $1".txt"}' /etc/passwd $ ls -1 _apt.txt _flatpak.txt _rpc.txt avahi-autoipd.txt You can access the line number with the NR predefined variable. With a while loop: while read -r line; do echo "$((++c)) $line" > "${c}.txt" done < file
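Note that the awk one-liner above groups lines by a field value rather than by count, and the while loop writes every line to its own numbered file, so neither splits into fixed-size chunks yet. A pure-loop sketch that divides file.txt into part1.txt, part2.txt, ... of at most n lines each (the file name and chunk size are placeholders):

#!/usr/bin/env bash
n=100   # lines per output file
c=0
part=0
while IFS= read -r line; do
    if (( c % n == 0 )); then
        part=$(( part + 1 ))   # start the next chunk
    fi
    printf '%s\n' "$line" >> "part${part}.txt"
    c=$(( c + 1 ))
done < file.txt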
Splitting a text file by line number using loops in Linux
I want to divide a text file according to its number of lines. I did a little searching and saw some options like split -l and sed -n, but I want to know if it is possible to do it using loops. I tried split and sed.
[ "Using awk:\n$ ls\n$ awk -F: '{print $0 > $1\".txt\"}' /etc/passwd\n$ ls -1\n_apt.txt\n_flatpak.txt\n_rpc.txt\navahi-autoipd.txt\n\nYou can access the line number with the NR predefined variable.\nWith a while loop:\nwhile read -r line; do\n echo \"$((++c)) $line\" > \"${c}.txt\"\ndone < file \n\n" ]
[ 0 ]
[]
[]
[ "bash" ]
stackoverflow_0074680314_bash.txt
Q: R_lme4_mixed-effects_modelling_SPSSexample_replication_cross-platform Hello stackoverflow community! I am new to mixed-effects modelling (MEM) or mixed-models. In order to gain a better understanding of MEM, I decided to replicate two examples in r (lme4 package) from the textbook "Experimental Design and Analysis" by Dr. Howard J. Seltman. In the textbook, the author used SPSS to solve the two examples and included the relevant output tables. Model 1, referred to as [tag:video game example], models "the linear relationship between trial and score with separate intercepts and slopes for each age group, and including a random per-subject intercept." The data for the video game example is available at the link below: https://www.stat.cmu.edu/~hseltman/309/Book/data/MMvideo.txt The model 1 output tables are found on the page no. 370/382 (actual book/pdf book) of the textbook which is also linked below (or see image): https://www.stat.cmu.edu/~hseltman/309/Book/Book.pdf My model 1 (video game example) is: lmer(score ~ trial + (1|id) + (1+agegrp|agegrp), data=data) where, trial is a fixed-effect. (1|id) is a random per-subject intercept. (1+agegrp|agegrp) is a random slope and random intercept for each age group. The model 1 returns an error: boundary (singular) fit: see help('isSingular') Model 2, referred to as [tag:classroom example], includes "main effects for stdTest, grade level, and treatment group" and "random effect (intercept) to account for school to school differences that induces correlation among scores for students within a school." Link for the classroom example data is included below: https://www.stat.cmu.edu/~hseltman/309/Book/data/schools.txt The model 2 output tables are found on the page no. 377/391 (actual book/pdf book) of the textbook which is also linked below (or see image): https://www.stat.cmu.edu/~hseltman/309/Book/Book.pdf My model 2 (classroom example) is: lmer(score ~ stdTest + grade + treatment + (1|student) + (1|student:classroom), data=data) where, stdTest, grade level, and treatment group are the fixed-effect. (1|student) is a random effect (intercept). (1|student:classroom) for students nested within a school. The model 2 returns an error: number of levels of each grouping factor must be < number of observations (problems: student, classroom:student) Could someone please help me model these two examples correctly to produce the desired outputs? Thank you, in advance, for your help. A: I think you had the models specified the wrong way. This looks like the way to replicate those two models: library(lme4) library(lmerTest) library(dplyr) dat <- rio::import("https://www.stat.cmu.edu/~hseltman/309/Book/data/MMvideo.txt") dat <- dat %>% mutate(agegrp = factor(agegrp, levels=c("(40,50]", "(20,30]", "(30,40]"))) m1 <- lmer(score ~ agegrp*trial + (1|id) , data=dat) summary(m1) #> Linear mixed model fit by REML. t-tests use Satterthwaite's method [ #> lmerModLmerTest] #> Formula: score ~ agegrp * trial + (1 | id) #> Data: dat #> #> REML criterion at convergence: 708.4 #> #> Scaled residuals: #> Min 1Q Median 3Q Max #> -2.39575 -0.54403 0.07855 0.65601 2.02271 #> #> Random effects: #> Groups Name Variance Std.Dev. #> id (Intercept) 6.457 2.541 #> Residual 4.633 2.152 #> Number of obs: 150, groups: id, 28 #> #> Fixed effects: #> Estimate Std. 
Error df t value Pr(>|t|) #> (Intercept) 14.0223 1.1097 55.4281 12.637 < 2e-16 *** #> agegrp(20,30] -7.2586 1.5704 72.9807 -4.622 1.60e-05 *** #> agegrp(30,40] -3.4887 1.4510 64.2373 -2.404 0.0191 * #> trial 3.3150 0.2152 118.8662 15.401 < 2e-16 *** #> agegrp(20,30]:trial 3.7988 0.3229 118.8662 11.766 < 2e-16 *** #> agegrp(30,40]:trial 2.1433 0.2914 118.8662 7.354 2.68e-11 *** #> --- #> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 #> #> Correlation of Fixed Effects: #> (Intr) ag(20,30] ag(30,40] trial a(20,30]: #> aggr(20,30] -0.707 #> aggr(30,40] -0.765 0.617 #> trial -0.582 0.411 0.445 #> agg(20,30]: 0.388 -0.617 -0.297 -0.667 #> agg(30,40]: 0.430 -0.304 -0.603 -0.739 0.492 dat2 <- rio::import("https://www.stat.cmu.edu/~hseltman/309/Book/data/schools.txt") dat2 <- dat2 %>% mutate(grade =factor(grade, levels=c(5,3)), treatment = factor(treatment, levels=c(1,0))) m2 <- lmer(score ~ grade + treatment + stdTest + (1|classroom), data=dat2) summary(m2) #> Linear mixed model fit by REML. t-tests use Satterthwaite's method [ #> lmerModLmerTest] #> Formula: score ~ grade + treatment + stdTest + (1 | classroom) #> Data: dat2 #> #> REML criterion at convergence: 3023.7 #> #> Scaled residuals: #> Min 1Q Median 3Q Max #> -3.02847 -0.68306 0.03838 0.64510 2.94562 #> #> Random effects: #> Groups Name Variance Std.Dev. #> classroom (Intercept) 10.05 3.170 #> Residual 25.87 5.086 #> Number of obs: 490, groups: classroom, 20 #> #> Fixed effects: #> Estimate Std. Error df t value Pr(>|t|) #> (Intercept) -23.0943 6.8025 15.9160 -3.395 0.003722 ** #> grade3 -5.9424 1.6566 16.0861 -3.587 0.002447 ** #> treatment0 1.7941 1.6351 16.0676 1.097 0.288698 #> stdTest 0.4438 0.0879 15.8672 5.049 0.000122 *** #> --- #> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 #> #> Correlation of Fixed Effects: #> (Intr) grade3 trtmn0 #> grade3 -0.246 #> treatment0 0.007 -0.410 #> stdTest -0.985 0.179 -0.079 Created on 2022-12-04 by the reprex package (v2.0.1) One thing to note is that the default behaviour in R is to have the first level of the factor be the reference. In the book examples, the last level was the reference, so you have to make that explicit in the data managing as above.
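For the record, the two original error messages have straightforward causes. In model 1, (1 + agegrp | agegrp) asks for random slopes of a factor across itself with only three levels, so the variance components cannot all be identified and lmer reports a singular fit; the book's "separate intercepts and slopes per age group" are fixed effects, i.e. the agegrp*trial interaction used above. In model 2, each student contributes a single score, so a (1|student) intercept has as many levels as observations and is indistinguishable from the residual, which is exactly what the "number of levels of each grouping factor must be < number of observations" message is saying; the school-to-school random intercept corresponds to (1|classroom) in this data file. The two working formulas are therefore:

score ~ agegrp * trial + (1 | id)
score ~ grade + treatment + stdTest + (1 | classroom)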
R_lme4_mixed-effects_modelling_SPSSexample_replication_cross-platform
Hello stackoverflow community! I am new to mixed-effects modelling (MEM) or mixed-models. In order to gain a better understanding of MEM, I decided to replicate two examples in r (lme4 package) from the textbook "Experimental Design and Analysis" by Dr. Howard J. Seltman. In the textbook, the author used SPSS to solve the two examples and included the relevant output tables. Model 1, referred to as [tag:video game example], models "the linear relationship between trial and score with separate intercepts and slopes for each age group, and including a random per-subject intercept." The data for the video game example is available at the link below: https://www.stat.cmu.edu/~hseltman/309/Book/data/MMvideo.txt The model 1 output tables are found on the page no. 370/382 (actual book/pdf book) of the textbook which is also linked below (or see image): https://www.stat.cmu.edu/~hseltman/309/Book/Book.pdf My model 1 (video game example) is: lmer(score ~ trial + (1|id) + (1+agegrp|agegrp), data=data) where, trial is a fixed-effect. (1|id) is a random per-subject intercept. (1+agegrp|agegrp) is a random slope and random intercept for each age group. The model 1 returns an error: boundary (singular) fit: see help('isSingular') Model 2, referred to as [tag:classroom example], includes "main effects for stdTest, grade level, and treatment group" and "random effect (intercept) to account for school to school differences that induces correlation among scores for students within a school." Link for the classroom example data is included below: https://www.stat.cmu.edu/~hseltman/309/Book/data/schools.txt The model 2 output tables are found on the page no. 377/391 (actual book/pdf book) of the textbook which is also linked below (or see image): https://www.stat.cmu.edu/~hseltman/309/Book/Book.pdf My model 2 (classroom example) is: lmer(score ~ stdTest + grade + treatment + (1|student) + (1|student:classroom), data=data) where, stdTest, grade level, and treatment group are the fixed-effect. (1|student) is a random effect (intercept). (1|student:classroom) for students nested within a school. The model 2 returns an error: number of levels of each grouping factor must be < number of observations (problems: student, classroom:student) Could someone please help me model these two examples correctly to produce the desired outputs? Thank you, in advance, for your help.
[ "I think you had the models specified the wrong way. This looks like the way to replicate those two models:\nlibrary(lme4)\nlibrary(lmerTest)\nlibrary(dplyr)\ndat <- rio::import(\"https://www.stat.cmu.edu/~hseltman/309/Book/data/MMvideo.txt\")\ndat <- dat %>% \n mutate(agegrp = factor(agegrp, levels=c(\"(40,50]\", \"(20,30]\", \"(30,40]\")))\n\nm1 <- lmer(score ~ agegrp*trial + (1|id) , data=dat)\nsummary(m1)\n#> Linear mixed model fit by REML. t-tests use Satterthwaite's method [\n#> lmerModLmerTest]\n#> Formula: score ~ agegrp * trial + (1 | id)\n#> Data: dat\n#> \n#> REML criterion at convergence: 708.4\n#> \n#> Scaled residuals: \n#> Min 1Q Median 3Q Max \n#> -2.39575 -0.54403 0.07855 0.65601 2.02271 \n#> \n#> Random effects:\n#> Groups Name Variance Std.Dev.\n#> id (Intercept) 6.457 2.541 \n#> Residual 4.633 2.152 \n#> Number of obs: 150, groups: id, 28\n#> \n#> Fixed effects:\n#> Estimate Std. Error df t value Pr(>|t|) \n#> (Intercept) 14.0223 1.1097 55.4281 12.637 < 2e-16 ***\n#> agegrp(20,30] -7.2586 1.5704 72.9807 -4.622 1.60e-05 ***\n#> agegrp(30,40] -3.4887 1.4510 64.2373 -2.404 0.0191 * \n#> trial 3.3150 0.2152 118.8662 15.401 < 2e-16 ***\n#> agegrp(20,30]:trial 3.7988 0.3229 118.8662 11.766 < 2e-16 ***\n#> agegrp(30,40]:trial 2.1433 0.2914 118.8662 7.354 2.68e-11 ***\n#> ---\n#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n#> \n#> Correlation of Fixed Effects:\n#> (Intr) ag(20,30] ag(30,40] trial a(20,30]:\n#> aggr(20,30] -0.707 \n#> aggr(30,40] -0.765 0.617 \n#> trial -0.582 0.411 0.445 \n#> agg(20,30]: 0.388 -0.617 -0.297 -0.667 \n#> agg(30,40]: 0.430 -0.304 -0.603 -0.739 0.492\n\ndat2 <- rio::import(\"https://www.stat.cmu.edu/~hseltman/309/Book/data/schools.txt\")\ndat2 <- dat2 %>% \n mutate(grade =factor(grade, levels=c(5,3)), \n treatment = factor(treatment, levels=c(1,0)))\n\nm2 <- lmer(score ~ grade + treatment + stdTest + (1|classroom), data=dat2)\nsummary(m2)\n#> Linear mixed model fit by REML. t-tests use Satterthwaite's method [\n#> lmerModLmerTest]\n#> Formula: score ~ grade + treatment + stdTest + (1 | classroom)\n#> Data: dat2\n#> \n#> REML criterion at convergence: 3023.7\n#> \n#> Scaled residuals: \n#> Min 1Q Median 3Q Max \n#> -3.02847 -0.68306 0.03838 0.64510 2.94562 \n#> \n#> Random effects:\n#> Groups Name Variance Std.Dev.\n#> classroom (Intercept) 10.05 3.170 \n#> Residual 25.87 5.086 \n#> Number of obs: 490, groups: classroom, 20\n#> \n#> Fixed effects:\n#> Estimate Std. Error df t value Pr(>|t|) \n#> (Intercept) -23.0943 6.8025 15.9160 -3.395 0.003722 ** \n#> grade3 -5.9424 1.6566 16.0861 -3.587 0.002447 ** \n#> treatment0 1.7941 1.6351 16.0676 1.097 0.288698 \n#> stdTest 0.4438 0.0879 15.8672 5.049 0.000122 ***\n#> ---\n#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n#> \n#> Correlation of Fixed Effects:\n#> (Intr) grade3 trtmn0\n#> grade3 -0.246 \n#> treatment0 0.007 -0.410 \n#> stdTest -0.985 0.179 -0.079\n\nCreated on 2022-12-04 by the reprex package (v2.0.1)\nOne thing to note is that the default behaviour in R is to have the first level of the factor be the reference. In the book examples, the last level was the reference, so you have to make that explicit in the data managing as above.\n" ]
[ 0 ]
[]
[]
[ "lme4", "mixed_models", "r" ]
stackoverflow_0074679781_lme4_mixed_models_r.txt
Q: Post Spring Boot 3 update - Unable to instantiate factory class [org.springframework.cloud.config.client.ConfigServerConfigDataLocationResolver] I've updated backend service to use latest Spring Boot 3, however after update, SB application fails due to some issues with setting up deferred logs during config load. Could anyone help with the idea what might be wrong, below is the stack trace: java.lang.IllegalArgumentException: Unable to instantiate factory class [org.springframework.cloud.config.client.ConfigServerConfigDataLocationResolver] for factory type [org.springframework.boot.context.config.ConfigDataLocationResolver] at org.springframework.core.io.support.SpringFactoriesLoader$FailureHandler.lambda$throwing$0(SpringFactoriesLoader.java:650) at org.springframework.core.io.support.SpringFactoriesLoader$FailureHandler.lambda$handleMessage$3(SpringFactoriesLoader.java:674) at org.springframework.core.io.support.SpringFactoriesLoader.instantiateFactory(SpringFactoriesLoader.java:231) at org.springframework.core.io.support.SpringFactoriesLoader.load(SpringFactoriesLoader.java:206) at org.springframework.core.io.support.SpringFactoriesLoader.load(SpringFactoriesLoader.java:160) at org.springframework.boot.context.config.ConfigDataLocationResolvers.<init>(ConfigDataLocationResolvers.java:66) at org.springframework.boot.context.config.ConfigDataEnvironment.createConfigDataLocationResolvers(ConfigDataEnvironment.java:160) at org.springframework.boot.context.config.ConfigDataEnvironment.<init>(ConfigDataEnvironment.java:148) at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.getConfigDataEnvironment(ConfigDataEnvironmentPostProcessor.java:101) at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:96) at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:89) at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEnvironmentPreparedEvent(EnvironmentPostProcessorApplicationListener.java:109) at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEvent(EnvironmentPostProcessorApplicationListener.java:94) at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176) at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:131) at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136) at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:81) at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:64) at java.base/java.lang.Iterable.forEach(Iterable.java:75) at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118) at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112) at 
org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:63) at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:352) at org.springframework.boot.SpringApplication.run(SpringApplication.java:303) at org.springframework.boot.test.context.SpringBootContextLoader.lambda$loadContext$3(SpringBootContextLoader.java:137) at org.springframework.util.function.ThrowingSupplier.get(ThrowingSupplier.java:59) at org.springframework.util.function.ThrowingSupplier.get(ThrowingSupplier.java:47) at org.springframework.boot.SpringApplication.withHook(SpringApplication.java:1386) at org.springframework.boot.test.context.SpringBootContextLoader$ContextLoaderHook.run(SpringBootContextLoader.java:543) at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:137) at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:108) at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:183) at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117) at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:127) at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:192) at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:131) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:249) at org.springframework.test.context.junit.jupiter.SpringExtension.postProcessTestInstance(SpringExtension.java:138) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$10(ClassBasedTestDescriptor.java:377) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.executeAndMaskThrowable(ClassBasedTestDescriptor.java:382) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$11(ClassBasedTestDescriptor.java:377) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:310) at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:762) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestInstancePostProcessors(ClassBasedTestDescriptor.java:376) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$instantiateAndPostProcessTestInstance$6(ClassBasedTestDescriptor.java:289) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.instantiateAndPostProcessTestInstance(ClassBasedTestDescriptor.java:288) 
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$4(ClassBasedTestDescriptor.java:278) at java.base/java.util.Optional.orElseGet(Optional.java:364) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$5(ClassBasedTestDescriptor.java:277) at org.junit.jupiter.engine.execution.TestInstancesProvider.getTestInstances(TestInstancesProvider.java:31) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$before$2(ClassBasedTestDescriptor.java:203) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:202) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:84) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90) at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86) at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53) at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53) Caused by: java.lang.IllegalArgumentException: Log types cannot be injected, please use DeferredLogFactory at org.springframework.boot.context.config.ConfigDataLocationResolvers.lambda$new$0(ConfigDataLocationResolvers.java:64) at org.springframework.core.io.support.SpringFactoriesLoader$ArgumentResolver.lambda$ofSupplied$3(SpringFactoriesLoader.java:585) at org.springframework.core.io.support.SpringFactoriesLoader$ArgumentResolver$1.resolve(SpringFactoriesLoader.java:601) at org.springframework.core.io.support.SpringFactoriesLoader$ArgumentResolver.lambda$and$0(SpringFactoriesLoader.java:551) at org.springframework.core.io.support.SpringFactoriesLoader$ArgumentResolver$1.resolve(SpringFactoriesLoader.java:601) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616) at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622) at org.springframework.core.io.support.SpringFactoriesLoader$FactoryInstantiator.resolveArgs(SpringFactoriesLoader.java:387) at org.springframework.core.io.support.SpringFactoriesLoader$FactoryInstantiator.instantiate(SpringFactoriesLoader.java:377) at org.springframework.core.io.support.SpringFactoriesLoader.instantiateFactory(SpringFactoriesLoader.java:228) ... 95 common frames omitted I upgraded the Spring Boot version and performed the other needed changes (updated the Checkstyle plugin, moved to Jakarta EE, replaced deprecations). The code compiles; however, the Spring context does not start up because of the above-mentioned issue. A: Using Spring Cloud 2022.0.0-RC2 should fix the issue.
Post Spring Boot 3 update - Unable to instantiate factory class [org.springframework.cloud.config.client.ConfigServerConfigDataLocationResolver]
I've updated a backend service to use the latest Spring Boot 3; however, after the update, the Spring Boot application fails due to issues with setting up deferred logs during config load. Could anyone suggest what might be wrong? The stack trace is below: java.lang.IllegalArgumentException: Unable to instantiate factory class [org.springframework.cloud.config.client.ConfigServerConfigDataLocationResolver] for factory type [org.springframework.boot.context.config.ConfigDataLocationResolver] at org.springframework.core.io.support.SpringFactoriesLoader$FailureHandler.lambda$throwing$0(SpringFactoriesLoader.java:650) at org.springframework.core.io.support.SpringFactoriesLoader$FailureHandler.lambda$handleMessage$3(SpringFactoriesLoader.java:674) at org.springframework.core.io.support.SpringFactoriesLoader.instantiateFactory(SpringFactoriesLoader.java:231) at org.springframework.core.io.support.SpringFactoriesLoader.load(SpringFactoriesLoader.java:206) at org.springframework.core.io.support.SpringFactoriesLoader.load(SpringFactoriesLoader.java:160) at org.springframework.boot.context.config.ConfigDataLocationResolvers.<init>(ConfigDataLocationResolvers.java:66) at org.springframework.boot.context.config.ConfigDataEnvironment.createConfigDataLocationResolvers(ConfigDataEnvironment.java:160) at org.springframework.boot.context.config.ConfigDataEnvironment.<init>(ConfigDataEnvironment.java:148) at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.getConfigDataEnvironment(ConfigDataEnvironmentPostProcessor.java:101) at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:96) at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:89) at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEnvironmentPreparedEvent(EnvironmentPostProcessorApplicationListener.java:109) at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEvent(EnvironmentPostProcessorApplicationListener.java:94) at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176) at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:131) at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136) at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:81) at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:64) at java.base/java.lang.Iterable.forEach(Iterable.java:75) at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118) at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112) at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:63) at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:352) 
at org.springframework.boot.SpringApplication.run(SpringApplication.java:303) at org.springframework.boot.test.context.SpringBootContextLoader.lambda$loadContext$3(SpringBootContextLoader.java:137) at org.springframework.util.function.ThrowingSupplier.get(ThrowingSupplier.java:59) at org.springframework.util.function.ThrowingSupplier.get(ThrowingSupplier.java:47) at org.springframework.boot.SpringApplication.withHook(SpringApplication.java:1386) at org.springframework.boot.test.context.SpringBootContextLoader$ContextLoaderHook.run(SpringBootContextLoader.java:543) at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:137) at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:108) at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:183) at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117) at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:127) at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:192) at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:131) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:249) at org.springframework.test.context.junit.jupiter.SpringExtension.postProcessTestInstance(SpringExtension.java:138) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$10(ClassBasedTestDescriptor.java:377) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.executeAndMaskThrowable(ClassBasedTestDescriptor.java:382) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$11(ClassBasedTestDescriptor.java:377) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:310) at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:762) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestInstancePostProcessors(ClassBasedTestDescriptor.java:376) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$instantiateAndPostProcessTestInstance$6(ClassBasedTestDescriptor.java:289) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.instantiateAndPostProcessTestInstance(ClassBasedTestDescriptor.java:288) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$4(ClassBasedTestDescriptor.java:278) at java.base/java.util.Optional.orElseGet(Optional.java:364) at 
org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$5(ClassBasedTestDescriptor.java:277) at org.junit.jupiter.engine.execution.TestInstancesProvider.getTestInstances(TestInstancesProvider.java:31) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$before$2(ClassBasedTestDescriptor.java:203) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:202) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:84) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102) at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86) at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53) at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53) Caused by: java.lang.IllegalArgumentException: Log types cannot be injected, please use DeferredLogFactory at org.springframework.boot.context.config.ConfigDataLocationResolvers.lambda$new$0(ConfigDataLocationResolvers.java:64) at org.springframework.core.io.support.SpringFactoriesLoader$ArgumentResolver.lambda$ofSupplied$3(SpringFactoriesLoader.java:585) at org.springframework.core.io.support.SpringFactoriesLoader$ArgumentResolver$1.resolve(SpringFactoriesLoader.java:601) at org.springframework.core.io.support.SpringFactoriesLoader$ArgumentResolver.lambda$and$0(SpringFactoriesLoader.java:551) at org.springframework.core.io.support.SpringFactoriesLoader$ArgumentResolver$1.resolve(SpringFactoriesLoader.java:601) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616) at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622) at org.springframework.core.io.support.SpringFactoriesLoader$FactoryInstantiator.resolveArgs(SpringFactoriesLoader.java:387) at org.springframework.core.io.support.SpringFactoriesLoader$FactoryInstantiator.instantiate(SpringFactoriesLoader.java:377) at org.springframework.core.io.support.SpringFactoriesLoader.instantiateFactory(SpringFactoriesLoader.java:228) ... 95 common frames omitted I upgraded the Spring Boot version and performed the other needed changes (updated the Checkstyle plugin, moved to Jakarta EE, replaced deprecations). The code compiles; however, the Spring context does not start up because of the above-mentioned issue.
[ "Using Spring Cloud 2022.0.0-RC2 should fix the issue.\n" ]
[ 0 ]
[]
[]
[ "java", "logging", "spring_boot", "spring_boot_3" ]
stackoverflow_0074629444_java_logging_spring_boot_spring_boot_3.txt
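For context on the fix above: the underlying error ("Log types cannot be injected, please use DeferredLogFactory") means Spring Boot 3 no longer injects plain Apache Commons Log instances into ConfigDataLocationResolver implementations, and Spring Cloud Config's resolver was only adapted to this in the Spring Cloud 2022.0.x release train. A minimal sketch of pinning that release train in a Maven build, assuming the standard spring-cloud-dependencies BOM coordinates (the milestone repository block is an assumption that applies only while 2022.0.0-RC2 is a pre-release not yet on Maven Central):

```xml
<!-- Sketch: align Spring Cloud with Spring Boot 3 by importing the 2022.0.x BOM. -->
<properties>
    <spring-cloud.version>2022.0.0-RC2</spring-cloud.version>
</properties>

<dependencyManagement>
    <dependencies>
        <!-- Manages spring-cloud-config-client and friends at Boot 3-compatible versions -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<repositories>
    <!-- Pre-release Spring Cloud artifacts are published to the Spring milestone repo -->
    <repository>
        <id>spring-milestones</id>
        <url>https://repo.spring.io/milestone</url>
    </repository>
</repositories>
```

Once a GA 2022.0.x release is available, the same property can point at it and the milestone repository can be dropped.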
Q: Problem using xng-breadcrumb 9.0.0 library, application cannot bootstrap I ran into a problem after installing xng-breadcrumb 9.0.0 for the first time. After ng serve, my app cannot start: TypeError: Cannot create property 'message' on string 'C:\Users\...\node_modules\xng-breadcrumb\fesm2015\xng-breadcrumb.mjs: This application depends upon a library published using Angular version 14.0.0, which requires Angular version 14.0.0 or newer to work correctly. Screenshot from terminal: It seems that I have too old a version of Angular, although I don't think it is old: Is this problem related to version compatibility? What else could cause this issue, and how can I solve it? I tried installing an older version of xng-breadcrumb, with the same result. I should add that I updated Angular and Node.js today, in case that matters. Thanks for helping a newbie. Edit: I believe everything is updated now: But after that I ran into other problems (I don't know whether screenshots are more acceptable than pieces of code): The first issue I fixed this way: // @import "~bootstrap/scss/bootstrap"; @import "node_modules/bootstrap/scss/bootstrap"; But the second problem remains: Property 'get' in type 'ToastInjector' is not assignable to the same property in base type 'Injector'. A: The solution was to update @angular-devkit/build-angular and @angular-devkit/core using the commands: npm install @angular-devkit/core ng update @angular-devkit/build-angular
Problem using xng-breadcrumb 9.0.0 library, application cannot bootstrap
I ran into a problem after installing xng-breadcrumb 9.0.0 for the first time. After ng serve, my app cannot start: TypeError: Cannot create property 'message' on string 'C:\Users\...\node_modules\xng-breadcrumb\fesm2015\xng-breadcrumb.mjs: This application depends upon a library published using Angular version 14.0.0, which requires Angular version 14.0.0 or newer to work correctly. Screenshot from terminal: It seems that I have too old a version of Angular, although I don't think it is old: Is this problem related to version compatibility? What else could cause this issue, and how can I solve it? I tried installing an older version of xng-breadcrumb, with the same result. I should add that I updated Angular and Node.js today, in case that matters. Thanks for helping a newbie. Edit: I believe everything is updated now: But after that I ran into other problems (I don't know whether screenshots are more acceptable than pieces of code): The first issue I fixed this way: // @import "~bootstrap/scss/bootstrap"; @import "node_modules/bootstrap/scss/bootstrap"; But the second problem remains: Property 'get' in type 'ToastInjector' is not assignable to the same property in base type 'Injector'.
[ "The solution was to update @angular-devkit/build-angular and @angular-devkit/core by using commannds:\n\nnpm install @angular-devkit/core\nng update @angular-devkit/build-angular\n\n" ]
[ 0 ]
[]
[]
[ "angular", "node.js", "xng_breadcrumb" ]
stackoverflow_0074672006_angular_node.js_xng_breadcrumb.txt
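For the breadcrumb record above, the error "This application depends upon a library published using Angular version 14.0.0" generally means the workspace's own Angular packages are older than the version the library was compiled against, which is also why updating the devkit packages resolved it. A hedged sketch of the full update sequence, assuming an Angular CLI workspace (the exact packages ng update flags as outdated will vary per project):

```sh
# Inspect the workspace's current Angular package versions
ng version

# Migrate the core framework and CLI together (ng update runs the official schematics)
ng update @angular/core@14 @angular/cli@14

# Then bring the build toolchain in line, as in the accepted answer
npm install @angular-devkit/core
ng update @angular-devkit/build-angular

# Finally, reinstall the library against the updated workspace
npm install xng-breadcrumb@9
```

Running ng update with no arguments first is a safe way to see which of these steps a given workspace actually needs.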